Red Hat Bugzilla – Attachment 1472780 Details for Bug 1597666 – [Update] Support ODL L2 minor update with OSP
Description: odl minor update - console output
Filename: odl minor update.txt
MIME Type: text/plain
Creator: Noam Manos
Created: 2018-08-02 16:34:47 UTC
Size: 1.01 MB
[stack@undercloud-0 ~]$
[stack@undercloud-0 ~]$ ./minor_update.sh
Installed: /etc/yum.repos.d/rhos-release-rhel-7.5.repo
Installed: /etc/yum.repos.d/rhos-release-ceph-3.repo
Installed: /etc/yum.repos.d/rhos-release-ceph-osd-3.repo
Installed: /etc/yum.repos.d/rhos-release-13.repo
# rhos-release 13 -p 2018-07-30.2
Installed: /etc/yum.repos.d/rhos-release-13.repo
Loaded plugins: search-disabled-repos
rhelosp-13.0-image-build-override | 2.9 kB 00:00:00
rhelosp-13.0-optools-puddle | 1.2 kB 00:00:00
rhelosp-13.0-puddle | 1.3 kB 00:00:00
rhelosp-ceph-3.0-mon | 4.0 kB 00:00:00
rhelosp-ceph-3.0-osd | 4.0 kB 00:00:00
rhelosp-ceph-3.0-tools | 4.0 kB 00:00:00
rhelosp-rhel-7.5-extras | 3.4 kB 00:00:00
rhelosp-rhel-7.5-ha | 3.4 kB 00:00:00
rhelosp-rhel-7.5-image-build-override | 2.9 kB 00:00:00
rhelosp-rhel-7.5-server | 3.5 kB 00:00:00
rhos-release | 2.9 kB 00:00:00
rhos-release-extras | 2.9 kB 00:00:00
rhelosp-13.0-puddle/x86_64/primary | 213 kB 00:00:00
rhelosp-13.0-puddle 821/821
Resolving Dependencies
--> Running transaction check
---> Package instack-undercloud.noarch 0:8.4.1-5.el7ost will be updated
---> Package instack-undercloud.noarch 0:8.4.3-3.el7ost will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package Arch Version Repository Size
================================================================================
Updating:
 instack-undercloud noarch 8.4.3-3.el7ost rhelosp-13.0-puddle 95 k

Transaction Summary
================================================================================
Upgrade 1 Package

Total download size: 95 k
Downloading packages:
No Presto metadata available for rhelosp-13.0-puddle
instack-undercloud-8.4.3-3.el7ost.noarch.rpm | 95 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating : instack-undercloud-8.4.3-3.el7ost.noarch 1/2
  Cleanup : instack-undercloud-8.4.1-5.el7ost.noarch 2/2
  Verifying : instack-undercloud-8.4.3-3.el7ost.noarch 1/2
  Verifying : instack-undercloud-8.4.1-5.el7ost.noarch 2/2

Updated:
  instack-undercloud.noarch 0:8.4.3-3.el7ost

Complete!
2018-08-02 10:42:57,578 INFO: Stopping OpenStack and related services

Broadcast message from systemd-journald@undercloud-0.redhat.local (Thu 2018-08-02 10:43:06 EDT):

haproxy[24084]: proxy glance_api has no server available!

Message from syslogd@undercloud-0 at Aug 2 10:43:06 ...
 haproxy[24084]:proxy glance_api has no server available!

Broadcast message from systemd-journald@undercloud-0.redhat.local (Thu 2018-08-02 10:43:06 EDT):

haproxy[24084]: proxy swift_proxy_server has no server available!

Message from syslogd@undercloud-0 at Aug 2 10:43:06 ...
 haproxy[24084]:proxy swift_proxy_server has no server available!

Broadcast message from systemd-journald@undercloud-0.redhat.local (Thu 2018-08-02 10:43:06 EDT):

haproxy[24084]: proxy ironic-inspector has no server available!

Broadcast message from systemd-journald@undercloud-0.redhat.local (Thu 2018-08-02 10:43:06 EDT):

haproxy[24084]: proxy zaqar_ws has no server available!

Broadcast message from systemd-journald@undercloud-0.redhat.local (Thu 2018-08-02 10:43:06 EDT):

haproxy[24084]: proxy ui has no server available!

Message from syslogd@undercloud-0 at Aug 2 10:43:06 ...
 haproxy[24084]:proxy ironic-inspector has no server available!

Message from syslogd@undercloud-0 at Aug 2 10:43:06 ...
 haproxy[24084]:proxy zaqar_ws has no server available!

Message from syslogd@undercloud-0 at Aug 2 10:43:06 ...
 haproxy[24084]:proxy ui has no server available!

Broadcast message from systemd-journald@undercloud-0.redhat.local (Thu 2018-08-02 10:43:06 EDT):

haproxy[24084]: proxy nova_metadata has no server available!

Message from syslogd@undercloud-0 at Aug 2 10:43:06 ...
 haproxy[24084]:proxy nova_metadata has no server available!

Broadcast message from systemd-journald@undercloud-0.redhat.local (Thu 2018-08-02 10:43:07 EDT):

haproxy[24084]: proxy ironic has no server available!

Broadcast message from systemd-journald@undercloud-0.redhat.local (Thu 2018-08-02 10:43:07 EDT):

haproxy[24084]: proxy heat_api has no server available!

Message from syslogd@undercloud-0 at Aug 2 10:43:07 ...
 haproxy[24084]:proxy ironic has no server available!

Message from syslogd@undercloud-0 at Aug 2 10:43:07 ...
 haproxy[24084]:proxy heat_api has no server available!

Broadcast message from systemd-journald@undercloud-0.redhat.local (Thu 2018-08-02 10:43:07 EDT):

haproxy[24084]: proxy nova_placement has no server available!

Message from syslogd@undercloud-0 at Aug 2 10:43:07 ...
 haproxy[24084]:proxy nova_placement has no server available!

Broadcast message from systemd-journald@undercloud-0.redhat.local (Thu 2018-08-02 10:43:07 EDT):

haproxy[24084]: proxy keystone_public has no server available!

Message from syslogd@undercloud-0 at Aug 2 10:43:07 ...
 haproxy[24084]:proxy keystone_public has no server available!

Broadcast message from systemd-journald@undercloud-0.redhat.local (Thu 2018-08-02 10:43:07 EDT):

haproxy[24084]: proxy zaqar_api has no server available!

Message from syslogd@undercloud-0 at Aug 2 10:43:07 ...
 haproxy[24084]:proxy zaqar_api has no server available!

Broadcast message from systemd-journald@undercloud-0.redhat.local (Thu 2018-08-02 10:43:07 EDT):

haproxy[24084]: proxy mistral has no server available!

Message from syslogd@undercloud-0 at Aug 2 10:43:07 ...
 haproxy[24084]:proxy mistral has no server available!

Broadcast message from systemd-journald@undercloud-0.redhat.local (Thu 2018-08-02 10:43:07 EDT):

haproxy[24084]: proxy keystone_admin has no server available!

Message from syslogd@undercloud-0 at Aug 2 10:43:07 ...
 haproxy[24084]:proxy keystone_admin has no server available!

Broadcast message from systemd-journald@undercloud-0.redhat.local (Thu 2018-08-02 10:43:07 EDT):

haproxy[24084]: proxy nova_osapi has no server available!

Message from syslogd@undercloud-0 at Aug 2 10:43:07 ...
 haproxy[24084]:proxy nova_osapi has no server available!

Broadcast message from systemd-journald@undercloud-0.redhat.local (Thu 2018-08-02 10:43:15 EDT):

haproxy[24084]: proxy neutron has no server available!

Message from syslogd@undercloud-0 at Aug 2 10:43:15 ...
 haproxy[24084]:proxy neutron has no server available!

2018-08-02 10:43:16,548 INFO: Services stopped successfully
2018-08-02 10:43:16,549 INFO: Running Nova online data migration
2018-08-02 10:43:20,954 INFO: Nova online data migration completed
2018-08-02 10:43:20,954 INFO: Installing Ansible Pacemaker module
2018-08-02 10:43:21,233 INFO: Loaded plugins: search-disabled-repos
2018-08-02 10:43:21,813 INFO: Resolving Dependencies
2018-08-02 10:43:21,813 INFO: --> Running transaction check
2018-08-02 10:43:21,814 INFO: ---> Package ansible-pacemaker.noarch 0:1.0.4-0.20180220234310.0e4d7c0.el7ost will be installed
2018-08-02 10:43:22,317 INFO: --> Finished Dependency Resolution
2018-08-02 10:43:22,607 INFO:
2018-08-02 10:43:22,607 INFO: Dependencies Resolved
2018-08-02 10:43:22,609 INFO:
2018-08-02 10:43:22,609 INFO: ================================================================================
2018-08-02 10:43:22,610 INFO: Package Arch Version Repository Size
2018-08-02 10:43:22,610 INFO: ================================================================================
2018-08-02 10:43:22,610 INFO: Installing:
2018-08-02 10:43:22,610 INFO: ansible-pacemaker
2018-08-02 10:43:22,611 INFO: noarch 1.0.4-0.20180220234310.0e4d7c0.el7ost rhelosp-13.0-puddle 20 k
2018-08-02 10:43:22,611 INFO:
2018-08-02 10:43:22,611 INFO: Transaction Summary
2018-08-02 10:43:22,611 INFO: ================================================================================
2018-08-02 10:43:22,612 INFO: Install 1 Package
2018-08-02 10:43:22,612 INFO:
2018-08-02 10:43:22,612 INFO: Total download size: 20 k
2018-08-02 10:43:22,612 INFO: Installed size: 56 k
2018-08-02 10:43:22,612 INFO: Downloading packages:
2018-08-02 10:43:23,167 INFO: Running transaction check
2018-08-02 10:43:23,188 INFO: Running transaction test
2018-08-02 10:43:23,258 INFO: Transaction test succeeded
2018-08-02 10:43:23,259 INFO: Running transaction
2018-08-02 10:43:23,401 INFO: Installing : ansible-pacemaker-1.0.4-0.20180220234310.0e4d7c0.el7ost.no 1/1
2018-08-02 10:43:23,598 INFO: Verifying : ansible-pacemaker-1.0.4-0.20180220234310.0e4d7c0.el7ost.no 1/1
2018-08-02 10:43:23,598 INFO:
2018-08-02 10:43:23,599 INFO: Installed:
2018-08-02 10:43:23,599 INFO: ansible-pacemaker.noarch 0:1.0.4-0.20180220234310.0e4d7c0.el7ost
2018-08-02 10:43:23,599 INFO:
2018-08-02 10:43:23,599 INFO: Complete!
2018-08-02 10:43:23,643 INFO: Ansible pacemaker install completed successfully
2018-08-02 10:43:23,713 INFO: Current mariadb version is: 10.1.20
2018-08-02 10:43:24,793 INFO: Available mariadb version is: (no new version available)
2018-08-02 10:43:24,793 INFO: Updating full system
2018-08-02 10:43:25,073 INFO: Loaded plugins: search-disabled-repos
2018-08-02 10:43:26,002 INFO: Resolving Dependencies
2018-08-02 10:43:26,002 INFO: --> Running transaction check
2018-08-02 10:43:26,006 INFO: ---> Package ansible.noarch 0:2.4.3.0-1.el7ae will be updated
2018-08-02 10:43:26,033 INFO: ---> Package ansible.noarch 0:2.4.6.0-1.el7ae will be an update
2018-08-02 10:43:26,039 INFO: ---> Package diskimage-builder.noarch 0:2.13.0-1.el7ost will be updated
2018-08-02 10:43:26,043 INFO: ---> Package diskimage-builder.noarch 0:2.16.0-1.el7ost will be an update
2018-08-02 10:43:26,053 INFO: ---> Package lttng-ust.x86_64 0:2.4.1-4.el7cp will be updated
2018-08-02 10:43:26,054 INFO: ---> Package lttng-ust.x86_64 0:2.4.1-5.el7 will be an update
2018-08-02 10:43:26,061 INFO: ---> Package microcode_ctl.x86_64 2:2.1-29.2.el7_5 will be updated
2018-08-02 10:43:26,061 INFO: ---> Package microcode_ctl.x86_64 2:2.1-29.10.el7_5 will be an update
2018-08-02 10:43:26,484 INFO: ---> Package openstack-glance.noarch 1:16.0.1-2.el7ost will be updated
2018-08-02 10:43:26,485 INFO: ---> Package openstack-glance.noarch 1:16.0.1-3.el7ost will be an update
2018-08-02 10:43:26,488 INFO: ---> Package openstack-heat-api.noarch 1:10.0.1-0.20180411125640.el7ost will be updated
2018-08-02 10:43:26,488 INFO: ---> Package openstack-heat-api.noarch 1:10.0.1-2.el7ost will be an update
2018-08-02 10:43:26,489 INFO: ---> Package openstack-heat-api-cfn.noarch 1:10.0.1-0.20180411125640.el7ost will be updated
2018-08-02 10:43:26,490 INFO: ---> Package openstack-heat-api-cfn.noarch 1:10.0.1-2.el7ost will be an update
2018-08-02 10:43:26,490 INFO: ---> Package openstack-heat-common.noarch 1:10.0.1-0.20180411125640.el7ost will be updated
2018-08-02 10:43:26,491 INFO: ---> Package openstack-heat-common.noarch 1:10.0.1-2.el7ost will be an update
2018-08-02 10:43:26,517 INFO: ---> Package openstack-heat-engine.noarch 1:10.0.1-0.20180411125640.el7ost will be updated
2018-08-02 10:43:26,518 INFO: ---> Package openstack-heat-engine.noarch 1:10.0.1-2.el7ost will be an update
2018-08-02 10:43:26,518 INFO: ---> Package openstack-ironic-api.noarch 1:10.1.2-4.el7ost will be updated
2018-08-02 10:43:26,518 INFO: ---> Package openstack-ironic-api.noarch 1:10.1.3-3.el7ost will be an update
2018-08-02 10:43:26,519 INFO: ---> Package openstack-ironic-common.noarch 1:10.1.2-4.el7ost will be updated
2018-08-02 10:43:26,520 INFO: ---> Package openstack-ironic-common.noarch 1:10.1.3-3.el7ost will be an update
2018-08-02 10:43:26,531 INFO: ---> Package openstack-ironic-conductor.noarch 1:10.1.2-4.el7ost will be updated
2018-08-02 10:43:26,532 INFO: ---> Package openstack-ironic-conductor.noarch 1:10.1.3-3.el7ost will be an update
2018-08-02 10:43:26,532 INFO: ---> Package openstack-ironic-inspector.noarch 0:7.2.1-0.20180409163360.el7ost will be updated
2018-08-02 10:43:26,533 INFO: ---> Package openstack-ironic-inspector.noarch 0:7.2.1-2.el7ost will be an update
2018-08-02 10:43:26,537 INFO: ---> Package openstack-mistral-api.noarch 0:6.0.2-1.el7ost will be updated
2018-08-02 10:43:26,537 INFO: ---> Package openstack-mistral-api.noarch 0:6.0.3-1.el7ost will be an update
2018-08-02 10:43:26,538 INFO: ---> Package openstack-mistral-common.noarch 0:6.0.2-1.el7ost will be updated
2018-08-02 10:43:26,539 INFO: ---> Package openstack-mistral-common.noarch 0:6.0.3-1.el7ost will be an update
2018-08-02 10:43:26,540 INFO: ---> Package openstack-mistral-engine.noarch 0:6.0.2-1.el7ost will be updated
2018-08-02 10:43:26,540 INFO: ---> Package openstack-mistral-engine.noarch 0:6.0.3-1.el7ost will be an update
2018-08-02 10:43:26,541 INFO: ---> Package openstack-mistral-executor.noarch 0:6.0.2-1.el7ost will be updated
2018-08-02 10:43:26,541 INFO: ---> Package openstack-mistral-executor.noarch 0:6.0.3-1.el7ost will be an update
2018-08-02 10:43:26,541 INFO: ---> Package openstack-neutron.noarch 1:12.0.2-0.20180421011364.0ec54fd.el7ost will be updated
2018-08-02 10:43:26,542 INFO: ---> Package openstack-neutron.noarch 1:12.0.3-2.el7ost will be an update
2018-08-02 10:43:26,549 INFO: ---> Package openstack-neutron-common.noarch 1:12.0.2-0.20180421011364.0ec54fd.el7ost will be updated
2018-08-02 10:43:26,550 INFO: ---> Package openstack-neutron-common.noarch 1:12.0.3-2.el7ost will be an update
2018-08-02 10:43:26,553 INFO: ---> Package openstack-neutron-ml2.noarch 1:12.0.2-0.20180421011364.0ec54fd.el7ost will be updated
2018-08-02 10:43:26,553 INFO: ---> Package openstack-neutron-ml2.noarch 1:12.0.3-2.el7ost will be an update
2018-08-02 10:43:26,554 INFO: ---> Package openstack-neutron-openvswitch.noarch 1:12.0.2-0.20180421011364.0ec54fd.el7ost will be updated
2018-08-02 10:43:26,554 INFO: ---> Package openstack-neutron-openvswitch.noarch 1:12.0.3-2.el7ost will be an update
2018-08-02 10:43:26,556 INFO: ---> Package openstack-nova-api.noarch 1:17.0.3-0.20180420001141.el7ost will be updated
2018-08-02 10:43:26,556 INFO: ---> Package openstack-nova-api.noarch 1:17.0.5-2.d7864fbgit.el7ost will be an update
2018-08-02 10:43:26,557 INFO: ---> Package openstack-nova-common.noarch 1:17.0.3-0.20180420001141.el7ost will be updated
2018-08-02 10:43:26,559 INFO: ---> Package openstack-nova-common.noarch 1:17.0.5-2.d7864fbgit.el7ost will be an update
2018-08-02 10:43:26,561 INFO: ---> Package openstack-nova-compute.noarch 1:17.0.3-0.20180420001141.el7ost will be updated
2018-08-02 10:43:26,561 INFO: ---> Package openstack-nova-compute.noarch 1:17.0.5-2.d7864fbgit.el7ost will be an update
2018-08-02 10:43:26,572 INFO: ---> Package openstack-nova-conductor.noarch 1:17.0.3-0.20180420001141.el7ost will be updated
2018-08-02 10:43:26,573 INFO: ---> Package openstack-nova-conductor.noarch 1:17.0.5-2.d7864fbgit.el7ost will be an update
2018-08-02 10:43:26,573 INFO: ---> Package openstack-nova-placement-api.noarch 1:17.0.3-0.20180420001141.el7ost will be updated
2018-08-02 10:43:26,573 INFO: ---> Package openstack-nova-placement-api.noarch 1:17.0.5-2.d7864fbgit.el7ost will be an update
2018-08-02 10:43:26,575 INFO: ---> Package openstack-nova-scheduler.noarch 1:17.0.3-0.20180420001141.el7ost will be updated
2018-08-02 10:43:26,575 INFO: ---> Package openstack-nova-scheduler.noarch 1:17.0.5-2.d7864fbgit.el7ost will be an update
2018-08-02 10:43:26,575 INFO: ---> Package openstack-selinux.noarch 0:0.8.14-12.el7ost will be updated
2018-08-02 10:43:26,576 INFO: ---> Package openstack-selinux.noarch 0:0.8.14-14.el7ost will be an update
2018-08-02 10:43:26,581 INFO: ---> Package openstack-tripleo-common.noarch 0:8.6.1-23.el7ost will be updated
2018-08-02 10:43:26,582 INFO: ---> Package openstack-tripleo-common.noarch 0:8.6.3-5.el7ost will be an update
2018-08-02 10:43:26,588 INFO: ---> Package openstack-tripleo-common-containers.noarch 0:8.6.1-23.el7ost will be updated
2018-08-02 10:43:26,588 INFO: ---> Package openstack-tripleo-common-containers.noarch 0:8.6.3-5.el7ost will be an update
2018-08-02 10:43:26,589 INFO: ---> Package openstack-tripleo-heat-templates.noarch 0:8.0.2-43.el7ost will be updated
2018-08-02 10:43:26,589 INFO: ---> Package openstack-tripleo-heat-templates.noarch 0:8.0.4-4.el7ost will be an update
2018-08-02 10:43:26,590 INFO: ---> Package openstack-tripleo-puppet-elements.noarch 0:8.0.0-2.el7ost will be updated
2018-08-02 10:43:26,590 INFO: ---> Package openstack-tripleo-puppet-elements.noarch 0:8.0.1-1.el7ost will be an update
2018-08-02 10:43:26,590 INFO: ---> Package openstack-tripleo-ui.noarch 0:8.3.1-3.el7ost will be updated
2018-08-02 10:43:26,591 INFO: ---> Package openstack-tripleo-ui.noarch 0:8.3.2-1.el7ost will be an update
2018-08-02 10:43:26,591 INFO: ---> Package openstack-tripleo-validations.noarch 0:8.4.1-5.el7ost will be updated
2018-08-02 10:43:26,591 INFO: ---> Package openstack-tripleo-validations.noarch 0:8.4.2-1.el7ost will be an update
2018-08-02 10:43:26,592 INFO: ---> Package openvswitch.x86_64 0:2.9.0-19.el7fdp.1 will be updated
2018-08-02 10:43:26,592 INFO: ---> Package openvswitch.x86_64 0:2.9.0-54.el7fdp will be an update
2018-08-02 10:43:26,601 INFO: --> Processing Dependency: openvswitch-selinux-extra-policy for package: openvswitch-2.9.0-54.el7fdp.x86_64
2018-08-02 10:43:26,605 INFO: ---> Package puppet-cinder.noarch 0:12.4.1-0.20180329071637.4011a82.el7ost will be updated
2018-08-02 10:43:26,607 INFO: ---> Package puppet-cinder.noarch 0:12.4.1-0.20180628102250.641e036.el7ost will be an update
2018-08-02 10:43:26,610 INFO: ---> Package puppet-glance.noarch 0:12.5.0-2.el7ost will be updated
2018-08-02 10:43:26,612 INFO: ---> Package puppet-glance.noarch 0:12.5.0-3.el7ost will be an update
2018-08-02 10:43:26,612 INFO: ---> Package puppet-keystone.noarch 0:12.4.0-0.20180329034741.b6d2197.el7ost will be updated
2018-08-02 10:43:26,617 INFO: ---> Package puppet-keystone.noarch 0:12.4.0-2.el7ost will be an update
2018-08-02 10:43:26,618 INFO: ---> Package puppet-manila.noarch 0:12.4.0-0.20180329035214.6c18418.el7ost will be updated
2018-08-02 10:43:26,619 INFO: ---> Package puppet-manila.noarch 0:12.4.0-2.el7ost will be an update
2018-08-02 10:43:26,619 INFO: ---> Package puppet-module-data.noarch 0:0.5.1-0.20180215133437.28dafce.el7ost will be updated
2018-08-02 10:43:26,620 INFO: ---> Package puppet-module-data.noarch 0:0.5.1-1.28dafcegit.el7ost will be an update
2018-08-02 10:43:26,620 INFO: ---> Package puppet-n1k-vsm.noarch 0:0.0.2-0.20180220020853.91772fa.el7ost will be updated
2018-08-02 10:43:26,621 INFO: ---> Package puppet-n1k-vsm.noarch 0:0.0.2-1.91772fagit.el7ost will be an update
2018-08-02 10:43:26,621 INFO: ---> Package puppet-neutron.noarch 0:12.4.1-0.20180412211913.el7ost will be updated
2018-08-02 10:43:26,622 INFO: ---> Package puppet-neutron.noarch 0:12.4.1-1.3aa3109git.el7ost will be an update
2018-08-02 10:43:26,622 INFO: ---> Package puppet-nova.noarch 0:12.4.0-3.el7ost will be updated
2018-08-02 10:43:26,623 INFO: ---> Package puppet-nova.noarch 0:12.4.0-6.el7ost will be an update
2018-08-02 10:43:26,624 INFO: ---> Package puppet-ntp.noarch 0:4.2.0-0.20180220021230.93da3bd.el7ost will be updated
2018-08-02 10:43:26,624 INFO: ---> Package puppet-ntp.noarch 0:4.2.0-2.el7ost will be an update
2018-08-02 10:43:26,625 INFO: ---> Package puppet-opendaylight.noarch 0:8.1.2-2.38977efgit.el7ost will be updated
2018-08-02 10:43:26,625 INFO: ---> Package puppet-opendaylight.noarch 0:8.2.2-2.9126c8dgit.el7ost will be an update
2018-08-02 10:43:26,626 INFO: ---> Package puppet-pacemaker.noarch 0:0.7.2-0.20180423212248.fee47ee.el7ost will be updated
2018-08-02 10:43:26,627 INFO: ---> Package puppet-pacemaker.noarch 0:0.7.2-0.20180423212250.el7ost will be an update
2018-08-02 10:43:26,627 INFO: ---> Package puppet-swift.noarch 0:12.4.0-0.20180329044944.1a67002.el7ost will be updated
2018-08-02 10:43:26,628 INFO: ---> Package puppet-swift.noarch 0:12.4.0-2.el7ost will be an update
2018-08-02 10:43:26,629 INFO: ---> Package puppet-sysctl.noarch 0:0.0.11-0.20180215112742.65ffe83.el7ost will be updated
2018-08-02 10:43:26,630 INFO: ---> Package puppet-sysctl.noarch 0:0.0.11-1.el7ost will be an update
2018-08-02 10:43:26,630 INFO: ---> Package puppet-timezone.noarch 0:4.1.1-0.20180216002204.32aa9f5.el7ost will be updated
2018-08-02 10:43:26,631 INFO: ---> Package puppet-timezone.noarch 0:4.1.1-1.el7ost will be an update
2018-08-02 10:43:26,631 INFO: ---> Package puppet-tripleo.noarch 0:8.3.2-8.el7ost will be updated
2018-08-02 10:43:26,631 INFO: ---> Package puppet-tripleo.noarch 0:8.3.4-3.el7ost will be an update
2018-08-02 10:43:26,648 INFO: ---> Package python-UcsSdk.noarch 0:0.8.2.5-0.20180215132206.bf6b07d.el7ost will be updated
2018-08-02 10:43:26,649 INFO: ---> Package python-UcsSdk.noarch 0:0.8.2.5-1.el7ost will be an update
2018-08-02 10:43:26,649 INFO: ---> Package python-amqp.noarch 0:2.1.4-2.el7ost will be obsoleted
2018-08-02 10:43:26,650 INFO: ---> Package python-glance.noarch 1:16.0.1-2.el7ost will be updated
2018-08-02 10:43:26,650 INFO: ---> Package python-glance.noarch 1:16.0.1-3.el7ost will be an update
2018-08-02 10:43:26,658 INFO: ---> Package python-mistral.noarch 0:6.0.2-1.el7ost will be updated
2018-08-02 10:43:26,658 INFO: ---> Package python-mistral.noarch 0:6.0.3-1.el7ost will be an update
2018-08-02 10:43:26,662 INFO: ---> Package python-neutron.noarch 1:12.0.2-0.20180421011364.0ec54fd.el7ost will be updated
2018-08-02 10:43:26,663 INFO: ---> Package python-neutron.noarch 1:12.0.3-2.el7ost will be an update
2018-08-02 10:43:26,668 INFO: ---> Package python-nova.noarch 1:17.0.3-0.20180420001141.el7ost will be updated
2018-08-02 10:43:26,668 INFO: ---> Package python-nova.noarch 1:17.0.5-2.d7864fbgit.el7ost will be an update
2018-08-02 10:43:26,674 INFO: ---> Package python-novaclient.noarch 1:9.1.1-1.el7ost will be obsoleted
2018-08-02 10:43:26,678 INFO: ---> Package python-openvswitch.noarch 0:2.9.0-19.el7fdp.1 will be updated
2018-08-02 10:43:26,679 INFO: ---> Package python-openvswitch.noarch 0:2.9.0-54.el7fdp will be an update
2018-08-02 10:43:26,679 INFO: ---> Package python-oslo-concurrency-lang.noarch 0:3.25.0-1.el7ost will be updated
2018-08-02 10:43:26,680 INFO: ---> Package python-oslo-concurrency-lang.noarch 0:3.25.1-1.el7ost will be an update
2018-08-02 10:43:26,680 INFO: ---> Package python-oslo-db-lang.noarch 0:4.33.0-2.el7ost will be updated
2018-08-02 10:43:26,680 INFO: ---> Package python-oslo-db-lang.noarch 0:4.33.1-1.el7ost will be an update
2018-08-02 10:43:26,681 INFO: ---> Package python-oslo-utils-lang.noarch 0:3.35.0-1.el7ost will be updated
2018-08-02 10:43:26,681 INFO: ---> Package python-oslo-utils-lang.noarch 0:3.35.1-1.el7ost will be an update
2018-08-02 10:43:26,681 INFO: ---> Package python-oslo-versionedobjects-lang.noarch 0:1.31.2-1.el7ost will be updated
2018-08-02 10:43:26,681 INFO: ---> Package python-oslo-versionedobjects-lang.noarch 0:1.31.3-1.el7ost will be an update
2018-08-02 10:43:26,682 INFO: ---> Package python-tripleoclient.noarch 0:9.2.1-13.el7ost will be updated
2018-08-02 10:43:26,682 INFO: ---> Package python-tripleoclient.noarch 0:9.2.3-2.el7ost will be an update
2018-08-02 10:43:26,686 INFO: ---> Package python2-amqp.noarch 0:2.3.2-3.el7ost will be obsoleting
2018-08-02 10:43:26,686 INFO: ---> Package python2-ironicclient.noarch 0:2.2.0-1.el7ost will be updated
2018-08-02 10:43:26,688 INFO: ---> Package python2-ironicclient.noarch 0:2.2.1-1.el7ost will be an update
2018-08-02 10:43:26,689 INFO: ---> Package python2-magnumclient.noarch 0:2.9.0-1.el7ost will be updated
2018-08-02 10:43:26,690 INFO: ---> Package python2-magnumclient.noarch 0:2.9.1-1.el7ost will be an update
2018-08-02 10:43:26,691 INFO: ---> Package python2-neutron-tests-tempest.noarch 0:0.0.1-0.20180419105837.f33d59b.el7ost will be updated
2018-08-02 10:43:26,692 INFO: ---> Package python2-neutron-tests-tempest.noarch 0:0.0.1-0.20180425142843.02a5e2b.el7ost will be an update
2018-08-02 10:43:26,695 INFO: ---> Package python2-novaclient.noarch 1:10.1.0-1.el7ost will be obsoleting
2018-08-02 10:43:26,696 INFO: ---> Package python2-os-brick.noarch 0:2.3.1-1.el7ost will be updated
2018-08-02 10:43:26,698 INFO: ---> Package python2-os-brick.noarch 0:2.3.2-1.el7ost will be an update
2018-08-02 10:43:26,699 INFO: ---> Package python2-oslo-concurrency.noarch 0:3.25.0-1.el7ost will be updated
2018-08-02 10:43:26,706 INFO: ---> Package python2-oslo-concurrency.noarch 0:3.25.1-1.el7ost will be an update
2018-08-02 10:43:26,707 INFO: ---> Package python2-oslo-db.noarch 0:4.33.0-2.el7ost will be updated
2018-08-02 10:43:26,712 INFO: ---> Package python2-oslo-db.noarch 0:4.33.1-1.el7ost will be an update
2018-08-02 10:43:26,714 INFO: ---> Package python2-oslo-utils.noarch 0:3.35.0-1.el7ost will be updated
2018-08-02 10:43:26,726 INFO: ---> Package python2-oslo-utils.noarch 0:3.35.1-1.el7ost will be an update
2018-08-02 10:43:26,727 INFO: ---> Package python2-oslo-versionedobjects.noarch 0:1.31.2-1.el7ost will be updated
2018-08-02 10:43:26,730 INFO: ---> Package python2-oslo-versionedobjects.noarch 0:1.31.3-1.el7ost will be an update
2018-08-02 10:43:26,731 INFO: ---> Package python2-tooz.noarch 0:1.60.0-1.el7ost will be updated
2018-08-02 10:43:26,733 INFO: ---> Package python2-tooz.noarch 0:1.60.1-1.el7ost will be an update
2018-08-02 10:43:26,735 INFO: ---> Package python2-wsme.noarch 0:0.9.2-0.20180219185555.9f84e4c.el7ost will be updated
2018-08-02 10:43:26,736 INFO: ---> Package python2-wsme.noarch 0:0.9.3-1.el7ost will be an update
2018-08-02 10:43:26,737 INFO: ---> Package yum-utils.noarch 0:1.1.31-45.el7 will be updated
2018-08-02 10:43:26,738 INFO: ---> Package yum-utils.noarch 0:1.1.31-46.el7_5 will be an update
2018-08-02 10:43:26,740 INFO: --> Running transaction check
2018-08-02 10:43:26,741 INFO: ---> Package openvswitch-selinux-extra-policy.noarch 0:1.0-5.el7fdp will be installed
2018-08-02 10:43:26,904 INFO: --> Finished Dependency Resolution
2018-08-02 10:43:27,197 INFO:
2018-08-02 10:43:27,197 INFO: Dependencies Resolved
2018-08-02 10:43:27,215 INFO:
2018-08-02 10:43:27,215 INFO: ================================================================================
2018-08-02 10:43:27,215 INFO: Package Arch Version Repository Size
2018-08-02 10:43:27,216 INFO: ================================================================================
2018-08-02 10:43:27,216 INFO: Installing:
2018-08-02 10:43:27,216 INFO: python2-amqp noarch 2.3.2-3.el7ost rhelosp-13.0-puddle 86 k
2018-08-02 10:43:27,216 INFO: replacing python-amqp.noarch 2.1.4-2.el7ost
2018-08-02 10:43:27,216 INFO: python2-novaclient noarch 1:10.1.0-1.el7ost rhelosp-13.0-puddle 202 k
2018-08-02 10:43:27,217 INFO: replacing python-novaclient.noarch 1:9.1.1-1.el7ost
2018-08-02 10:43:27,217 INFO: Updating:
2018-08-02 10:43:27,217 INFO: ansible noarch 2.4.6.0-1.el7ae rhelosp-13.0-puddle 7.6 M
2018-08-02 10:43:27,217 INFO: diskimage-builder noarch 2.16.0-1.el7ost rhelosp-13.0-puddle 547 k
2018-08-02 10:43:27,218 INFO: lttng-ust x86_64 2.4.1-5.el7 rhelosp-13.0-puddle 175 k
2018-08-02 10:43:27,218 INFO: microcode_ctl x86_64 2:2.1-29.10.el7_5 rhelosp-rhel-7.5-server
2018-08-02 10:43:27,218 INFO: 1.2 M
2018-08-02 10:43:27,218 INFO: openstack-glance noarch 1:16.0.1-3.el7ost rhelosp-13.0-puddle 76 k
2018-08-02 10:43:27,219 INFO: openstack-heat-api noarch 1:10.0.1-2.el7ost rhelosp-13.0-puddle 10 k
2018-08-02 10:43:27,219 INFO: openstack-heat-api-cfn noarch 1:10.0.1-2.el7ost rhelosp-13.0-puddle 10 k
2018-08-02 10:43:27,219 INFO: openstack-heat-common noarch 1:10.0.1-2.el7ost rhelosp-13.0-puddle 1.7 M
2018-08-02 10:43:27,219 INFO: openstack-heat-engine noarch 1:10.0.1-2.el7ost rhelosp-13.0-puddle 9.3 k
2018-08-02 10:43:27,220 INFO: openstack-ironic-api noarch 1:10.1.3-3.el7ost rhelosp-13.0-puddle 5.5 k
2018-08-02 10:43:27,220 INFO: openstack-ironic-common noarch 1:10.1.3-3.el7ost rhelosp-13.0-puddle 1.0 M
2018-08-02 10:43:27,220 INFO: openstack-ironic-conductor noarch 1:10.1.3-3.el7ost rhelosp-13.0-puddle 4.7 k
2018-08-02 10:43:27,220 INFO: openstack-ironic-inspector noarch 7.2.1-2.el7ost rhelosp-13.0-puddle 184 k
2018-08-02 10:43:27,220 INFO: openstack-mistral-api noarch 6.0.3-1.el7ost rhelosp-13.0-puddle 3.7 k
2018-08-02 10:43:27,221 INFO: openstack-mistral-common noarch 6.0.3-1.el7ost rhelosp-13.0-puddle 7.4 k
2018-08-02 10:43:27,221 INFO: openstack-mistral-engine noarch 6.0.3-1.el7ost rhelosp-13.0-puddle 3.8 k
2018-08-02 10:43:27,221 INFO: openstack-mistral-executor noarch 6.0.3-1.el7ost rhelosp-13.0-puddle 3.8 k
2018-08-02 10:43:27,221 INFO: openstack-neutron noarch 1:12.0.3-2.el7ost rhelosp-13.0-puddle 28 k
2018-08-02 10:43:27,222 INFO: openstack-neutron-common noarch 1:12.0.3-2.el7ost rhelosp-13.0-puddle 224 k
2018-08-02 10:43:27,222 INFO: openstack-neutron-ml2 noarch 1:12.0.3-2.el7ost rhelosp-13.0-puddle 14 k
2018-08-02 10:43:27,222 INFO: openstack-neutron-openvswitch
2018-08-02 10:43:27,222 INFO: noarch 1:12.0.3-2.el7ost rhelosp-13.0-puddle 17 k
2018-08-02 10:43:27,222 INFO: openstack-nova-api noarch 1:17.0.5-2.d7864fbgit.el7ost
2018-08-02 10:43:27,223 INFO: rhelosp-13.0-puddle 8.9 k
2018-08-02 10:43:27,223 INFO: openstack-nova-common noarch 1:17.0.5-2.d7864fbgit.el7ost
2018-08-02 10:43:27,223 INFO: rhelosp-13.0-puddle 295 k
2018-08-02 10:43:27,223 INFO: openstack-nova-compute noarch 1:17.0.5-2.d7864fbgit.el7ost
2018-08-02 10:43:27,224 INFO: rhelosp-13.0-puddle 8.9 k
2018-08-02 10:43:27,224 INFO: openstack-nova-conductor noarch 1:17.0.5-2.d7864fbgit.el7ost
2018-08-02 10:43:27,224 INFO: rhelosp-13.0-puddle 6.4 k
2018-08-02 10:43:27,225 INFO: openstack-nova-placement-api
2018-08-02 10:43:27,225 INFO: noarch 1:17.0.5-2.d7864fbgit.el7ost
2018-08-02 10:43:27,225 INFO: rhelosp-13.0-puddle 6.7 k
2018-08-02 10:43:27,226 INFO: openstack-nova-scheduler noarch 1:17.0.5-2.d7864fbgit.el7ost
2018-08-02 10:43:27,226 INFO: rhelosp-13.0-puddle 6.4 k
2018-08-02 10:43:27,226 INFO: openstack-selinux noarch 0.8.14-14.el7ost rhelosp-13.0-puddle 167 k
2018-08-02 10:43:27,226 INFO: openstack-tripleo-common noarch 8.6.3-5.el7ost rhelosp-13.0-puddle 274 k
2018-08-02 10:43:27,227 INFO: openstack-tripleo-common-containers
2018-08-02 10:43:27,227 INFO: noarch 8.6.3-5.el7ost rhelosp-13.0-puddle 13 k
2018-08-02 10:43:27,227 INFO: openstack-tripleo-heat-templates
2018-08-02 10:43:27,227 INFO: noarch 8.0.4-4.el7ost rhelosp-13.0-puddle 519 k
2018-08-02 10:43:27,227 INFO: openstack-tripleo-puppet-elements
2018-08-02 10:43:27,228 INFO: noarch 8.0.1-1.el7ost rhelosp-13.0-puddle 47 k
2018-08-02 10:43:27,228 INFO: openstack-tripleo-ui noarch 8.3.2-1.el7ost rhelosp-13.0-puddle 6.2 M
2018-08-02 10:43:27,228 INFO: openstack-tripleo-validations
2018-08-02 10:43:27,228 INFO: noarch 8.4.2-1.el7ost rhelosp-13.0-puddle 73 k
2018-08-02 10:43:27,229 INFO: openvswitch x86_64 2.9.0-54.el7fdp rhelosp-13.0-puddle 6.4 M
2018-08-02 10:43:27,229 INFO: puppet-cinder noarch 12.4.1-0.20180628102250.641e036.el7ost
2018-08-02 10:43:27,229 INFO: rhelosp-13.0-puddle 90 k
2018-08-02 10:43:27,229 INFO: puppet-glance noarch 12.5.0-3.el7ost rhelosp-13.0-puddle 72 k
2018-08-02 10:43:27,230 INFO: puppet-keystone noarch 12.4.0-2.el7ost rhelosp-13.0-puddle 114 k
2018-08-02 10:43:27,230 INFO: puppet-manila noarch 12.4.0-2.el7ost rhelosp-13.0-puddle 61 k
2018-08-02 10:43:27,230 INFO: puppet-module-data noarch 0.5.1-1.28dafcegit.el7ost
2018-08-02 10:43:27,230 INFO: rhelosp-13.0-puddle 7.3 k
2018-08-02 10:43:27,230 INFO: puppet-n1k-vsm noarch 0.0.2-1.91772fagit.el7ost
2018-08-02 10:43:27,231 INFO: rhelosp-13.0-puddle 17 k
2018-08-02 10:43:27,231 INFO: puppet-neutron noarch 12.4.1-1.3aa3109git.el7ost
2018-08-02 10:43:27,231 INFO: rhelosp-13.0-puddle 166 k
2018-08-02 10:43:27,231 INFO: puppet-nova noarch 12.4.0-6.el7ost rhelosp-13.0-puddle 150 k
2018-08-02 10:43:27,232 INFO: puppet-ntp noarch 4.2.0-2.el7ost rhelosp-13.0-puddle 24 k
2018-08-02 10:43:27,232 INFO: puppet-opendaylight noarch 8.2.2-2.9126c8dgit.el7ost
2018-08-02 10:43:27,232 INFO: rhelosp-13.0-puddle 28 k
2018-08-02 10:43:27,232 INFO: puppet-pacemaker noarch 0.7.2-0.20180423212250.el7ost
2018-08-02 10:43:27,233 INFO: rhelosp-13.0-puddle 149 k
2018-08-02 10:43:27,233 INFO: puppet-swift noarch 12.4.0-2.el7ost rhelosp-13.0-puddle 95 k
2018-08-02 10:43:27,233 INFO: puppet-sysctl noarch 0.0.11-1.el7ost rhelosp-13.0-puddle 8.6 k
2018-08-02 10:43:27,233 INFO: puppet-timezone noarch 4.1.1-1.el7ost rhelosp-13.0-puddle 13 k
2018-08-02 10:43:27,233 INFO: puppet-tripleo noarch 8.3.4-3.el7ost rhelosp-13.0-puddle 252 k
2018-08-02 10:43:27,234 INFO: python-UcsSdk noarch 0.8.2.5-1.el7ost rhelosp-13.0-puddle 2.9 M
2018-08-02 10:43:27,234 INFO: python-glance noarch 1:16.0.1-3.el7ost rhelosp-13.0-puddle 791 k
2018-08-02 10:43:27,234 INFO: python-mistral noarch 6.0.3-1.el7ost rhelosp-13.0-puddle 454 k
2018-08-02 10:43:27,235 INFO: python-neutron noarch 1:12.0.3-2.el7ost rhelosp-13.0-puddle 2.0 M
2018-08-02 10:43:27,235 INFO: python-nova noarch 1:17.0.5-2.d7864fbgit.el7ost
2018-08-02 10:43:27,235 INFO: rhelosp-13.0-puddle 3.5 M
2018-08-02 10:43:27,235 INFO: python-openvswitch noarch 2.9.0-54.el7fdp rhelosp-13.0-puddle 172 k
2018-08-02 10:43:27,235 INFO: python-oslo-concurrency-lang
2018-08-02 10:43:27,236 INFO: noarch 3.25.1-1.el7ost rhelosp-13.0-puddle 8.8 k
2018-08-02 10:43:27,236 INFO: python-oslo-db-lang noarch 4.33.1-1.el7ost rhelosp-13.0-puddle 8.6 k
2018-08-02 10:43:27,236 INFO: python-oslo-utils-lang noarch 3.35.1-1.el7ost rhelosp-13.0-puddle 8.0 k
2018-08-02 10:43:27,237 INFO: python-oslo-versionedobjects-lang
2018-08-02 10:43:27,237 INFO: noarch 1.31.3-1.el7ost rhelosp-13.0-puddle 7.5 k
2018-08-02 10:43:27,237 INFO: python-tripleoclient noarch 9.2.3-2.el7ost rhelosp-13.0-puddle 299 k
2018-08-02 10:43:27,238 INFO: python2-ironicclient noarch 2.2.1-1.el7ost rhelosp-13.0-puddle 390 k
2018-08-02 10:43:27,238 INFO: python2-magnumclient noarch 2.9.1-1.el7ost rhelosp-13.0-puddle 109 k
2018-08-02 10:43:27,238 INFO: python2-neutron-tests-tempest
2018-08-02 10:43:27,238 INFO: noarch 0.0.1-0.20180425142843.02a5e2b.el7ost
2018-08-02 10:43:27,239 INFO: rhelosp-13.0-puddle 240 k
2018-08-02 10:43:27,239 INFO: python2-os-brick noarch
2.3.2-1.el7ost rhelosp-13.0-puddle 1.1 M >2018-08-02 10:43:27,239 INFO: python2-oslo-concurrency noarch 3.25.1-1.el7ost rhelosp-13.0-puddle 35 k >2018-08-02 10:43:27,239 INFO: python2-oslo-db noarch 4.33.1-1.el7ost rhelosp-13.0-puddle 143 k >2018-08-02 10:43:27,240 INFO: python2-oslo-utils noarch 3.35.1-1.el7ost rhelosp-13.0-puddle 71 k >2018-08-02 10:43:27,240 INFO: python2-oslo-versionedobjects >2018-08-02 10:43:27,240 INFO: noarch 1.31.3-1.el7ost rhelosp-13.0-puddle 71 k >2018-08-02 10:43:27,240 INFO: python2-tooz noarch 1.60.1-1.el7ost rhelosp-13.0-puddle 96 k >2018-08-02 10:43:27,241 INFO: python2-wsme noarch 0.9.3-1.el7ost rhelosp-13.0-puddle 192 k >2018-08-02 10:43:27,241 INFO: yum-utils noarch 1.1.31-46.el7_5 rhelosp-rhel-7.5-server >2018-08-02 10:43:27,241 INFO: 120 k >2018-08-02 10:43:27,241 INFO: Installing for dependencies: >2018-08-02 10:43:27,242 INFO: openvswitch-selinux-extra-policy >2018-08-02 10:43:27,242 INFO: noarch 1.0-5.el7fdp rhelosp-13.0-puddle 7.3 k >2018-08-02 10:43:27,242 INFO: >2018-08-02 10:43:27,242 INFO: Transaction Summary >2018-08-02 10:43:27,242 INFO: ================================================================================ >2018-08-02 10:43:27,243 INFO: Install 2 Packages (+1 Dependent package) >2018-08-02 10:43:27,243 INFO: Upgrade 72 Packages >2018-08-02 10:43:27,243 INFO: >2018-08-02 10:43:27,243 INFO: Total download size: 41 M >2018-08-02 10:43:27,243 INFO: Downloading packages: >2018-08-02 10:43:27,243 INFO: No Presto metadata available for rhelosp-13.0-puddle >2018-08-02 10:43:27,243 INFO: No Presto metadata available for rhelosp-rhel-7.5-server >2018-08-02 10:43:45,330 INFO: -------------------------------------------------------------------------------- >2018-08-02 10:43:45,331 INFO: Total 2.3 MB/s | 41 MB 00:18 >2018-08-02 10:43:45,447 INFO: Running transaction check >2018-08-02 10:43:45,918 INFO: Running transaction test >2018-08-02 10:43:46,308 INFO: Transaction test succeeded >2018-08-02 10:43:46,309 INFO: 
Running transaction >2018-08-02 10:43:46,962 INFO: Updating : puppet-keystone-12.4.0-2.el7ost.noarch 1/149 >2018-08-02 10:43:46,984 INFO: Updating : puppet-glance-12.5.0-3.el7ost.noarch 2/149 >2018-08-02 10:43:47,150 INFO: Updating : puppet-sysctl-0.0.11-1.el7ost.noarch 3/149 >2018-08-02 10:43:47,303 INFO: Updating : python2-wsme-0.9.3-1.el7ost.noarch 4/149 >2018-08-02 10:43:47,548 INFO: Updating : puppet-cinder-12.4.1-0.20180628102250.641e036.el7ost.n 5/149 >2018-08-02 10:43:53,142 INFO: Updating : puppet-nova-12.4.0-6.el7ost.noarch 6/149 >2018-08-02 10:43:53,503 INFO: Updating : ansible-2.4.6.0-1.el7ae.noarch 7/149 >2018-08-02 10:43:53,622 INFO: Updating : puppet-neutron-12.4.1-1.3aa3109git.el7ost.noarch 8/149 >2018-08-02 10:43:53,776 INFO: Updating : puppet-manila-12.4.0-2.el7ost.noarch 9/149 >2018-08-02 10:43:53,793 INFO: Updating : puppet-swift-12.4.0-2.el7ost.noarch 10/149 >2018-08-02 10:43:53,816 INFO: Updating : python-oslo-db-lang-4.33.1-1.el7ost.noarch 11/149 >2018-08-02 10:43:53,830 INFO: Updating : puppet-timezone-4.1.1-1.el7ost.noarch 12/149 >2018-08-02 10:43:53,855 INFO: Updating : puppet-module-data-0.5.1-1.28dafcegit.el7ost.noarch 13/149 >2018-08-02 10:43:53,868 INFO: Updating : puppet-ntp-4.2.0-2.el7ost.noarch 14/149 >2018-08-02 10:43:53,886 INFO: Updating : openstack-tripleo-common-containers-8.6.3-5.el7ost.noa 15/149 >2018-08-02 10:43:53,897 INFO: Updating : puppet-n1k-vsm-0.0.2-1.91772fagit.el7ost.noarch 16/149 >2018-08-02 10:43:54,983 INFO: Updating : python-oslo-concurrency-lang-3.25.1-1.el7ost.noarch 17/149 >2018-08-02 10:43:55,027 INFO: Updating : python-UcsSdk-0.8.2.5-1.el7ost.noarch 18/149 >2018-08-02 10:47:40,649 INFO: Updating : openstack-selinux-0.8.14-14.el7ost.noarch 19/149 >2018-08-02 10:47:40,718 INFO: Updating : python-oslo-utils-lang-3.35.1-1.el7ost.noarch 20/149 >2018-08-02 10:47:40,762 INFO: Updating : python2-oslo-utils-3.35.1-1.el7ost.noarch 21/149 >2018-08-02 10:47:40,850 INFO: Updating : 
python2-oslo-concurrency-3.25.1-1.el7ost.noarch 22/149 >2018-08-02 10:47:40,997 INFO: Updating : python2-oslo-db-4.33.1-1.el7ost.noarch 23/149 >2018-08-02 10:47:41,316 INFO: Installing : 1:python2-novaclient-10.1.0-1.el7ost.noarch 24/149 >2018-08-02 10:47:41,409 INFO: Updating : python2-ironicclient-2.2.1-1.el7ost.noarch 25/149 >2018-08-02 10:47:41,675 INFO: Updating : python2-tooz-1.60.1-1.el7ost.noarch 26/149 >2018-08-02 10:47:42,010 INFO: Updating : openstack-tripleo-common-8.6.3-5.el7ost.noarch 27/149 >2018-08-02 10:47:42,125 INFO: Updating : python2-os-brick-2.3.2-1.el7ost.noarch 28/149 >2018-08-02 10:47:42,568 INFO: Updating : python2-magnumclient-2.9.1-1.el7ost.noarch 29/149 >2018-08-02 10:47:42,632 INFO: Updating : python-mistral-6.0.3-1.el7ost.noarch 30/149 >2018-08-02 10:47:43,204 INFO: Updating : openstack-mistral-common-6.0.3-1.el7ost.noarch 31/149 >2018-08-02 10:47:43,324 INFO: Updating : 1:python-glance-16.0.1-3.el7ost.noarch 32/149 >2018-08-02 10:47:43,364 INFO: Updating : python-openvswitch-2.9.0-54.el7fdp.noarch 33/149 >2018-08-02 10:47:43,610 INFO: Updating : puppet-opendaylight-8.2.2-2.9126c8dgit.el7ost.noarch 34/149 >2018-08-02 10:47:43,656 INFO: Updating : puppet-pacemaker-0.7.2-0.20180423212250.el7ost.noarch 35/149 >2018-08-02 10:48:10,124 INFO: Installing : openvswitch-selinux-extra-policy-1.0-5.el7fdp.noarch 36/149 >2018-08-02 10:48:10,161 INFO: Updating : openvswitch-2.9.0-54.el7fdp.x86_64 37/149 >2018-08-02 10:48:10,211 INFO: Updating : python-oslo-versionedobjects-lang-1.31.3-1.el7ost.noar 38/149 >2018-08-02 10:48:11,441 INFO: Updating : python2-oslo-versionedobjects-1.31.3-1.el7ost.noarch 39/149 >2018-08-02 10:48:12,192 INFO: Updating : 1:openstack-heat-common-10.0.1-2.el7ost.noarch 40/149 >2018-08-02 10:48:14,137 INFO: Updating : 1:openstack-ironic-common-10.1.3-3.el7ost.noarch 41/149 >2018-08-02 10:48:14,270 INFO: Updating : 1:python-neutron-12.0.3-2.el7ost.noarch 42/149 >2018-08-02 10:48:16,876 INFO: Updating : 
1:openstack-neutron-common-12.0.3-2.el7ost.noarch 43/149 >2018-08-02 10:48:17,034 INFO: Updating : 1:python-nova-17.0.5-2.d7864fbgit.el7ost.noarch 44/149 >2018-08-02 10:48:17,042 INFO: Updating : 1:openstack-nova-common-17.0.5-2.d7864fbgit.el7ost.noa 45/149 >2018-08-02 10:48:17,042 INFO: warning: /etc/nova/nova.conf created as /etc/nova/nova.conf.rpmnew >2018-08-02 10:48:17,079 INFO: Updating : 1:openstack-nova-conductor-17.0.5-2.d7864fbgit.el7ost. 46/149 >2018-08-02 10:48:17,134 INFO: Updating : 1:openstack-nova-api-17.0.5-2.d7864fbgit.el7ost.noarch 47/149 >2018-08-02 10:48:17,157 INFO: Updating : 1:openstack-nova-compute-17.0.5-2.d7864fbgit.el7ost.no 48/149 >2018-08-02 10:48:17,165 INFO: Updating : 1:openstack-nova-placement-api-17.0.5-2.d7864fbgit.el7 49/149 >2018-08-02 10:48:17,194 INFO: Updating : 1:openstack-nova-scheduler-17.0.5-2.d7864fbgit.el7ost. 50/149 >2018-08-02 10:48:17,340 INFO: Updating : 1:openstack-neutron-openvswitch-12.0.3-2.el7ost.noarch 51/149 >2018-08-02 10:48:17,381 INFO: Updating : 1:openstack-neutron-ml2-12.0.3-2.el7ost.noarch 52/149 >2018-08-02 10:48:17,413 INFO: Updating : 1:openstack-neutron-12.0.3-2.el7ost.noarch 53/149 >2018-08-02 10:48:17,437 INFO: Updating : 1:openstack-ironic-api-10.1.3-3.el7ost.noarch 54/149 >2018-08-02 10:48:17,465 INFO: Updating : 1:openstack-ironic-conductor-10.1.3-3.el7ost.noarch 55/149 >2018-08-02 10:48:17,494 INFO: Updating : 1:openstack-heat-api-10.0.1-2.el7ost.noarch 56/149 >2018-08-02 10:48:17,520 INFO: Updating : 1:openstack-heat-api-cfn-10.0.1-2.el7ost.noarch 57/149 >2018-08-02 10:48:17,940 INFO: Updating : 1:openstack-heat-engine-10.0.1-2.el7ost.noarch 58/149 >2018-08-02 10:48:18,032 INFO: Updating : puppet-tripleo-8.3.4-3.el7ost.noarch 59/149 >2018-08-02 10:48:18,058 INFO: Updating : 1:openstack-glance-16.0.1-3.el7ost.noarch 60/149 >2018-08-02 10:48:18,081 INFO: Updating : openstack-mistral-engine-6.0.3-1.el7ost.noarch 61/149 >2018-08-02 10:48:18,103 INFO: Updating : 
openstack-mistral-executor-6.0.3-1.el7ost.noarch 62/149 >2018-08-02 10:48:18,377 INFO: Updating : openstack-mistral-api-6.0.3-1.el7ost.noarch 63/149 >2018-08-02 10:48:19,123 INFO: Updating : python-tripleoclient-9.2.3-2.el7ost.noarch 64/149 >2018-08-02 10:48:19,363 INFO: Updating : openstack-tripleo-heat-templates-8.0.4-4.el7ost.noarch 65/149 >2018-08-02 10:48:19,597 INFO: Updating : openstack-ironic-inspector-7.2.1-2.el7ost.noarch 66/149 >2018-08-02 10:48:19,701 INFO: Updating : python2-neutron-tests-tempest-0.0.1-0.20180425142843.0 67/149 >2018-08-02 10:48:20,937 INFO: Updating : openstack-tripleo-validations-8.4.2-1.el7ost.noarch 68/149 >2018-08-02 10:48:21,010 INFO: Updating : openstack-tripleo-ui-8.3.2-1.el7ost.noarch 69/149 >2018-08-02 10:48:21,473 INFO: Updating : yum-utils-1.1.31-46.el7_5.noarch 70/149 >2018-08-02 10:48:23,743 INFO: Updating : 2:microcode_ctl-2.1-29.10.el7_5.x86_64 71/149 >2018-08-02 10:48:23,845 INFO: Updating : diskimage-builder-2.16.0-1.el7ost.noarch 72/149 >2018-08-02 10:48:23,949 INFO: Updating : lttng-ust-2.4.1-5.el7.x86_64 73/149 >2018-08-02 10:48:24,057 INFO: Installing : python2-amqp-2.3.2-3.el7ost.noarch 74/149 >2018-08-02 10:48:24,083 INFO: Updating : openstack-tripleo-puppet-elements-8.0.1-1.el7ost.noarc 75/149 >2018-08-02 10:48:24,122 INFO: Cleanup : puppet-tripleo-8.3.2-8.el7ost.noarch 76/149 >2018-08-02 10:48:24,481 INFO: Cleanup : openstack-ironic-inspector-7.2.1-0.20180409163360.el7o 77/149 >2018-08-02 10:48:24,499 INFO: Cleanup : puppet-neutron-12.4.1-0.20180412211913.el7ost.noarch 78/149 >2018-08-02 10:48:24,534 INFO: Cleanup : puppet-nova-12.4.0-3.el7ost.noarch 79/149 >2018-08-02 10:48:24,546 INFO: Cleanup : 1:openstack-neutron-openvswitch-12.0.2-0.2018042101136 80/149 >2018-08-02 10:48:24,563 INFO: Cleanup : python-tripleoclient-9.2.1-13.el7ost.noarch 81/149 >2018-08-02 10:48:24,574 INFO: Cleanup : puppet-manila-12.4.0-0.20180329035214.6c18418.el7ost.n 82/149 >2018-08-02 10:48:24,585 INFO: Cleanup : 
puppet-glance-12.5.0-2.el7ost.noarch 83/149 >2018-08-02 10:48:24,597 INFO: Cleanup : puppet-cinder-12.4.1-0.20180329071637.4011a82.el7ost.n 84/149 >2018-08-02 10:48:24,624 INFO: Cleanup : puppet-swift-12.4.0-0.20180329044944.1a67002.el7ost.no 85/149 >2018-08-02 10:48:25,134 INFO: Cleanup : 1:openstack-neutron-12.0.2-0.20180421011364.0ec54fd.el 86/149 >2018-08-02 10:48:25,316 INFO: Cleanup : 1:openstack-ironic-conductor-10.1.2-4.el7ost.noarch 87/149 >2018-08-02 10:48:25,689 INFO: Cleanup : 1:openstack-glance-16.0.1-2.el7ost.noarch 88/149 >2018-08-02 10:48:25,730 INFO: Cleanup : 1:python-glance-16.0.1-2.el7ost.noarch 89/149 >2018-08-02 10:48:25,874 INFO: Cleanup : 1:openstack-nova-scheduler-17.0.3-0.20180420001141.el7 90/149 >2018-08-02 10:48:25,888 INFO: Cleanup : python2-neutron-tests-tempest-0.0.1-0.20180419105837.f 91/149 >2018-08-02 10:48:25,893 INFO: Cleanup : 1:openstack-nova-placement-api-17.0.3-0.20180420001141 92/149 >2018-08-02 10:48:25,901 INFO: Cleanup : 1:openstack-neutron-ml2-12.0.2-0.20180421011364.0ec54f 93/149 >2018-08-02 10:48:25,936 INFO: Cleanup : 1:openstack-neutron-common-12.0.2-0.20180421011364.0ec 94/149 >2018-08-02 10:48:25,994 INFO: Cleanup : 1:python-neutron-12.0.2-0.20180421011364.0ec54fd.el7os 95/149 >2018-08-02 10:48:26,015 INFO: Cleanup : openstack-tripleo-heat-templates-8.0.2-43.el7ost.noarc 96/149 >2018-08-02 10:48:26,044 INFO: Cleanup : openstack-tripleo-common-8.6.1-23.el7ost.noarch 97/149 >2018-08-02 10:48:26,178 INFO: Cleanup : openstack-mistral-api-6.0.2-1.el7ost.noarch 98/149 >2018-08-02 10:48:26,312 INFO: Cleanup : 1:openstack-nova-compute-17.0.3-0.20180420001141.el7os 99/149 >2018-08-02 10:48:26,443 INFO: Cleanup : openstack-mistral-executor-6.0.2-1.el7ost.noarch 100/149 >2018-08-02 10:48:26,596 INFO: Cleanup : 1:openstack-heat-engine-10.0.1-0.20180411125640.el7ost 101/149 >2018-08-02 10:48:26,748 INFO: Cleanup : 1:openstack-nova-api-17.0.3-0.20180420001141.el7ost.no 102/149 >2018-08-02 10:48:26,877 INFO: Cleanup : 
1:openstack-heat-api-cfn-10.0.1-0.20180411125640.el7os 103/149 >2018-08-02 10:48:26,902 INFO: Cleanup : openstack-tripleo-validations-8.4.1-5.el7ost.noarch 104/149 >2018-08-02 10:48:27,030 INFO: Cleanup : 1:openstack-ironic-api-10.1.2-4.el7ost.noarch 105/149 >2018-08-02 10:48:27,065 INFO: Cleanup : 1:openstack-ironic-common-10.1.2-4.el7ost.noarch 106/149 >2018-08-02 10:48:27,208 INFO: Cleanup : 1:openstack-heat-api-10.0.1-0.20180411125640.el7ost.no 107/149 >2018-08-02 10:48:27,253 INFO: Cleanup : 1:openstack-heat-common-10.0.1-0.20180411125640.el7ost 108/149 >2018-08-02 10:48:27,381 INFO: Cleanup : 1:openstack-nova-conductor-17.0.3-0.20180420001141.el7 109/149 >2018-08-02 10:48:27,416 INFO: Cleanup : 1:openstack-nova-common-17.0.3-0.20180420001141.el7ost 110/149 >2018-08-02 10:48:27,452 INFO: Cleanup : 1:python-nova-17.0.3-0.20180420001141.el7ost.noarch 111/149 >2018-08-02 10:48:27,471 INFO: Cleanup : python2-oslo-versionedobjects-1.31.2-1.el7ost.noarch 112/149 >2018-08-02 10:48:27,499 INFO: Cleanup : python2-os-brick-2.3.1-1.el7ost.noarch 113/149 >2018-08-02 10:48:27,630 INFO: Cleanup : openstack-mistral-engine-6.0.2-1.el7ost.noarch 114/149 >2018-08-02 10:48:27,642 INFO: Cleanup : openstack-mistral-common-6.0.2-1.el7ost.noarch 115/149 >2018-08-02 10:48:27,659 INFO: Cleanup : python-mistral-6.0.2-1.el7ost.noarch 116/149 >2018-08-02 10:48:27,668 INFO: Cleanup : python2-oslo-concurrency-3.25.0-1.el7ost.noarch 117/149 >2018-08-02 10:48:27,681 INFO: Cleanup : python2-oslo-db-4.33.0-2.el7ost.noarch 118/149 >2018-08-02 10:48:27,697 INFO: Cleanup : python2-ironicclient-2.2.0-1.el7ost.noarch 119/149 >2018-08-02 10:48:27,711 INFO: Cleanup : python2-magnumclient-2.9.0-1.el7ost.noarch 120/149 >2018-08-02 10:48:27,722 INFO: Erasing : 1:python-novaclient-9.1.1-1.el7ost.noarch 121/149 >2018-08-02 10:48:27,732 INFO: Cleanup : python2-tooz-1.60.0-1.el7ost.noarch 122/149 >2018-08-02 10:48:27,743 INFO: Cleanup : python2-oslo-utils-3.35.0-1.el7ost.noarch 123/149 >2018-08-02 
10:48:27,751 INFO: Cleanup : python-oslo-utils-lang-3.35.0-1.el7ost.noarch 124/149 >2018-08-02 10:48:27,757 INFO: Cleanup : python-oslo-db-lang-4.33.0-2.el7ost.noarch 125/149 >2018-08-02 10:48:27,766 INFO: Cleanup : python-oslo-concurrency-lang-3.25.0-1.el7ost.noarch 126/149 >2018-08-02 10:48:27,777 INFO: Cleanup : python2-wsme-0.9.2-0.20180219185555.9f84e4c.el7ost.noa 127/149 >2018-08-02 10:48:27,785 INFO: Cleanup : python-oslo-versionedobjects-lang-1.31.2-1.el7ost.noar 128/149 >2018-08-02 10:48:27,880 INFO: Cleanup : python-UcsSdk-0.8.2.5-0.20180215132206.bf6b07d.el7ost. 129/149 >2018-08-02 10:48:27,950 INFO: Cleanup : ansible-2.4.3.0-1.el7ae.noarch 130/149 >2018-08-02 10:48:27,957 INFO: Cleanup : openstack-tripleo-common-containers-8.6.1-23.el7ost.no 131/149 >2018-08-02 10:48:27,983 INFO: Cleanup : puppet-keystone-12.4.0-0.20180329034741.b6d2197.el7ost 132/149 >2018-08-02 10:48:27,993 INFO: Cleanup : openstack-selinux-0.8.14-12.el7ost.noarch 133/149 >2018-08-02 10:48:28,003 INFO: Cleanup : python-openvswitch-2.9.0-19.el7fdp.1.noarch 134/149 >2018-08-02 10:48:28,010 INFO: Cleanup : puppet-sysctl-0.0.11-0.20180215112742.65ffe83.el7ost.n 135/149 >2018-08-02 10:48:28,016 INFO: Cleanup : puppet-module-data-0.5.1-0.20180215133437.28dafce.el7o 136/149 >2018-08-02 10:48:28,024 INFO: Cleanup : puppet-n1k-vsm-0.0.2-0.20180220020853.91772fa.el7ost.n 137/149 >2018-08-02 10:48:28,033 INFO: Cleanup : puppet-ntp-4.2.0-0.20180220021230.93da3bd.el7ost.noarc 138/149 >2018-08-02 10:48:28,044 INFO: Cleanup : puppet-opendaylight-8.1.2-2.38977efgit.el7ost.noarch 139/149 >2018-08-02 10:48:28,056 INFO: Cleanup : puppet-pacemaker-0.7.2-0.20180423212248.fee47ee.el7ost 140/149 >2018-08-02 10:48:28,066 INFO: Cleanup : puppet-timezone-4.1.1-0.20180216002204.32aa9f5.el7ost. 
141/149 >2018-08-02 10:48:28,075 INFO: Cleanup : openstack-tripleo-ui-8.3.1-3.el7ost.noarch 142/149 >2018-08-02 10:48:28,108 INFO: Cleanup : yum-utils-1.1.31-45.el7.noarch 143/149 >2018-08-02 10:48:28,148 INFO: Cleanup : diskimage-builder-2.13.0-1.el7ost.noarch 144/149 >2018-08-02 10:48:28,159 INFO: Erasing : python-amqp-2.1.4-2.el7ost.noarch 145/149 >2018-08-02 10:48:28,191 INFO: Cleanup : openstack-tripleo-puppet-elements-8.0.0-2.el7ost.noarc 146/149 >2018-08-02 10:48:28,349 INFO: Cleanup : openvswitch-2.9.0-19.el7fdp.1.x86_64 147/149 >2018-08-02 10:48:28,470 INFO: Cleanup : 2:microcode_ctl-2.1-29.2.el7_5.x86_64 148/149 >2018-08-02 10:49:29,268 INFO: Cleanup : lttng-ust-2.4.1-4.el7cp.x86_64 149/149 >2018-08-02 10:49:29,290 INFO: Verifying : python2-magnumclient-2.9.1-1.el7ost.noarch 1/149 >2018-08-02 10:49:29,294 INFO: Verifying : openstack-tripleo-puppet-elements-8.0.1-1.el7ost.noarc 2/149 >2018-08-02 10:49:29,299 INFO: Verifying : python2-wsme-0.9.3-1.el7ost.noarch 3/149 >2018-08-02 10:49:29,304 INFO: Verifying : openstack-mistral-common-6.0.3-1.el7ost.noarch 4/149 >2018-08-02 10:49:29,308 INFO: Verifying : puppet-sysctl-0.0.11-1.el7ost.noarch 5/149 >2018-08-02 10:49:29,313 INFO: Verifying : openstack-mistral-engine-6.0.3-1.el7ost.noarch 6/149 >2018-08-02 10:49:29,318 INFO: Verifying : python-tripleoclient-9.2.3-2.el7ost.noarch 7/149 >2018-08-02 10:49:29,324 INFO: Verifying : python-oslo-versionedobjects-lang-1.31.3-1.el7ost.noar 8/149 >2018-08-02 10:49:29,329 INFO: Verifying : 1:openstack-nova-conductor-17.0.5-2.d7864fbgit.el7ost. 
9/149 >2018-08-02 10:49:29,335 INFO: Verifying : 1:openstack-neutron-common-12.0.3-2.el7ost.noarch 10/149 >2018-08-02 10:49:29,340 INFO: Verifying : 1:openstack-heat-api-10.0.1-2.el7ost.noarch 11/149 >2018-08-02 10:49:29,345 INFO: Verifying : 1:openstack-ironic-api-10.1.3-3.el7ost.noarch 12/149 >2018-08-02 10:49:29,350 INFO: Verifying : openvswitch-selinux-extra-policy-1.0-5.el7fdp.noarch 13/149 >2018-08-02 10:49:29,356 INFO: Verifying : puppet-pacemaker-0.7.2-0.20180423212250.el7ost.noarch 14/149 >2018-08-02 10:49:29,360 INFO: Verifying : puppet-opendaylight-8.2.2-2.9126c8dgit.el7ost.noarch 15/149 >2018-08-02 10:49:29,366 INFO: Verifying : python2-ironicclient-2.2.1-1.el7ost.noarch 16/149 >2018-08-02 10:49:29,370 INFO: Verifying : python2-amqp-2.3.2-3.el7ost.noarch 17/149 >2018-08-02 10:49:29,375 INFO: Verifying : python2-oslo-versionedobjects-1.31.3-1.el7ost.noarch 18/149 >2018-08-02 10:49:29,380 INFO: Verifying : puppet-neutron-12.4.1-1.3aa3109git.el7ost.noarch 19/149 >2018-08-02 10:49:29,385 INFO: Verifying : 1:openstack-neutron-openvswitch-12.0.3-2.el7ost.noarch 20/149 >2018-08-02 10:49:29,390 INFO: Verifying : python-openvswitch-2.9.0-54.el7fdp.noarch 21/149 >2018-08-02 10:49:29,395 INFO: Verifying : openstack-ironic-inspector-7.2.1-2.el7ost.noarch 22/149 >2018-08-02 10:49:29,400 INFO: Verifying : python-mistral-6.0.3-1.el7ost.noarch 23/149 >2018-08-02 10:49:29,406 INFO: Verifying : openstack-tripleo-validations-8.4.2-1.el7ost.noarch 24/149 >2018-08-02 10:49:29,411 INFO: Verifying : python-oslo-utils-lang-3.35.1-1.el7ost.noarch 25/149 >2018-08-02 10:49:29,417 INFO: Verifying : 1:openstack-heat-api-cfn-10.0.1-2.el7ost.noarch 26/149 >2018-08-02 10:49:29,422 INFO: Verifying : python2-oslo-utils-3.35.1-1.el7ost.noarch 27/149 >2018-08-02 10:49:29,427 INFO: Verifying : openstack-tripleo-common-8.6.3-5.el7ost.noarch 28/149 >2018-08-02 10:49:29,432 INFO: Verifying : 1:python-glance-16.0.1-3.el7ost.noarch 29/149 >2018-08-02 10:49:29,438 INFO: Verifying : 
1:openstack-heat-common-10.0.1-2.el7ost.noarch 30/149 >2018-08-02 10:49:29,443 INFO: Verifying : 1:openstack-nova-api-17.0.5-2.d7864fbgit.el7ost.noarch 31/149 >2018-08-02 10:49:29,448 INFO: Verifying : python2-oslo-concurrency-3.25.1-1.el7ost.noarch 32/149 >2018-08-02 10:49:29,453 INFO: Verifying : 1:openstack-heat-engine-10.0.1-2.el7ost.noarch 33/149 >2018-08-02 10:49:29,458 INFO: Verifying : python2-oslo-db-4.33.1-1.el7ost.noarch 34/149 >2018-08-02 10:49:29,463 INFO: Verifying : lttng-ust-2.4.1-5.el7.x86_64 35/149 >2018-08-02 10:49:29,468 INFO: Verifying : 1:python-neutron-12.0.3-2.el7ost.noarch 36/149 >2018-08-02 10:49:29,473 INFO: Verifying : puppet-tripleo-8.3.4-3.el7ost.noarch 37/149 >2018-08-02 10:49:29,479 INFO: Verifying : openstack-mistral-executor-6.0.3-1.el7ost.noarch 38/149 >2018-08-02 10:49:29,488 INFO: Verifying : openstack-selinux-0.8.14-14.el7ost.noarch 39/149 >2018-08-02 10:49:29,494 INFO: Verifying : 1:openstack-nova-compute-17.0.5-2.d7864fbgit.el7ost.no 40/149 >2018-08-02 10:49:29,501 INFO: Verifying : diskimage-builder-2.16.0-1.el7ost.noarch 41/149 >2018-08-02 10:49:29,507 INFO: Verifying : openstack-mistral-api-6.0.3-1.el7ost.noarch 42/149 >2018-08-02 10:49:29,512 INFO: Verifying : openstack-tripleo-heat-templates-8.0.4-4.el7ost.noarch 43/149 >2018-08-02 10:49:29,518 INFO: Verifying : python-UcsSdk-0.8.2.5-1.el7ost.noarch 44/149 >2018-08-02 10:49:29,523 INFO: Verifying : puppet-manila-12.4.0-2.el7ost.noarch 45/149 >2018-08-02 10:49:29,528 INFO: Verifying : 1:openstack-neutron-ml2-12.0.3-2.el7ost.noarch 46/149 >2018-08-02 10:49:29,534 INFO: Verifying : 1:openstack-nova-placement-api-17.0.5-2.d7864fbgit.el7 47/149 >2018-08-02 10:49:29,539 INFO: Verifying : puppet-nova-12.4.0-6.el7ost.noarch 48/149 >2018-08-02 10:49:29,545 INFO: Verifying : 1:python-nova-17.0.5-2.d7864fbgit.el7ost.noarch 49/149 >2018-08-02 10:49:29,550 INFO: Verifying : puppet-cinder-12.4.1-0.20180628102250.641e036.el7ost.n 50/149 >2018-08-02 10:49:29,555 INFO: Verifying : 
puppet-swift-12.4.0-2.el7ost.noarch 51/149 >2018-08-02 10:49:29,560 INFO: Verifying : python-oslo-concurrency-lang-3.25.1-1.el7ost.noarch 52/149 >2018-08-02 10:49:29,565 INFO: Verifying : 2:microcode_ctl-2.1-29.10.el7_5.x86_64 53/149 >2018-08-02 10:49:29,570 INFO: Verifying : openvswitch-2.9.0-54.el7fdp.x86_64 54/149 >2018-08-02 10:49:29,575 INFO: Verifying : yum-utils-1.1.31-46.el7_5.noarch 55/149 >2018-08-02 10:49:29,579 INFO: Verifying : puppet-n1k-vsm-0.0.2-1.91772fagit.el7ost.noarch 56/149 >2018-08-02 10:49:29,584 INFO: Verifying : 1:openstack-nova-common-17.0.5-2.d7864fbgit.el7ost.noa 57/149 >2018-08-02 10:49:29,589 INFO: Verifying : openstack-tripleo-common-containers-8.6.3-5.el7ost.noa 58/149 >2018-08-02 10:49:29,594 INFO: Verifying : python2-neutron-tests-tempest-0.0.1-0.20180425142843.0 59/149 >2018-08-02 10:49:29,601 INFO: Verifying : ansible-2.4.6.0-1.el7ae.noarch 60/149 >2018-08-02 10:49:29,605 INFO: Verifying : 1:openstack-nova-scheduler-17.0.5-2.d7864fbgit.el7ost. 61/149 >2018-08-02 10:49:29,610 INFO: Verifying : puppet-ntp-4.2.0-2.el7ost.noarch 62/149 >2018-08-02 10:49:29,614 INFO: Verifying : 1:openstack-glance-16.0.1-3.el7ost.noarch 63/149 >2018-08-02 10:49:29,620 INFO: Verifying : openstack-tripleo-ui-8.3.2-1.el7ost.noarch 64/149 >2018-08-02 10:49:29,625 INFO: Verifying : 1:openstack-ironic-conductor-10.1.3-3.el7ost.noarch 65/149 >2018-08-02 10:49:29,629 INFO: Verifying : puppet-module-data-0.5.1-1.28dafcegit.el7ost.noarch 66/149 >2018-08-02 10:49:29,634 INFO: Verifying : python2-os-brick-2.3.2-1.el7ost.noarch 67/149 >2018-08-02 10:49:29,638 INFO: Verifying : 1:python2-novaclient-10.1.0-1.el7ost.noarch 68/149 >2018-08-02 10:49:29,643 INFO: Verifying : 1:openstack-ironic-common-10.1.3-3.el7ost.noarch 69/149 >2018-08-02 10:49:29,648 INFO: Verifying : puppet-timezone-4.1.1-1.el7ost.noarch 70/149 >2018-08-02 10:49:29,652 INFO: Verifying : puppet-keystone-12.4.0-2.el7ost.noarch 71/149 >2018-08-02 10:49:29,656 INFO: Verifying : 
python-oslo-db-lang-4.33.1-1.el7ost.noarch 72/149 >2018-08-02 10:49:29,661 INFO: Verifying : python2-tooz-1.60.1-1.el7ost.noarch 73/149 >2018-08-02 10:49:29,666 INFO: Verifying : 1:openstack-neutron-12.0.3-2.el7ost.noarch 74/149 >2018-08-02 10:49:29,670 INFO: Verifying : puppet-glance-12.5.0-3.el7ost.noarch 75/149 >2018-08-02 10:49:29,671 INFO: Verifying : openstack-tripleo-common-8.6.1-23.el7ost.noarch 76/149 >2018-08-02 10:49:29,673 INFO: Verifying : 1:openstack-nova-scheduler-17.0.3-0.20180420001141.el7 77/149 >2018-08-02 10:49:29,674 INFO: Verifying : 1:openstack-ironic-common-10.1.2-4.el7ost.noarch 78/149 >2018-08-02 10:49:29,674 INFO: Verifying : yum-utils-1.1.31-45.el7.noarch 79/149 >2018-08-02 10:49:29,677 INFO: Verifying : 1:openstack-nova-placement-api-17.0.3-0.20180420001141 80/149 >2018-08-02 10:49:29,677 INFO: Verifying : puppet-tripleo-8.3.2-8.el7ost.noarch 81/149 >2018-08-02 10:49:29,677 INFO: Verifying : puppet-timezone-4.1.1-0.20180216002204.32aa9f5.el7ost. 82/149 >2018-08-02 10:49:29,678 INFO: Verifying : openstack-tripleo-ui-8.3.1-3.el7ost.noarch 83/149 >2018-08-02 10:49:29,679 INFO: Verifying : python2-neutron-tests-tempest-0.0.1-0.20180419105837.f 84/149 >2018-08-02 10:49:29,680 INFO: Verifying : 1:openstack-nova-compute-17.0.3-0.20180420001141.el7os 85/149 >2018-08-02 10:49:29,681 INFO: Verifying : puppet-module-data-0.5.1-0.20180215133437.28dafce.el7o 86/149 >2018-08-02 10:49:29,682 INFO: Verifying : 1:openstack-neutron-12.0.2-0.20180421011364.0ec54fd.el 87/149 >2018-08-02 10:49:29,683 INFO: Verifying : python-oslo-db-lang-4.33.0-2.el7ost.noarch 88/149 >2018-08-02 10:49:29,684 INFO: Verifying : 1:python-neutron-12.0.2-0.20180421011364.0ec54fd.el7os 89/149 >2018-08-02 10:49:29,686 INFO: Verifying : openstack-tripleo-validations-8.4.1-5.el7ost.noarch 90/149 >2018-08-02 10:49:29,686 INFO: Verifying : python-oslo-concurrency-lang-3.25.0-1.el7ost.noarch 91/149 >2018-08-02 10:49:29,687 INFO: Verifying : 
puppet-ntp-4.2.0-0.20180220021230.93da3bd.el7ost.noarc 92/149
>2018-08-02 10:49:29,688 INFO: Verifying : python-mistral-6.0.2-1.el7ost.noarch 93/149
>2018-08-02 10:49:29,689 INFO: Verifying : 1:python-novaclient-9.1.1-1.el7ost.noarch 94/149
>2018-08-02 10:49:29,690 INFO: Verifying : openstack-tripleo-puppet-elements-8.0.0-2.el7ost.noarc 95/149
>2018-08-02 10:49:29,691 INFO: Verifying : puppet-nova-12.4.0-3.el7ost.noarch 96/149
>2018-08-02 10:49:29,692 INFO: Verifying : puppet-pacemaker-0.7.2-0.20180423212248.fee47ee.el7ost 97/149
>2018-08-02 10:49:29,693 INFO: Verifying : 1:openstack-glance-16.0.1-2.el7ost.noarch 98/149
>2018-08-02 10:49:29,695 INFO: Verifying : 1:openstack-heat-engine-10.0.1-0.20180411125640.el7ost 99/149
>2018-08-02 10:49:29,696 INFO: Verifying : 1:openstack-nova-common-17.0.3-0.20180420001141.el7ost 100/149
>2018-08-02 10:49:29,697 INFO: Verifying : python-UcsSdk-0.8.2.5-0.20180215132206.bf6b07d.el7ost. 101/149
>2018-08-02 10:49:29,698 INFO: Verifying : openstack-tripleo-common-containers-8.6.1-23.el7ost.no 102/149
>2018-08-02 10:49:29,699 INFO: Verifying : puppet-sysctl-0.0.11-0.20180215112742.65ffe83.el7ost.n 103/149
>2018-08-02 10:49:29,701 INFO: Verifying : 1:openstack-heat-api-10.0.1-0.20180411125640.el7ost.no 104/149
>2018-08-02 10:49:29,702 INFO: Verifying : 1:openstack-ironic-api-10.1.2-4.el7ost.noarch 105/149
>2018-08-02 10:49:29,703 INFO: Verifying : python2-tooz-1.60.0-1.el7ost.noarch 106/149
>2018-08-02 10:49:29,704 INFO: Verifying : 1:openstack-heat-common-10.0.1-0.20180411125640.el7ost 107/149
>2018-08-02 10:49:29,705 INFO: Verifying : puppet-cinder-12.4.1-0.20180329071637.4011a82.el7ost.n 108/149
>2018-08-02 10:49:29,706 INFO: Verifying : puppet-neutron-12.4.1-0.20180412211913.el7ost.noarch 109/149
>2018-08-02 10:49:29,707 INFO: Verifying : 1:python-nova-17.0.3-0.20180420001141.el7ost.noarch 110/149
>2018-08-02 10:49:29,708 INFO: Verifying : python2-magnumclient-2.9.0-1.el7ost.noarch 111/149
>2018-08-02 10:49:29,709 INFO: Verifying : openstack-mistral-common-6.0.2-1.el7ost.noarch 112/149
>2018-08-02 10:49:29,710 INFO: Verifying : 1:openstack-nova-conductor-17.0.3-0.20180420001141.el7 113/149
>2018-08-02 10:49:29,711 INFO: Verifying : lttng-ust-2.4.1-4.el7cp.x86_64 114/149
>2018-08-02 10:49:29,713 INFO: Verifying : 1:openstack-neutron-openvswitch-12.0.2-0.2018042101136 115/149
>2018-08-02 10:49:29,714 INFO: Verifying : openstack-mistral-engine-6.0.2-1.el7ost.noarch 116/149
>2018-08-02 10:49:29,715 INFO: Verifying : python-tripleoclient-9.2.1-13.el7ost.noarch 117/149
>2018-08-02 10:49:29,716 INFO: Verifying : python-oslo-versionedobjects-lang-1.31.2-1.el7ost.noar 118/149
>2018-08-02 10:49:29,718 INFO: Verifying : 1:openstack-heat-api-cfn-10.0.1-0.20180411125640.el7os 119/149
>2018-08-02 10:49:29,719 INFO: Verifying : openstack-tripleo-heat-templates-8.0.2-43.el7ost.noarc 120/149
>2018-08-02 10:49:29,720 INFO: Verifying : puppet-swift-12.4.0-0.20180329044944.1a67002.el7ost.no 121/149
>2018-08-02 10:49:29,721 INFO: Verifying : python-openvswitch-2.9.0-19.el7fdp.1.noarch 122/149
>2018-08-02 10:49:29,722 INFO: Verifying : python2-oslo-db-4.33.0-2.el7ost.noarch 123/149
>2018-08-02 10:49:29,723 INFO: Verifying : puppet-n1k-vsm-0.0.2-0.20180220020853.91772fa.el7ost.n 124/149
>2018-08-02 10:49:29,724 INFO: Verifying : puppet-glance-12.5.0-2.el7ost.noarch 125/149
>2018-08-02 10:49:29,725 INFO: Verifying : python2-oslo-versionedobjects-1.31.2-1.el7ost.noarch 126/149
>2018-08-02 10:49:29,726 INFO: Verifying : python-amqp-2.1.4-2.el7ost.noarch 127/149
>2018-08-02 10:49:29,727 INFO: Verifying : 2:microcode_ctl-2.1-29.2.el7_5.x86_64 128/149
>2018-08-02 10:49:29,728 INFO: Verifying : openvswitch-2.9.0-19.el7fdp.1.x86_64 129/149
>2018-08-02 10:49:29,729 INFO: Verifying : python2-os-brick-2.3.1-1.el7ost.noarch 130/149
>2018-08-02 10:49:29,730 INFO: Verifying : python2-ironicclient-2.2.0-1.el7ost.noarch 131/149
>2018-08-02 10:49:29,731 INFO: Verifying : python-oslo-utils-lang-3.35.0-1.el7ost.noarch 132/149
>2018-08-02 10:49:29,732 INFO: Verifying : ansible-2.4.3.0-1.el7ae.noarch 133/149
>2018-08-02 10:49:29,733 INFO: Verifying : 1:openstack-neutron-ml2-12.0.2-0.20180421011364.0ec54f 134/149
>2018-08-02 10:49:29,734 INFO: Verifying : puppet-manila-12.4.0-0.20180329035214.6c18418.el7ost.n 135/149
>2018-08-02 10:49:29,735 INFO: Verifying : 1:python-glance-16.0.1-2.el7ost.noarch 136/149
>2018-08-02 10:49:29,736 INFO: Verifying : python2-oslo-utils-3.35.0-1.el7ost.noarch 137/149
>2018-08-02 10:49:29,737 INFO: Verifying : puppet-keystone-12.4.0-0.20180329034741.b6d2197.el7ost 138/149
>2018-08-02 10:49:29,738 INFO: Verifying : python2-oslo-concurrency-3.25.0-1.el7ost.noarch 139/149
>2018-08-02 10:49:29,739 INFO: Verifying : 1:openstack-ironic-conductor-10.1.2-4.el7ost.noarch 140/149
>2018-08-02 10:49:29,740 INFO: Verifying : diskimage-builder-2.13.0-1.el7ost.noarch 141/149
>2018-08-02 10:49:29,741 INFO: Verifying : python2-wsme-0.9.2-0.20180219185555.9f84e4c.el7ost.noa 142/149
>2018-08-02 10:49:29,743 INFO: Verifying : openstack-mistral-executor-6.0.2-1.el7ost.noarch 143/149
>2018-08-02 10:49:29,744 INFO: Verifying : 1:openstack-nova-api-17.0.3-0.20180420001141.el7ost.no 144/149
>2018-08-02 10:49:29,745 INFO: Verifying : puppet-opendaylight-8.1.2-2.38977efgit.el7ost.noarch 145/149
>2018-08-02 10:49:29,746 INFO: Verifying : openstack-ironic-inspector-7.2.1-0.20180409163360.el7o 146/149
>2018-08-02 10:49:29,747 INFO: Verifying : openstack-selinux-0.8.14-12.el7ost.noarch 147/149
>2018-08-02 10:49:29,748 INFO: Verifying : 1:openstack-neutron-common-12.0.2-0.20180421011364.0ec 148/149
>2018-08-02 10:49:29,928 INFO: Verifying : openstack-mistral-api-6.0.2-1.el7ost.noarch 149/149
>2018-08-02 10:49:29,928 INFO:
>2018-08-02 10:49:29,928 INFO: Installed:
>2018-08-02 10:49:29,928 INFO: python2-amqp.noarch 0:2.3.2-3.el7ost
>2018-08-02 10:49:29,929 INFO: python2-novaclient.noarch 1:10.1.0-1.el7ost
>2018-08-02 10:49:29,929 INFO:
>2018-08-02 10:49:29,929 INFO: Dependency Installed:
>2018-08-02 10:49:29,929 INFO: openvswitch-selinux-extra-policy.noarch 0:1.0-5.el7fdp
>2018-08-02 10:49:29,929 INFO:
>2018-08-02 10:49:29,929 INFO: Updated:
>2018-08-02 10:49:29,930 INFO: ansible.noarch 0:2.4.6.0-1.el7ae
>2018-08-02 10:49:29,930 INFO: diskimage-builder.noarch 0:2.16.0-1.el7ost
>2018-08-02 10:49:29,930 INFO: lttng-ust.x86_64 0:2.4.1-5.el7
>2018-08-02 10:49:29,930 INFO: microcode_ctl.x86_64 2:2.1-29.10.el7_5
>2018-08-02 10:49:29,931 INFO: openstack-glance.noarch 1:16.0.1-3.el7ost
>2018-08-02 10:49:29,931 INFO: openstack-heat-api.noarch 1:10.0.1-2.el7ost
>2018-08-02 10:49:29,931 INFO: openstack-heat-api-cfn.noarch 1:10.0.1-2.el7ost
>2018-08-02 10:49:29,931 INFO: openstack-heat-common.noarch 1:10.0.1-2.el7ost
>2018-08-02 10:49:29,932 INFO: openstack-heat-engine.noarch 1:10.0.1-2.el7ost
>2018-08-02 10:49:29,932 INFO: openstack-ironic-api.noarch 1:10.1.3-3.el7ost
>2018-08-02 10:49:29,932 INFO: openstack-ironic-common.noarch 1:10.1.3-3.el7ost
>2018-08-02 10:49:29,932 INFO: openstack-ironic-conductor.noarch 1:10.1.3-3.el7ost
>2018-08-02 10:49:29,933 INFO: openstack-ironic-inspector.noarch 0:7.2.1-2.el7ost
>2018-08-02 10:49:29,933 INFO: openstack-mistral-api.noarch 0:6.0.3-1.el7ost
>2018-08-02 10:49:29,933 INFO: openstack-mistral-common.noarch 0:6.0.3-1.el7ost
>2018-08-02 10:49:29,933 INFO: openstack-mistral-engine.noarch 0:6.0.3-1.el7ost
>2018-08-02 10:49:29,933 INFO: openstack-mistral-executor.noarch 0:6.0.3-1.el7ost
>2018-08-02 10:49:29,934 INFO: openstack-neutron.noarch 1:12.0.3-2.el7ost
>2018-08-02 10:49:29,934 INFO: openstack-neutron-common.noarch 1:12.0.3-2.el7ost
>2018-08-02 10:49:29,934 INFO: openstack-neutron-ml2.noarch 1:12.0.3-2.el7ost
>2018-08-02 10:49:29,934 INFO: openstack-neutron-openvswitch.noarch 1:12.0.3-2.el7ost
>2018-08-02 10:49:29,935 INFO: openstack-nova-api.noarch 1:17.0.5-2.d7864fbgit.el7ost
>2018-08-02 10:49:29,935 INFO: openstack-nova-common.noarch 1:17.0.5-2.d7864fbgit.el7ost
>2018-08-02 10:49:29,935 INFO: openstack-nova-compute.noarch 1:17.0.5-2.d7864fbgit.el7ost
>2018-08-02 10:49:29,935 INFO: openstack-nova-conductor.noarch 1:17.0.5-2.d7864fbgit.el7ost
>2018-08-02 10:49:29,936 INFO: openstack-nova-placement-api.noarch 1:17.0.5-2.d7864fbgit.el7ost
>2018-08-02 10:49:29,936 INFO: openstack-nova-scheduler.noarch 1:17.0.5-2.d7864fbgit.el7ost
>2018-08-02 10:49:29,936 INFO: openstack-selinux.noarch 0:0.8.14-14.el7ost
>2018-08-02 10:49:29,936 INFO: openstack-tripleo-common.noarch 0:8.6.3-5.el7ost
>2018-08-02 10:49:29,936 INFO: openstack-tripleo-common-containers.noarch 0:8.6.3-5.el7ost
>2018-08-02 10:49:29,937 INFO: openstack-tripleo-heat-templates.noarch 0:8.0.4-4.el7ost
>2018-08-02 10:49:29,937 INFO: openstack-tripleo-puppet-elements.noarch 0:8.0.1-1.el7ost
>2018-08-02 10:49:29,937 INFO: openstack-tripleo-ui.noarch 0:8.3.2-1.el7ost
>2018-08-02 10:49:29,938 INFO: openstack-tripleo-validations.noarch 0:8.4.2-1.el7ost
>2018-08-02 10:49:29,938 INFO: openvswitch.x86_64 0:2.9.0-54.el7fdp
>2018-08-02 10:49:29,938 INFO: puppet-cinder.noarch 0:12.4.1-0.20180628102250.641e036.el7ost
>2018-08-02 10:49:29,938 INFO: puppet-glance.noarch 0:12.5.0-3.el7ost
>2018-08-02 10:49:29,939 INFO: puppet-keystone.noarch 0:12.4.0-2.el7ost
>2018-08-02 10:49:29,939 INFO: puppet-manila.noarch 0:12.4.0-2.el7ost
>2018-08-02 10:49:29,939 INFO: puppet-module-data.noarch 0:0.5.1-1.28dafcegit.el7ost
>2018-08-02 10:49:29,939 INFO: puppet-n1k-vsm.noarch 0:0.0.2-1.91772fagit.el7ost
>2018-08-02 10:49:29,939 INFO: puppet-neutron.noarch 0:12.4.1-1.3aa3109git.el7ost
>2018-08-02 10:49:29,940 INFO: puppet-nova.noarch 0:12.4.0-6.el7ost
>2018-08-02 10:49:29,940 INFO: puppet-ntp.noarch 0:4.2.0-2.el7ost
>2018-08-02 10:49:29,940 INFO: puppet-opendaylight.noarch 0:8.2.2-2.9126c8dgit.el7ost
>2018-08-02 10:49:29,940 INFO: puppet-pacemaker.noarch 0:0.7.2-0.20180423212250.el7ost
>2018-08-02 10:49:29,941 INFO: puppet-swift.noarch 0:12.4.0-2.el7ost
>2018-08-02 10:49:29,941 INFO: puppet-sysctl.noarch 0:0.0.11-1.el7ost
>2018-08-02 10:49:29,941 INFO: puppet-timezone.noarch 0:4.1.1-1.el7ost
>2018-08-02 10:49:29,941 INFO: puppet-tripleo.noarch 0:8.3.4-3.el7ost
>2018-08-02 10:49:29,941 INFO: python-UcsSdk.noarch 0:0.8.2.5-1.el7ost
>2018-08-02 10:49:29,942 INFO: python-glance.noarch 1:16.0.1-3.el7ost
>2018-08-02 10:49:29,942 INFO: python-mistral.noarch 0:6.0.3-1.el7ost
>2018-08-02 10:49:29,942 INFO: python-neutron.noarch 1:12.0.3-2.el7ost
>2018-08-02 10:49:29,943 INFO: python-nova.noarch 1:17.0.5-2.d7864fbgit.el7ost
>2018-08-02 10:49:29,943 INFO: python-openvswitch.noarch 0:2.9.0-54.el7fdp
>2018-08-02 10:49:29,943 INFO: python-oslo-concurrency-lang.noarch 0:3.25.1-1.el7ost
>2018-08-02 10:49:29,943 INFO: python-oslo-db-lang.noarch 0:4.33.1-1.el7ost
>2018-08-02 10:49:29,944 INFO: python-oslo-utils-lang.noarch 0:3.35.1-1.el7ost
>2018-08-02 10:49:29,944 INFO: python-oslo-versionedobjects-lang.noarch 0:1.31.3-1.el7ost
>2018-08-02 10:49:29,944 INFO: python-tripleoclient.noarch 0:9.2.3-2.el7ost
>2018-08-02 10:49:29,944 INFO: python2-ironicclient.noarch 0:2.2.1-1.el7ost
>2018-08-02 10:49:29,945 INFO: python2-magnumclient.noarch 0:2.9.1-1.el7ost
>2018-08-02 10:49:29,945 INFO: python2-neutron-tests-tempest.noarch 0:0.0.1-0.20180425142843.02a5e2b.el7ost
>2018-08-02 10:49:29,945 INFO: python2-os-brick.noarch 0:2.3.2-1.el7ost
>2018-08-02 10:49:29,945 INFO: python2-oslo-concurrency.noarch 0:3.25.1-1.el7ost
>2018-08-02 10:49:29,946 INFO: python2-oslo-db.noarch 0:4.33.1-1.el7ost
>2018-08-02 10:49:29,946 INFO: python2-oslo-utils.noarch 0:3.35.1-1.el7ost
>2018-08-02 10:49:29,946 INFO: python2-oslo-versionedobjects.noarch 0:1.31.3-1.el7ost
>2018-08-02 10:49:29,947 INFO: python2-tooz.noarch 0:1.60.1-1.el7ost
>2018-08-02 10:49:29,947 INFO: python2-wsme.noarch 0:0.9.3-1.el7ost
>2018-08-02 10:49:29,947 INFO: yum-utils.noarch 0:1.1.31-46.el7_5
>2018-08-02 10:49:29,947 INFO:
>2018-08-02 10:49:29,947 INFO: Replaced:
>2018-08-02 10:49:29,948 INFO: python-amqp.noarch 0:2.1.4-2.el7ost python-novaclient.noarch 1:9.1.1-1.el7ost
>2018-08-02 10:49:29,948 INFO:
>2018-08-02 10:49:29,948 INFO: Complete!
>2018-08-02 10:49:29,990 INFO: Update completed successfully
>2018-08-02 10:49:31,048 INFO: Logging to /home/stack/.instack/install-undercloud.log
>2018-08-02 10:49:31,113 INFO: Checking for a FQDN hostname...
>2018-08-02 10:49:31,212 INFO: Static hostname detected as undercloud-0.redhat.local
>2018-08-02 10:49:31,249 INFO: Transient hostname detected as undercloud-0.redhat.local
>2018-08-02 10:49:32,029 INFO: Running yum clean all
>2018-08-02 10:49:32,300 INFO: Loaded plugins: search-disabled-repos
>2018-08-02 10:49:32,340 INFO: Cleaning repos: rhelosp-13.0-image-build-override rhelosp-13.0-optools-puddle
>2018-08-02 10:49:32,341 INFO: : rhelosp-13.0-puddle rhelosp-ceph-3.0-mon rhelosp-ceph-3.0-osd
>2018-08-02 10:49:32,341 INFO: : rhelosp-ceph-3.0-tools rhelosp-rhel-7.5-extras
>2018-08-02 10:49:32,341 INFO: : rhelosp-rhel-7.5-ha rhelosp-rhel-7.5-image-build-override
>2018-08-02 10:49:32,341 INFO: : rhelosp-rhel-7.5-server rhos-release rhos-release-extras
>2018-08-02 10:49:32,341 INFO: Cleaning up everything
>2018-08-02 10:49:32,342 INFO: Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
>2018-08-02 10:49:32,555 INFO: yum-clean-all completed successfully
>2018-08-02 10:49:32,556 INFO: Running yum update
>2018-08-02 10:49:32,862 INFO: Loaded plugins: search-disabled-repos
>2018-08-02 10:59:21,628 INFO: No packages marked for update
>2018-08-02 10:59:21,676 INFO: yum-update completed successfully
>2018-08-02 10:59:21,748 INFO: Running instack
>2018-08-02 10:59:22,001 INFO: INFO: 2018-08-02 10:59:22,000 -- Starting run of instack
>2018-08-02 10:59:22,017 INFO: INFO: 2018-08-02 10:59:22,017 -- Using json file: /usr/share/instack-undercloud/json-files/rhel-7-undercloud-packages.json
>2018-08-02 10:59:22,018 INFO: INFO: 2018-08-02 10:59:22,018 -- Running Installation
>2018-08-02 10:59:22,019 INFO: INFO: 2018-08-02 10:59:22,018 -- Initialized with elements path: /usr/share/tripleo-puppet-elements /usr/share/instack-undercloud /usr/share/tripleo-image-elements /usr/share/diskimage-builder/elements
>2018-08-02 10:59:22,039 INFO: WARNING: 2018-08-02 10:59:22,039 -- expand_dependencies() deprecated, use get_elements
>2018-08-02 10:59:22,070 INFO: INFO: 2018-08-02 10:59:22,070 -- List of all elements and dependencies: epel undercloud-install dib-python source-repositories install-types puppet-modules install-bin pip-manifest puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url pkg-map enable-packages-install puppet os-apply-config hiera package-installs
>2018-08-02 10:59:22,071 INFO: INFO: 2018-08-02 10:59:22,070 -- Excluding element pip-and-virtualenv
>2018-08-02 10:59:22,071 INFO: INFO: 2018-08-02 10:59:22,070 -- Excluding element epel
>2018-08-02 10:59:22,071 INFO: INFO: 2018-08-02 10:59:22,070 -- Excluding element pip-manifest
>2018-08-02 10:59:22,071 INFO: INFO: 2018-08-02 10:59:22,070 -- Excluding element package-installs
>2018-08-02 10:59:22,072 INFO: INFO: 2018-08-02 10:59:22,070 -- Excluding element pkg-map
>2018-08-02 10:59:22,072 INFO: INFO: 2018-08-02 10:59:22,070 -- Excluding element puppet
>2018-08-02 10:59:22,072 INFO: INFO: 2018-08-02 10:59:22,070 -- Excluding element cache-url
>2018-08-02 10:59:22,072 INFO: INFO: 2018-08-02 10:59:22,071 -- Excluding element dib-python
>2018-08-02 10:59:22,073 INFO: INFO: 2018-08-02 10:59:22,071 -- Excluding element install-bin
>2018-08-02 10:59:22,073 INFO: INFO: 2018-08-02 10:59:22,071 -- List of all elements and dependencies after excludes: undercloud-install source-repositories install-types puppet-modules puppet-stack-config os-refresh-config element-manifest manifests enable-packages-install os-apply-config hiera
>2018-08-02 10:59:22,301 INFO: INFO: 2018-08-02 10:59:22,301 -- Running hook extra-data
>2018-08-02 10:59:22,302 INFO: INFO: 2018-08-02 10:59:22,301 -- ############### Begin stdout/stderr logging ###############
>2018-08-02 10:59:22,323 INFO: dib-run-parts Sourcing environment file /tmp/tmpEcJxVu/extra-data.d/../environment.d/00-dib-v2-env
>2018-08-02 10:59:22,327 INFO: + source /tmp/tmpEcJxVu/extra-data.d/../environment.d/00-dib-v2-env
>2018-08-02 10:59:22,328 INFO: ++ export 'IMAGE_ELEMENT=epel undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs'
>2018-08-02 10:59:22,329 INFO: ++ IMAGE_ELEMENT='epel undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs'
>2018-08-02 10:59:22,329 INFO: ++ export 'IMAGE_ELEMENT_YAML={cache-url: /usr/share/diskimage-builder/elements/cache-url, dib-python: /usr/share/diskimage-builder/elements/dib-python,
>2018-08-02 10:59:22,330 INFO: element-manifest: /usr/share/diskimage-builder/elements/element-manifest, enable-packages-install: /usr/share/tripleo-image-elements/enable-packages-install,
>2018-08-02 10:59:22,330 INFO: epel: /usr/share/diskimage-builder/elements/epel, hiera: /usr/share/tripleo-puppet-elements/hiera,
>2018-08-02 10:59:22,330 INFO: install-bin: /usr/share/diskimage-builder/elements/install-bin, install-types: /usr/share/diskimage-builder/elements/install-types,
>2018-08-02 10:59:22,331 INFO: manifests: /usr/share/diskimage-builder/elements/manifests, os-apply-config: /usr/share/tripleo-image-elements/os-apply-config,
>2018-08-02 10:59:22,331 INFO: os-refresh-config: /usr/share/tripleo-image-elements/os-refresh-config, package-installs: /usr/share/diskimage-builder/elements/package-installs,
>2018-08-02 10:59:22,331 INFO: pip-and-virtualenv: /usr/share/diskimage-builder/elements/pip-and-virtualenv, pip-manifest: /usr/share/tripleo-image-elements/pip-manifest,
>2018-08-02 10:59:22,332 INFO: pkg-map: /usr/share/diskimage-builder/elements/pkg-map, puppet: /usr/share/tripleo-puppet-elements/puppet,
>2018-08-02 10:59:22,332 INFO: puppet-modules: /usr/share/tripleo-puppet-elements/puppet-modules, puppet-stack-config: /usr/share/instack-undercloud/puppet-stack-config,
>2018-08-02 10:59:22,332 INFO: source-repositories: /usr/share/diskimage-builder/elements/source-repositories,
>2018-08-02 10:59:22,333 INFO: undercloud-install: /usr/share/instack-undercloud/undercloud-install}
>2018-08-02 10:59:22,333 INFO: '
>2018-08-02 10:59:22,333 INFO: ++ IMAGE_ELEMENT_YAML='{cache-url: /usr/share/diskimage-builder/elements/cache-url, dib-python: /usr/share/diskimage-builder/elements/dib-python,
>2018-08-02 10:59:22,334 INFO: element-manifest: /usr/share/diskimage-builder/elements/element-manifest, enable-packages-install: /usr/share/tripleo-image-elements/enable-packages-install,
>2018-08-02 10:59:22,334 INFO: epel: /usr/share/diskimage-builder/elements/epel, hiera: /usr/share/tripleo-puppet-elements/hiera,
>2018-08-02 10:59:22,334 INFO: install-bin: /usr/share/diskimage-builder/elements/install-bin, install-types: /usr/share/diskimage-builder/elements/install-types,
>2018-08-02 10:59:22,335 INFO: manifests: /usr/share/diskimage-builder/elements/manifests, os-apply-config: /usr/share/tripleo-image-elements/os-apply-config,
>2018-08-02 10:59:22,335 INFO: os-refresh-config: /usr/share/tripleo-image-elements/os-refresh-config, package-installs: /usr/share/diskimage-builder/elements/package-installs,
>2018-08-02 10:59:22,335 INFO: pip-and-virtualenv: /usr/share/diskimage-builder/elements/pip-and-virtualenv, pip-manifest: /usr/share/tripleo-image-elements/pip-manifest,
>2018-08-02 10:59:22,336 INFO: pkg-map: /usr/share/diskimage-builder/elements/pkg-map, puppet: /usr/share/tripleo-puppet-elements/puppet,
>2018-08-02 10:59:22,336 INFO: puppet-modules: /usr/share/tripleo-puppet-elements/puppet-modules, puppet-stack-config: /usr/share/instack-undercloud/puppet-stack-config,
>2018-08-02 10:59:22,336 INFO: source-repositories: /usr/share/diskimage-builder/elements/source-repositories,
>2018-08-02 10:59:22,337 INFO: undercloud-install: /usr/share/instack-undercloud/undercloud-install}
>2018-08-02 10:59:22,337 INFO: '
>2018-08-02 10:59:22,337 INFO: ++ export -f get_image_element_array
>2018-08-02 10:59:22,338 INFO: + set +o xtrace
>2018-08-02 10:59:22,338 INFO: dib-run-parts Sourcing environment file /tmp/tmpEcJxVu/extra-data.d/../environment.d/01-export-install-types.bash
>2018-08-02 10:59:22,339 INFO: + source /tmp/tmpEcJxVu/extra-data.d/../environment.d/01-export-install-types.bash
>2018-08-02 10:59:22,339 INFO: ++ export DIB_DEFAULT_INSTALLTYPE=package
>2018-08-02 10:59:22,339 INFO: ++ DIB_DEFAULT_INSTALLTYPE=package
>2018-08-02 10:59:22,339 INFO: + set +o xtrace
>2018-08-02 10:59:22,340 INFO: dib-run-parts Sourcing environment file /tmp/tmpEcJxVu/extra-data.d/../environment.d/01-puppet-module-pins.sh
>2018-08-02 10:59:22,340 INFO: + source /tmp/tmpEcJxVu/extra-data.d/../environment.d/01-puppet-module-pins.sh
>2018-08-02 10:59:22,340 INFO: ++ export DIB_REPOREF_puppetlabs_ntp=4.2.x
>2018-08-02 10:59:22,341 INFO: ++ DIB_REPOREF_puppetlabs_ntp=4.2.x
>2018-08-02 10:59:22,341 INFO: + set +o xtrace
>2018-08-02 10:59:22,341 INFO: dib-run-parts Sourcing environment file /tmp/tmpEcJxVu/extra-data.d/../environment.d/02-puppet-modules-install-types.sh
>2018-08-02 10:59:22,341 INFO: + source /tmp/tmpEcJxVu/extra-data.d/../environment.d/02-puppet-modules-install-types.sh
>2018-08-02 10:59:22,342 INFO: ++ DIB_DEFAULT_INSTALLTYPE=package
>2018-08-02 10:59:22,342 INFO: ++ DIB_INSTALLTYPE_puppet_modules=package
>2018-08-02 10:59:22,342 INFO: ++ '[' package = source ']'
>2018-08-02 10:59:22,342 INFO: + set +o xtrace
>2018-08-02 10:59:22,342 INFO: dib-run-parts Sourcing environment file /tmp/tmpEcJxVu/extra-data.d/../environment.d/10-os-apply-config-venv-dir.bash
>2018-08-02 10:59:22,345 INFO: + source /tmp/tmpEcJxVu/extra-data.d/../environment.d/10-os-apply-config-venv-dir.bash
>2018-08-02 10:59:22,346 INFO: ++ '[' -z '' ']'
>2018-08-02 10:59:22,346 INFO: ++ export OS_APPLY_CONFIG_VENV_DIR=/opt/stack/venvs/os-apply-config
>2018-08-02 10:59:22,346 INFO: ++ OS_APPLY_CONFIG_VENV_DIR=/opt/stack/venvs/os-apply-config
>2018-08-02 10:59:22,346 INFO: + set +o xtrace
>2018-08-02 10:59:22,347 INFO: dib-run-parts Sourcing environment file /tmp/tmpEcJxVu/extra-data.d/../environment.d/14-manifests
>2018-08-02 10:59:22,350 INFO: + source /tmp/tmpEcJxVu/extra-data.d/../environment.d/14-manifests
>2018-08-02 10:59:22,350 INFO: ++ export DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests
>2018-08-02 10:59:22,350 INFO: ++ DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests
>2018-08-02 10:59:22,351 INFO: ++ export DIB_MANIFEST_SAVE_DIR=instack.d/
>2018-08-02 10:59:22,351 INFO: ++ DIB_MANIFEST_SAVE_DIR=instack.d/
>2018-08-02 10:59:22,351 INFO: + set +o xtrace
>2018-08-02 10:59:22,351 INFO: dib-run-parts Running /tmp/tmpEcJxVu/extra-data.d/10-install-git
>2018-08-02 10:59:22,357 INFO: + yum -y install git
>2018-08-02 10:59:22,621 INFO: Loaded plugins: search-disabled-repos
>2018-08-02 10:59:23,096 INFO: Package git-1.8.3.1-14.el7_5.x86_64 already installed and latest version
>2018-08-02 10:59:23,096 INFO: Nothing to do
>2018-08-02 10:59:23,128 INFO: dib-run-parts 10-install-git completed
>2018-08-02 10:59:23,129 INFO: dib-run-parts Running /tmp/tmpEcJxVu/extra-data.d/20-manifest-dir
>2018-08-02 10:59:23,135 INFO: + set -eu
>2018-08-02 10:59:23,135 INFO: + set -o pipefail
>2018-08-02 10:59:23,135 INFO: + sudo mkdir -p /tmp/instack.KEu3J2/mnt//etc/dib-manifests
>2018-08-02 10:59:23,161 INFO: dib-run-parts 20-manifest-dir completed
>2018-08-02 10:59:23,161 INFO: dib-run-parts Running /tmp/tmpEcJxVu/extra-data.d/75-inject-element-manifest
>2018-08-02 10:59:23,167 INFO: + set -eu
>2018-08-02 10:59:23,167 INFO: + set -o pipefail
>2018-08-02 10:59:23,167 INFO: + DIB_ELEMENT_MANIFEST_PATH=/etc/dib-manifests/dib-element-manifest
>2018-08-02 10:59:23,167 INFO: ++ dirname /etc/dib-manifests/dib-element-manifest
>2018-08-02 10:59:23,169 INFO: + sudo mkdir -p /tmp/instack.KEu3J2/mnt//etc/dib-manifests
>2018-08-02 10:59:23,190 INFO: + sudo /bin/bash -c 'echo epel undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs | tr '\'' '\'' '\''\n'\'' > /tmp/instack.KEu3J2/mnt//etc/dib-manifests/dib-element-manifest'
>2018-08-02 10:59:23,216 INFO: dib-run-parts 75-inject-element-manifest completed
>2018-08-02 10:59:23,216 INFO: dib-run-parts Running /tmp/tmpEcJxVu/extra-data.d/98-source-repositories
>2018-08-02 10:59:23,242 INFO: Getting /root/.cache/image-create/source-repositories/repositories_flock: Thu Aug 2 10:59:23 EDT 2018 for /tmp/tmpEcJxVu/source-repository-puppet-modules
>2018-08-02 10:59:23,250 INFO: (0001 / 0081)
>2018-08-02 10:59:23,259 INFO: puppetlabs-apache install type not set to source
>2018-08-02 10:59:23,261 INFO: (0002 / 0081)
>2018-08-02 10:59:23,269 INFO: puppet-aodh install type not set to source
>2018-08-02 10:59:23,270 INFO: (0003 / 0081)
>2018-08-02 10:59:23,278 INFO: puppet-auditd install type not set to source
>2018-08-02 10:59:23,279 INFO: (0004 / 0081)
>2018-08-02 10:59:23,287 INFO: puppet-barbican install type not set to source
>2018-08-02 10:59:23,289 INFO: (0005 / 0081)
>2018-08-02 10:59:23,296 INFO: puppet-cassandra install type not set to source
>2018-08-02 10:59:23,298 INFO: (0006 / 0081)
>2018-08-02 10:59:23,305 INFO: puppet-ceph install type not set to source
>2018-08-02 10:59:23,307 INFO: (0007 / 0081)
>2018-08-02 10:59:23,314 INFO: puppet-ceilometer install type not set to source
>2018-08-02 10:59:23,316 INFO: (0008 / 0081)
>2018-08-02 10:59:23,323 INFO: puppet-congress install type not set to source
>2018-08-02 10:59:23,325 INFO: (0009 / 0081)
>2018-08-02 10:59:23,332 INFO: puppet-gnocchi install type not set to source
>2018-08-02 10:59:23,334 INFO: (0010 / 0081)
>2018-08-02 10:59:23,341 INFO: puppet-certmonger install type not set to source
>2018-08-02 10:59:23,343 INFO: (0011 / 0081)
>2018-08-02 10:59:23,350 INFO: puppet-cinder install type not set to source
>2018-08-02 10:59:23,352 INFO: (0012 / 0081)
>2018-08-02 10:59:23,359 INFO: puppet-common install type not set to source
>2018-08-02 10:59:23,361 INFO: (0013 / 0081)
>2018-08-02 10:59:23,368 INFO: puppet-contrail install type not set to source
>2018-08-02 10:59:23,370 INFO: (0014 / 0081)
>2018-08-02 10:59:23,378 INFO: puppetlabs-concat install type not set to source
>2018-08-02 10:59:23,379 INFO: (0015 / 0081)
>2018-08-02 10:59:23,387 INFO: puppetlabs-firewall install type not set to source
>2018-08-02 10:59:23,389 INFO: (0016 / 0081)
>2018-08-02 10:59:23,396 INFO: puppet-glance install type not set to source
>2018-08-02 10:59:23,398 INFO: (0017 / 0081)
>2018-08-02 10:59:23,406 INFO: puppet-gluster install type not set to source
>2018-08-02 10:59:23,407 INFO: (0018 / 0081)
>2018-08-02 10:59:23,415 INFO: puppetlabs-haproxy install type not set to source
>2018-08-02 10:59:23,416 INFO: (0019 / 0081)
>2018-08-02 10:59:23,423 INFO: puppet-heat install type not set to source
>2018-08-02 10:59:23,425 INFO: (0020 / 0081)
>2018-08-02 10:59:23,432 INFO: puppet-healthcheck install type not set to source
>2018-08-02 10:59:23,434 INFO: (0021 / 0081)
>2018-08-02 10:59:23,441 INFO: puppet-horizon install type not set to source
>2018-08-02 10:59:23,443 INFO: (0022 / 0081)
>2018-08-02 10:59:23,450 INFO: puppetlabs-inifile install type not set to source
>2018-08-02 10:59:23,452 INFO: (0023 / 0081)
>2018-08-02 10:59:23,460 INFO: puppet-kafka install type not set to source
>2018-08-02 10:59:23,462 INFO: (0024 / 0081)
>2018-08-02 10:59:23,470 INFO: puppet-keystone install type not set to source
>2018-08-02 10:59:23,471 INFO: (0025 / 0081)
>2018-08-02 10:59:23,479 INFO: puppet-manila install type not set to source
>2018-08-02 10:59:23,481 INFO: (0026 / 0081)
>2018-08-02 10:59:23,488 INFO: puppet-memcached install type not set to source
>2018-08-02 10:59:23,490 INFO: (0027 / 0081)
>2018-08-02 10:59:23,497 INFO: puppet-mistral install type not set to source
>2018-08-02 10:59:23,499 INFO: (0028 / 0081)
>2018-08-02 10:59:23,506 INFO: puppetlabs-mongodb install type not set to source
>2018-08-02 10:59:23,508 INFO: (0029 / 0081)
>2018-08-02 10:59:23,516 INFO: puppetlabs-mysql install type not set to source
>2018-08-02 10:59:23,517 INFO: (0030 / 0081)
>2018-08-02 10:59:23,524 INFO: puppet-neutron install type not set to source
>2018-08-02 10:59:23,526 INFO: (0031 / 0081)
>2018-08-02 10:59:23,534 INFO: puppet-nova install type not set to source
>2018-08-02 10:59:23,535 INFO: (0032 / 0081)
>2018-08-02 10:59:23,543 INFO: puppet-octavia install type not set to source
>2018-08-02 10:59:23,545 INFO: (0033 / 0081)
>2018-08-02 10:59:23,552 INFO: puppet-oslo install type not set to source
>2018-08-02 10:59:23,555 INFO: (0034 / 0081)
>2018-08-02 10:59:23,562 INFO: puppet-nssdb install type not set to source
>2018-08-02 10:59:23,564 INFO: (0035 / 0081)
>2018-08-02 10:59:23,571 INFO: puppet-opendaylight install type not set to source
>2018-08-02 10:59:23,573 INFO: (0036 / 0081)
>2018-08-02 10:59:23,580 INFO: puppet-ovn install type not set to source
>2018-08-02 10:59:23,582 INFO: (0037 / 0081)
>2018-08-02 10:59:23,589 INFO: puppet-panko install type not set to source
>2018-08-02 10:59:23,591 INFO: (0038 / 0081)
>2018-08-02 10:59:23,598 INFO: puppet-puppet install type not set to source
>2018-08-02 10:59:23,599 INFO: (0039 / 0081)
>2018-08-02 10:59:23,606 INFO: puppetlabs-rabbitmq install type not set to source
>2018-08-02 10:59:23,608 INFO: (0040 / 0081)
>2018-08-02 10:59:23,615 INFO: puppet-redis install type not set to source
>2018-08-02 10:59:23,617 INFO: (0041 / 0081)
>2018-08-02 10:59:23,624 INFO: puppetlabs-rsync install type not set to source
>2018-08-02 10:59:23,626 INFO: (0042 / 0081)
>2018-08-02 10:59:23,633 INFO: puppet-sahara install type not set to source
>2018-08-02 10:59:23,635 INFO: (0043 / 0081)
>2018-08-02 10:59:23,642 INFO: sensu-puppet install type not set to source
>2018-08-02 10:59:23,644 INFO: (0044 / 0081)
>2018-08-02 10:59:23,651 INFO: puppet-tacker install type not set to source
>2018-08-02 10:59:23,653 INFO: (0045 / 0081)
>2018-08-02 10:59:23,660 INFO: puppet-trove install type not set to source
>2018-08-02 10:59:23,662 INFO: (0046 / 0081)
>2018-08-02 10:59:23,669 INFO: puppet-ssh install type not set to source
>2018-08-02 10:59:23,670 INFO: (0047 / 0081)
>2018-08-02 10:59:23,677 INFO: puppet-staging install type not set to source
>2018-08-02 10:59:23,679 INFO: (0048 / 0081)
>2018-08-02 10:59:23,686 INFO: puppetlabs-stdlib install type not set to source
>2018-08-02 10:59:23,688 INFO: (0049 / 0081)
>2018-08-02 10:59:23,695 INFO: puppet-swift install type not set to source
>2018-08-02 10:59:23,697 INFO: (0050 / 0081)
>2018-08-02 10:59:23,704 INFO: puppetlabs-sysctl install type not set to source
>2018-08-02 10:59:23,706 INFO: (0051 / 0081)
>2018-08-02 10:59:23,713 INFO: puppet-timezone install type not set to source
>2018-08-02 10:59:23,715 INFO: (0052 / 0081)
>2018-08-02 10:59:23,722 INFO: puppet-uchiwa install type not set to source
>2018-08-02 10:59:23,724 INFO: (0053 / 0081)
>2018-08-02 10:59:23,730 INFO: puppetlabs-vcsrepo install type not set to source
>2018-08-02 10:59:23,732 INFO: (0054 / 0081)
>2018-08-02 10:59:23,740 INFO: puppet-vlan install type not set to source
>2018-08-02 10:59:23,741 INFO: (0055 / 0081)
>2018-08-02 10:59:23,749 INFO: puppet-vswitch install type not set to source
>2018-08-02 10:59:23,751 INFO: (0056 / 0081)
>2018-08-02 10:59:23,758 INFO: puppetlabs-xinetd install type not set to source
>2018-08-02 10:59:23,759 INFO: (0057 / 0081)
>2018-08-02 10:59:23,766 INFO: puppet-zookeeper install type not set to source
>2018-08-02 10:59:23,768 INFO: (0058 / 0081)
>2018-08-02 10:59:23,775 INFO: puppet-openstacklib install type not set to source
>2018-08-02 10:59:23,777 INFO: (0059 / 0081)
>2018-08-02 10:59:23,784 INFO: puppet-module-keepalived install type not set to source
>2018-08-02 10:59:23,786 INFO: (0060 / 0081)
>2018-08-02 10:59:23,793 INFO: puppetlabs-ntp install type not set to source
>2018-08-02 10:59:23,795 INFO: (0061 / 0081)
>2018-08-02 10:59:23,802 INFO: puppet-snmp install type not set to source
>2018-08-02 10:59:23,804 INFO: (0062 / 0081)
>2018-08-02 10:59:23,811 INFO: puppet-tripleo install type not set to source
>2018-08-02 10:59:23,812 INFO: (0063 / 0081)
>2018-08-02 10:59:23,819 INFO: puppet-ironic install type not set to source
>2018-08-02 10:59:23,821 INFO: (0064 / 0081)
>2018-08-02 10:59:23,828 INFO: puppet-ipaclient install type not set to source
>2018-08-02 10:59:23,829 INFO: (0065 / 0081)
>2018-08-02 10:59:23,836 INFO: puppetlabs-corosync install type not set to source
>2018-08-02 10:59:23,838 INFO: (0066 / 0081)
>2018-08-02 10:59:23,845 INFO: puppet-pacemaker install type not set to source
>2018-08-02 10:59:23,846 INFO: (0067 / 0081)
>2018-08-02 10:59:23,853 INFO: puppet_aviator install type not set to source
>2018-08-02 10:59:23,855 INFO: (0068 / 0081)
>2018-08-02 10:59:23,862 INFO: puppet-openstack_extras install type not set to source
>2018-08-02 10:59:23,863 INFO: (0069 / 0081)
>2018-08-02 10:59:23,870 INFO: konstantin-fluentd install type not set to source
>2018-08-02 10:59:23,871 INFO: (0070 / 0081)
>2018-08-02 10:59:23,878 INFO: puppet-elasticsearch install type not set to source
>2018-08-02 10:59:23,880 INFO: (0071 / 0081)
>2018-08-02 10:59:23,887 INFO: puppet-kibana3 install type not set to source
>2018-08-02 10:59:23,889 INFO: (0072 / 0081)
>2018-08-02 10:59:23,896 INFO: puppetlabs-git install type not set to source
>2018-08-02 10:59:23,898 INFO: (0073 / 0081)
>2018-08-02 10:59:23,904 INFO: puppet-datacat install type not set to source
>2018-08-02 10:59:23,906 INFO: (0074 / 0081)
>2018-08-02 10:59:23,913 INFO: puppet-kmod install type not set to source
>2018-08-02 10:59:23,914 INFO: (0075 / 0081)
>2018-08-02 10:59:23,922 INFO: puppet-zaqar install type not set to source
>2018-08-02 10:59:23,923 INFO: (0076 / 0081)
>2018-08-02 10:59:23,930 INFO: puppet-ec2api install type not set to source
>2018-08-02 10:59:23,932 INFO: (0077 / 0081)
>2018-08-02 10:59:23,941 INFO: puppet-qdr install type not set to source
>2018-08-02 10:59:23,943 INFO: (0078 / 0081)
>2018-08-02 10:59:23,950 INFO: puppet-systemd install type not set to source
>2018-08-02 10:59:23,952 INFO: (0079 / 0081)
>2018-08-02 10:59:23,959 INFO: puppet-etcd install type not set to source
>2018-08-02 10:59:23,960 INFO: (0080 / 0081)
>2018-08-02 10:59:23,967 INFO: puppet-veritas_hyperscale install type not set to source
>2018-08-02 10:59:23,969 INFO: (0081 / 0081)
>2018-08-02 10:59:23,975 INFO: puppet-ptp install type not set to source
>2018-08-02 10:59:23,978 INFO: dib-run-parts 98-source-repositories completed
>2018-08-02 10:59:23,979 INFO: dib-run-parts Running /tmp/tmpEcJxVu/extra-data.d/99-enable-install-types
>2018-08-02 10:59:23,985 INFO: + set -eu
>2018-08-02 10:59:23,986 INFO: + set -o pipefail
>2018-08-02 10:59:23,986 INFO: + declare -a SPECIFIED_ELEMS
>2018-08-02 10:59:23,986 INFO: + SPECIFIED_ELEMS[0]=
>2018-08-02 10:59:23,986 INFO: + PREFIX=DIB_INSTALLTYPE_
>2018-08-02 10:59:23,987 INFO: ++ env
>2018-08-02 10:59:23,987 INFO: ++ grep '^DIB_INSTALLTYPE_'
>2018-08-02 10:59:23,987 INFO: ++ cut -d= -f1
>2018-08-02 10:59:23,989 INFO: ++ echo ''
>2018-08-02 10:59:23,989 INFO: + INSTALL_TYPE_VARS=
>2018-08-02 10:59:23,990 INFO: ++ find /tmp/tmpEcJxVu/install.d -maxdepth 1 -name '*-package-install' -type d
>2018-08-02 10:59:23,993 INFO: + default_install_type_dirs=/tmp/tmpEcJxVu/install.d/puppet-modules-package-install
>2018-08-02 10:59:23,993 INFO: + for _install_dir in '$default_install_type_dirs'
>2018-08-02 10:59:23,993 INFO: + SUFFIX=-package-install
>2018-08-02 10:59:23,994 INFO: ++ basename /tmp/tmpEcJxVu/install.d/puppet-modules-package-install
>2018-08-02 10:59:23,995 INFO: + _install_dir=puppet-modules-package-install
>2018-08-02 10:59:23,995 INFO: + INSTALLDIRPREFIX=puppet-modules
>2018-08-02 10:59:23,995 INFO: + found=0
>2018-08-02 10:59:23,996 INFO: + '[' 0 = 0 ']'
>2018-08-02 10:59:23,996 INFO: + pushd /tmp/tmpEcJxVu/install.d
>2018-08-02 10:59:23,996 INFO: /tmp/tmpEcJxVu/install.d /home/stack
>2018-08-02 10:59:23,996 INFO: + ln -sf puppet-modules-package-install/75-puppet-modules-package .
>2018-08-02 10:59:23,997 INFO: + popd
>2018-08-02 10:59:23,998 INFO: /home/stack
>2018-08-02 10:59:24,000 INFO: dib-run-parts 99-enable-install-types completed
>2018-08-02 10:59:24,000 INFO: dib-run-parts ----------------------- PROFILING -----------------------
>2018-08-02 10:59:24,001 INFO: dib-run-parts
>2018-08-02 10:59:24,003 INFO: dib-run-parts Target: extra-data.d
>2018-08-02 10:59:24,003 INFO: dib-run-parts
>2018-08-02 10:59:24,003 INFO: dib-run-parts Script Seconds
>2018-08-02 10:59:24,004 INFO: dib-run-parts --------------------------------------- ----------
>2018-08-02 10:59:24,004 INFO: dib-run-parts
>2018-08-02 10:59:24,017 INFO: dib-run-parts 10-install-git 0.775
>2018-08-02 10:59:24,026 INFO: dib-run-parts 20-manifest-dir 0.030
>2018-08-02 10:59:24,035 INFO: dib-run-parts 75-inject-element-manifest 0.052
>2018-08-02 10:59:24,044 INFO: dib-run-parts 98-source-repositories 0.760
>2018-08-02 10:59:24,053 INFO: dib-run-parts 99-enable-install-types 0.019
>2018-08-02 10:59:24,056 INFO: dib-run-parts
>2018-08-02 10:59:24,056 INFO: dib-run-parts --------------------- END PROFILING ---------------------
>2018-08-02 10:59:24,059 INFO: INFO: 2018-08-02 10:59:24,057 -- ############### End stdout/stderr logging ###############
>2018-08-02 10:59:24,059 INFO: INFO: 2018-08-02 10:59:24,057 -- Running hook pre-install
>2018-08-02 10:59:24,060 INFO: INFO: 2018-08-02 10:59:24,057 -- Skipping hook pre-install, the hook directory doesn't exist at /tmp/tmpEcJxVu/pre-install.d
>2018-08-02 10:59:24,060 INFO: INFO: 2018-08-02 10:59:24,057 -- Running hook install
>2018-08-02 10:59:24,060 INFO: INFO: 2018-08-02 10:59:24,058 -- ############### Begin stdout/stderr logging ###############
>2018-08-02 10:59:24,079 INFO: dib-run-parts Sourcing environment file /tmp/tmpEcJxVu/install.d/../environment.d/00-dib-v2-env
>2018-08-02 10:59:24,083 INFO: + source /tmp/tmpEcJxVu/install.d/../environment.d/00-dib-v2-env
>2018-08-02 10:59:24,084 INFO: ++ export 'IMAGE_ELEMENT=epel undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs'
>2018-08-02 10:59:24,084 INFO: ++ IMAGE_ELEMENT='epel undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs'
>2018-08-02 10:59:24,085 INFO: ++ export 'IMAGE_ELEMENT_YAML={cache-url: /usr/share/diskimage-builder/elements/cache-url, dib-python: /usr/share/diskimage-builder/elements/dib-python,
>2018-08-02 10:59:24,085 INFO: element-manifest: /usr/share/diskimage-builder/elements/element-manifest, enable-packages-install: /usr/share/tripleo-image-elements/enable-packages-install,
>2018-08-02 10:59:24,085 INFO: epel: /usr/share/diskimage-builder/elements/epel, hiera: /usr/share/tripleo-puppet-elements/hiera,
>2018-08-02 10:59:24,086 INFO: install-bin: /usr/share/diskimage-builder/elements/install-bin, install-types: /usr/share/diskimage-builder/elements/install-types,
>2018-08-02 10:59:24,086 INFO: manifests: /usr/share/diskimage-builder/elements/manifests, os-apply-config: /usr/share/tripleo-image-elements/os-apply-config,
>2018-08-02 10:59:24,086 INFO: os-refresh-config: /usr/share/tripleo-image-elements/os-refresh-config, package-installs: /usr/share/diskimage-builder/elements/package-installs,
>2018-08-02 10:59:24,087 INFO: pip-and-virtualenv: /usr/share/diskimage-builder/elements/pip-and-virtualenv, pip-manifest: /usr/share/tripleo-image-elements/pip-manifest,
>2018-08-02 10:59:24,087 INFO: pkg-map: /usr/share/diskimage-builder/elements/pkg-map, puppet: /usr/share/tripleo-puppet-elements/puppet,
>2018-08-02 10:59:24,087 INFO: puppet-modules: /usr/share/tripleo-puppet-elements/puppet-modules, puppet-stack-config: /usr/share/instack-undercloud/puppet-stack-config,
>2018-08-02 10:59:24,088 INFO: source-repositories: /usr/share/diskimage-builder/elements/source-repositories,
>2018-08-02 10:59:24,088 INFO: undercloud-install: /usr/share/instack-undercloud/undercloud-install}
>2018-08-02 10:59:24,088 INFO: '
>2018-08-02 10:59:24,088 INFO: ++ IMAGE_ELEMENT_YAML='{cache-url: /usr/share/diskimage-builder/elements/cache-url, dib-python: /usr/share/diskimage-builder/elements/dib-python,
>2018-08-02 10:59:24,089 INFO: element-manifest: /usr/share/diskimage-builder/elements/element-manifest, enable-packages-install: /usr/share/tripleo-image-elements/enable-packages-install,
>2018-08-02 10:59:24,089 INFO: epel: /usr/share/diskimage-builder/elements/epel, hiera: /usr/share/tripleo-puppet-elements/hiera,
>2018-08-02 10:59:24,089 INFO: install-bin: /usr/share/diskimage-builder/elements/install-bin, install-types: /usr/share/diskimage-builder/elements/install-types,
>2018-08-02 10:59:24,090 INFO: manifests: /usr/share/diskimage-builder/elements/manifests, os-apply-config: /usr/share/tripleo-image-elements/os-apply-config,
>2018-08-02 10:59:24,090 INFO: os-refresh-config: /usr/share/tripleo-image-elements/os-refresh-config, package-installs:
/usr/share/diskimage-builder/elements/package-installs, >2018-08-02 10:59:24,091 INFO: pip-and-virtualenv: /usr/share/diskimage-builder/elements/pip-and-virtualenv, pip-manifest: /usr/share/tripleo-image-elements/pip-manifest, >2018-08-02 10:59:24,091 INFO: pkg-map: /usr/share/diskimage-builder/elements/pkg-map, puppet: /usr/share/tripleo-puppet-elements/puppet, >2018-08-02 10:59:24,091 INFO: puppet-modules: /usr/share/tripleo-puppet-elements/puppet-modules, puppet-stack-config: /usr/share/instack-undercloud/puppet-stack-config, >2018-08-02 10:59:24,092 INFO: source-repositories: /usr/share/diskimage-builder/elements/source-repositories, >2018-08-02 10:59:24,092 INFO: undercloud-install: /usr/share/instack-undercloud/undercloud-install} >2018-08-02 10:59:24,092 INFO: ' >2018-08-02 10:59:24,092 INFO: ++ export -f get_image_element_array >2018-08-02 10:59:24,092 INFO: + set +o xtrace >2018-08-02 10:59:24,093 INFO: dib-run-parts Sourcing environment file /tmp/tmpEcJxVu/install.d/../environment.d/01-export-install-types.bash >2018-08-02 10:59:24,093 INFO: + source /tmp/tmpEcJxVu/install.d/../environment.d/01-export-install-types.bash >2018-08-02 10:59:24,093 INFO: ++ export DIB_DEFAULT_INSTALLTYPE=package >2018-08-02 10:59:24,093 INFO: ++ DIB_DEFAULT_INSTALLTYPE=package >2018-08-02 10:59:24,094 INFO: + set +o xtrace >2018-08-02 10:59:24,094 INFO: dib-run-parts Sourcing environment file /tmp/tmpEcJxVu/install.d/../environment.d/01-puppet-module-pins.sh >2018-08-02 10:59:24,094 INFO: + source /tmp/tmpEcJxVu/install.d/../environment.d/01-puppet-module-pins.sh >2018-08-02 10:59:24,094 INFO: ++ export DIB_REPOREF_puppetlabs_ntp=4.2.x >2018-08-02 10:59:24,094 INFO: ++ DIB_REPOREF_puppetlabs_ntp=4.2.x >2018-08-02 10:59:24,095 INFO: + set +o xtrace >2018-08-02 10:59:24,095 INFO: dib-run-parts Sourcing environment file /tmp/tmpEcJxVu/install.d/../environment.d/02-puppet-modules-install-types.sh >2018-08-02 10:59:24,097 INFO: + source 
/tmp/tmpEcJxVu/install.d/../environment.d/02-puppet-modules-install-types.sh >2018-08-02 10:59:24,097 INFO: ++ DIB_DEFAULT_INSTALLTYPE=package >2018-08-02 10:59:24,097 INFO: ++ DIB_INSTALLTYPE_puppet_modules=package >2018-08-02 10:59:24,098 INFO: ++ '[' package = source ']' >2018-08-02 10:59:24,098 INFO: + set +o xtrace >2018-08-02 10:59:24,098 INFO: dib-run-parts Sourcing environment file /tmp/tmpEcJxVu/install.d/../environment.d/10-os-apply-config-venv-dir.bash >2018-08-02 10:59:24,101 INFO: + source /tmp/tmpEcJxVu/install.d/../environment.d/10-os-apply-config-venv-dir.bash >2018-08-02 10:59:24,101 INFO: ++ '[' -z '' ']' >2018-08-02 10:59:24,101 INFO: ++ export OS_APPLY_CONFIG_VENV_DIR=/opt/stack/venvs/os-apply-config >2018-08-02 10:59:24,102 INFO: ++ OS_APPLY_CONFIG_VENV_DIR=/opt/stack/venvs/os-apply-config >2018-08-02 10:59:24,102 INFO: + set +o xtrace >2018-08-02 10:59:24,102 INFO: dib-run-parts Sourcing environment file /tmp/tmpEcJxVu/install.d/../environment.d/14-manifests >2018-08-02 10:59:24,105 INFO: + source /tmp/tmpEcJxVu/install.d/../environment.d/14-manifests >2018-08-02 10:59:24,105 INFO: ++ export DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests >2018-08-02 10:59:24,106 INFO: ++ DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests >2018-08-02 10:59:24,106 INFO: ++ export DIB_MANIFEST_SAVE_DIR=instack.d/ >2018-08-02 10:59:24,106 INFO: ++ DIB_MANIFEST_SAVE_DIR=instack.d/ >2018-08-02 10:59:24,106 INFO: + set +o xtrace >2018-08-02 10:59:24,106 INFO: dib-run-parts Running /tmp/tmpEcJxVu/install.d/02-puppet-stack-config >2018-08-02 10:59:25,125 INFO: dib-run-parts 02-puppet-stack-config completed >2018-08-02 10:59:25,126 INFO: dib-run-parts Running /tmp/tmpEcJxVu/install.d/10-hiera-yaml-symlink >2018-08-02 10:59:25,131 INFO: + set -o pipefail >2018-08-02 10:59:25,131 INFO: + ln -f -s /etc/puppet/hiera.yaml /etc/hiera.yaml >2018-08-02 10:59:25,136 INFO: dib-run-parts 10-hiera-yaml-symlink completed >2018-08-02 10:59:25,136 INFO: dib-run-parts Running 
/tmp/tmpEcJxVu/install.d/10-puppet-stack-config-puppet-module >2018-08-02 10:59:25,142 INFO: + set -o pipefail >2018-08-02 10:59:25,142 INFO: + mkdir -p /etc/puppet/manifests >2018-08-02 10:59:25,146 INFO: ++ dirname /tmp/tmpEcJxVu/install.d/10-puppet-stack-config-puppet-module >2018-08-02 10:59:25,147 INFO: + cp /tmp/tmpEcJxVu/install.d/../puppet-stack-config.pp /etc/puppet/manifests/puppet-stack-config.pp >2018-08-02 10:59:25,152 INFO: dib-run-parts 10-puppet-stack-config-puppet-module completed >2018-08-02 10:59:25,152 INFO: dib-run-parts Running /tmp/tmpEcJxVu/install.d/11-create-template-root >2018-08-02 10:59:25,159 INFO: ++ os-apply-config --print-templates >2018-08-02 10:59:25,395 INFO: + TEMPLATE_ROOT=/usr/libexec/os-apply-config/templates >2018-08-02 10:59:25,395 INFO: + mkdir -p /usr/libexec/os-apply-config/templates >2018-08-02 10:59:25,401 INFO: dib-run-parts 11-create-template-root completed >2018-08-02 10:59:25,401 INFO: dib-run-parts Running /tmp/tmpEcJxVu/install.d/11-hiera-orc-install >2018-08-02 10:59:25,406 INFO: + set -o pipefail >2018-08-02 10:59:25,406 INFO: + mkdir -p /usr/libexec/os-refresh-config/configure.d/ >2018-08-02 10:59:25,410 INFO: ++ dirname /tmp/tmpEcJxVu/install.d/11-hiera-orc-install >2018-08-02 10:59:25,412 INFO: + install -m 0755 -o root -g root /tmp/tmpEcJxVu/install.d/../10-hiera-disable /usr/libexec/os-refresh-config/configure.d/10-hiera-disable >2018-08-02 10:59:25,423 INFO: ++ dirname /tmp/tmpEcJxVu/install.d/11-hiera-orc-install >2018-08-02 10:59:25,425 INFO: + install -m 0755 -o root -g root /tmp/tmpEcJxVu/install.d/../40-hiera-datafiles /usr/libexec/os-refresh-config/configure.d/40-hiera-datafiles >2018-08-02 10:59:25,436 INFO: dib-run-parts 11-hiera-orc-install completed >2018-08-02 10:59:25,436 INFO: dib-run-parts Running /tmp/tmpEcJxVu/install.d/75-puppet-modules-package >2018-08-02 10:59:25,442 INFO: + find /opt/stack/puppet-modules/ -mindepth 1 >2018-08-02 10:59:25,443 INFO: + read >2018-08-02 10:59:25,452 INFO: 
+ ln -f -s /usr/share/openstack-puppet/modules/aodh /usr/share/openstack-puppet/modules/apache /usr/share/openstack-puppet/modules/archive /usr/share/openstack-puppet/modules/auditd /usr/share/openstack-puppet/modules/barbican /usr/share/openstack-puppet/modules/cassandra /usr/share/openstack-puppet/modules/ceilometer /usr/share/openstack-puppet/modules/ceph /usr/share/openstack-puppet/modules/certmonger /usr/share/openstack-puppet/modules/cinder /usr/share/openstack-puppet/modules/collectd /usr/share/openstack-puppet/modules/concat /usr/share/openstack-puppet/modules/contrail /usr/share/openstack-puppet/modules/corosync /usr/share/openstack-puppet/modules/datacat /usr/share/openstack-puppet/modules/designate /usr/share/openstack-puppet/modules/dns /usr/share/openstack-puppet/modules/ec2api /usr/share/openstack-puppet/modules/elasticsearch /usr/share/openstack-puppet/modules/fdio /usr/share/openstack-puppet/modules/firewall /usr/share/openstack-puppet/modules/fluentd /usr/share/openstack-puppet/modules/git /usr/share/openstack-puppet/modules/glance /usr/share/openstack-puppet/modules/gnocchi /usr/share/openstack-puppet/modules/haproxy /usr/share/openstack-puppet/modules/heat /usr/share/openstack-puppet/modules/horizon /usr/share/openstack-puppet/modules/inifile /usr/share/openstack-puppet/modules/ipaclient /usr/share/openstack-puppet/modules/ironic /usr/share/openstack-puppet/modules/java /usr/share/openstack-puppet/modules/kafka /usr/share/openstack-puppet/modules/keepalived /usr/share/openstack-puppet/modules/keystone /usr/share/openstack-puppet/modules/kibana3 /usr/share/openstack-puppet/modules/kmod /usr/share/openstack-puppet/modules/manila /usr/share/openstack-puppet/modules/memcached /usr/share/openstack-puppet/modules/midonet /usr/share/openstack-puppet/modules/mistral /usr/share/openstack-puppet/modules/module-data /usr/share/openstack-puppet/modules/mysql /usr/share/openstack-puppet/modules/n1k_vsm /usr/share/openstack-puppet/modules/neutron 
/usr/share/openstack-puppet/modules/nova /usr/share/openstack-puppet/modules/nssdb /usr/share/openstack-puppet/modules/ntp /usr/share/openstack-puppet/modules/octavia /usr/share/openstack-puppet/modules/opendaylight /usr/share/openstack-puppet/modules/openstack_extras /usr/share/openstack-puppet/modules/openstacklib /usr/share/openstack-puppet/modules/oslo /usr/share/openstack-puppet/modules/ovn /usr/share/openstack-puppet/modules/pacemaker /usr/share/openstack-puppet/modules/panko /usr/share/openstack-puppet/modules/rabbitmq /usr/share/openstack-puppet/modules/redis /usr/share/openstack-puppet/modules/remote /usr/share/openstack-puppet/modules/rsync /usr/share/openstack-puppet/modules/sahara /usr/share/openstack-puppet/modules/sensu /usr/share/openstack-puppet/modules/snmp /usr/share/openstack-puppet/modules/ssh /usr/share/openstack-puppet/modules/staging /usr/share/openstack-puppet/modules/stdlib /usr/share/openstack-puppet/modules/swift /usr/share/openstack-puppet/modules/sysctl /usr/share/openstack-puppet/modules/systemd /usr/share/openstack-puppet/modules/timezone /usr/share/openstack-puppet/modules/tomcat /usr/share/openstack-puppet/modules/tripleo /usr/share/openstack-puppet/modules/trove /usr/share/openstack-puppet/modules/uchiwa /usr/share/openstack-puppet/modules/vcsrepo /usr/share/openstack-puppet/modules/veritas_hyperscale /usr/share/openstack-puppet/modules/vswitch /usr/share/openstack-puppet/modules/xinetd /usr/share/openstack-puppet/modules/zaqar /usr/share/openstack-puppet/modules/zookeeper /etc/puppet/modules/ >2018-08-02 10:59:25,461 INFO: dib-run-parts 75-puppet-modules-package completed >2018-08-02 10:59:25,461 INFO: dib-run-parts Running /tmp/tmpEcJxVu/install.d/99-install-config-templates >2018-08-02 10:59:25,467 INFO: ++ os-apply-config --print-templates >2018-08-02 10:59:25,722 INFO: + TEMPLATE_ROOT=/usr/libexec/os-apply-config/templates >2018-08-02 10:59:25,723 INFO: ++ dirname /tmp/tmpEcJxVu/install.d/99-install-config-templates 
>2018-08-02 10:59:25,724 INFO: + TEMPLATE_SOURCE=/tmp/tmpEcJxVu/install.d/../os-apply-config >2018-08-02 10:59:25,725 INFO: + mkdir -p /usr/libexec/os-apply-config/templates >2018-08-02 10:59:25,727 INFO: + '[' -d /tmp/tmpEcJxVu/install.d/../os-apply-config ']' >2018-08-02 10:59:25,727 INFO: + rsync '--exclude=.*.swp' -Cr /tmp/tmpEcJxVu/install.d/../os-apply-config/ /usr/libexec/os-apply-config/templates/ >2018-08-02 10:59:25,740 INFO: dib-run-parts 99-install-config-templates completed >2018-08-02 10:59:25,740 INFO: dib-run-parts Running /tmp/tmpEcJxVu/install.d/99-os-refresh-config-install-scripts >2018-08-02 10:59:25,746 INFO: ++ os-refresh-config --print-base >2018-08-02 10:59:25,825 INFO: + SCRIPT_BASE=/usr/libexec/os-refresh-config >2018-08-02 10:59:25,826 INFO: ++ dirname /tmp/tmpEcJxVu/install.d/99-os-refresh-config-install-scripts >2018-08-02 10:59:25,828 INFO: + SCRIPT_SOURCE=/tmp/tmpEcJxVu/install.d/../os-refresh-config >2018-08-02 10:59:25,829 INFO: + rsync -r /tmp/tmpEcJxVu/install.d/../os-refresh-config/ /usr/libexec/os-refresh-config/ >2018-08-02 10:59:25,838 INFO: dib-run-parts 99-os-refresh-config-install-scripts completed >2018-08-02 10:59:25,839 INFO: dib-run-parts ----------------------- PROFILING ----------------------- >2018-08-02 10:59:25,839 INFO: dib-run-parts >2018-08-02 10:59:25,842 INFO: dib-run-parts Target: install.d >2018-08-02 10:59:25,842 INFO: dib-run-parts >2018-08-02 10:59:25,842 INFO: dib-run-parts Script Seconds >2018-08-02 10:59:25,842 INFO: dib-run-parts --------------------------------------- ---------- >2018-08-02 10:59:25,842 INFO: dib-run-parts >2018-08-02 10:59:25,855 INFO: dib-run-parts 02-puppet-stack-config 1.017 >2018-08-02 10:59:25,864 INFO: dib-run-parts 10-hiera-yaml-symlink 0.009 >2018-08-02 10:59:25,872 INFO: dib-run-parts 10-puppet-stack-config-puppet-module 0.014 >2018-08-02 10:59:25,881 INFO: dib-run-parts 11-create-template-root 0.246 >2018-08-02 10:59:25,890 INFO: dib-run-parts 11-hiera-orc-install 0.033 
>2018-08-02 10:59:25,898 INFO: dib-run-parts 75-puppet-modules-package 0.023 >2018-08-02 10:59:25,906 INFO: dib-run-parts 99-install-config-templates 0.276 >2018-08-02 10:59:25,915 INFO: dib-run-parts 99-os-refresh-config-install-scripts 0.097 >2018-08-02 10:59:25,919 INFO: dib-run-parts >2018-08-02 10:59:25,919 INFO: dib-run-parts --------------------- END PROFILING --------------------- >2018-08-02 10:59:25,920 INFO: INFO: 2018-08-02 10:59:25,919 -- ############### End stdout/stderr logging ############### >2018-08-02 10:59:25,920 INFO: INFO: 2018-08-02 10:59:25,920 -- Running hook post-install >2018-08-02 10:59:25,921 INFO: INFO: 2018-08-02 10:59:25,920 -- Skipping hook post-install, the hook directory doesn't exist at /tmp/tmpEcJxVu/post-install.d >2018-08-02 10:59:25,926 INFO: INFO: 2018-08-02 10:59:25,925 -- Ending run of instack. >2018-08-02 10:59:25,946 INFO: Instack completed successfully >2018-08-02 10:59:25,946 INFO: Running os-refresh-config >2018-08-02 10:59:26,042 INFO: [2018-08-02 10:59:26,041] (os-refresh-config) [INFO] Starting phase configure >2018-08-02 10:59:26,059 INFO: dib-run-parts Thu Aug 2 10:59:26 EDT 2018 Running /usr/libexec/os-refresh-config/configure.d/10-hiera-disable >2018-08-02 10:59:26,063 INFO: + '[' -f /etc/puppet/hiera.yaml ']' >2018-08-02 10:59:26,063 INFO: + grep yaml /etc/puppet/hiera.yaml >2018-08-02 10:59:26,069 INFO: dib-run-parts Thu Aug 2 10:59:26 EDT 2018 10-hiera-disable completed >2018-08-02 10:59:26,071 INFO: dib-run-parts Thu Aug 2 10:59:26 EDT 2018 Running /usr/libexec/os-refresh-config/configure.d/20-os-apply-config >2018-08-02 10:59:26,292 INFO: [2018/08/02 10:59:26 AM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json >2018-08-02 10:59:26,307 INFO: [2018/08/02 10:59:26 AM] [INFO] writing /etc/os-net-config/config.json >2018-08-02 10:59:26,308 INFO: [2018/08/02 10:59:26 AM] [INFO] writing /root/stackrc >2018-08-02 10:59:26,309 INFO: [2018/08/02 10:59:26 AM] [INFO] writing 
/root/tripleo-undercloud-passwords >2018-08-02 10:59:26,309 INFO: [2018/08/02 10:59:26 AM] [INFO] writing /etc/puppet/hiera.yaml >2018-08-02 10:59:26,310 INFO: [2018/08/02 10:59:26 AM] [INFO] writing /var/opt/undercloud-stack/masquerade >2018-08-02 10:59:26,311 INFO: [2018/08/02 10:59:26 AM] [INFO] writing /etc/puppet/hieradata/RedHat.yaml >2018-08-02 10:59:26,312 INFO: [2018/08/02 10:59:26 AM] [INFO] writing /etc/puppet/hieradata/CentOS.yaml >2018-08-02 10:59:26,312 INFO: [2018/08/02 10:59:26 AM] [INFO] writing /var/run/heat-config/heat-config >2018-08-02 10:59:26,313 INFO: [2018/08/02 10:59:26 AM] [INFO] writing /etc/os-collect-config.conf >2018-08-02 10:59:26,314 INFO: [2018/08/02 10:59:26 AM] [INFO] success >2018-08-02 10:59:26,331 INFO: dib-run-parts Thu Aug 2 10:59:26 EDT 2018 20-os-apply-config completed >2018-08-02 10:59:26,333 INFO: dib-run-parts Thu Aug 2 10:59:26 EDT 2018 Running /usr/libexec/os-refresh-config/configure.d/30-reload-keepalived >2018-08-02 10:59:26,337 INFO: + systemctl is-enabled keepalived >2018-08-02 10:59:26,354 INFO: enabled >2018-08-02 10:59:26,354 INFO: + systemctl reload keepalived >2018-08-02 10:59:26,403 INFO: dib-run-parts Thu Aug 2 10:59:26 EDT 2018 30-reload-keepalived completed >2018-08-02 10:59:26,405 INFO: dib-run-parts Thu Aug 2 10:59:26 EDT 2018 Running /usr/libexec/os-refresh-config/configure.d/40-hiera-datafiles >2018-08-02 10:59:26,657 INFO: [2018/08/02 10:59:26 AM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json >2018-08-02 10:59:26,683 INFO: dib-run-parts Thu Aug 2 10:59:26 EDT 2018 40-hiera-datafiles completed >2018-08-02 10:59:26,686 INFO: dib-run-parts Thu Aug 2 10:59:26 EDT 2018 Running /usr/libexec/os-refresh-config/configure.d/50-puppet-stack-config >2018-08-02 10:59:26,690 INFO: + set -o pipefail >2018-08-02 10:59:26,691 INFO: + puppet_apply puppet apply --summarize --detailed-exitcodes /etc/puppet/manifests/puppet-stack-config.pp >2018-08-02 10:59:26,691 INFO: + set +e 
>2018-08-02 10:59:26,691 INFO: + puppet apply --summarize --detailed-exitcodes /etc/puppet/manifests/puppet-stack-config.pp >2018-08-02 10:59:36,383 INFO: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) >2018-08-02 10:59:36,848 INFO: Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend >2018-08-02 10:59:36,988 INFO: Warning: ModuleLoader: module 'openstacklib' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-08-02 10:59:36,988 INFO: (file & line not available) >2018-08-02 10:59:37,425 INFO: Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend >2018-08-02 10:59:37,513 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function, >2018-08-02 10:59:37,513 INFO: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 54]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29] >2018-08-02 10:59:37,514 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation') >2018-08-02 10:59:37,555 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function, >2018-08-02 10:59:37,556 INFO: with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 55]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29] >2018-08-02 10:59:37,556 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation') >2018-08-02 10:59:37,634 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function, >2018-08-02 10:59:37,635 INFO: with Stdlib::Compat::String. 
There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 56]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29] >2018-08-02 10:59:37,636 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation') >2018-08-02 10:59:37,660 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function, >2018-08-02 10:59:37,661 INFO: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 66]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29] >2018-08-02 10:59:37,661 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation') >2018-08-02 10:59:37,665 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function, >2018-08-02 10:59:37,666 INFO: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 68]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29] >2018-08-02 10:59:37,666 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation') >2018-08-02 10:59:37,676 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function, >2018-08-02 10:59:37,677 INFO: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 76]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29] >2018-08-02 10:59:37,677 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation') >2018-08-02 10:59:38,142 INFO: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. 
They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/rabbitmq/manifests/install/rabbitmqadmin.pp", 37]:["/etc/puppet/modules/rabbitmq/manifests/init.pp", 316] >2018-08-02 10:59:38,142 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation') >2018-08-02 10:59:38,381 INFO: Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked. >2018-08-02 10:59:39,394 INFO: Warning: notify is a metaparam; this value will inherit to all contained resources in the keepalived::instance definition >2018-08-02 10:59:39,434 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function, >2018-08-02 10:59:39,435 INFO: with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp", 103]:["/etc/puppet/manifests/puppet-stack-config.pp", 97] >2018-08-02 10:59:39,435 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation') >2018-08-02 10:59:39,569 INFO: Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-08-02 10:59:39,570 INFO: (file & line not available) >2018-08-02 10:59:40,059 INFO: Warning: ModuleLoader: module 'keystone' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-08-02 10:59:40,060 INFO: (file & line not available) >2018-08-02 10:59:40,971 INFO: Warning: ModuleLoader: module 'glance' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules >2018-08-02 10:59:40,972 INFO: (file & line not available) >2018-08-02 10:59:41,233 INFO: Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-08-02 10:59:41,234 INFO: (file & line not available) >2018-08-02 10:59:41,461 INFO: Warning: Unknown variable: '::nova::db::mysql_api::setup_cell0'. at /etc/puppet/modules/nova/manifests/db/mysql.pp:53:28 >2018-08-02 10:59:41,506 INFO: Warning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-08-02 10:59:41,506 INFO: (file & line not available) >2018-08-02 10:59:42,476 INFO: Warning: ModuleLoader: module 'heat' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-08-02 10:59:42,476 INFO: (file & line not available) >2018-08-02 10:59:42,560 INFO: Warning: ModuleLoader: module 'ironic' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-08-02 10:59:42,561 INFO: (file & line not available) >2018-08-02 10:59:42,794 INFO: Warning: ModuleLoader: module 'swift' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-08-02 10:59:42,795 INFO: (file & line not available) >2018-08-02 10:59:43,170 INFO: Warning: Scope(Class[Keystone]): keystone::rabbit_host, keystone::rabbit_hosts, keystone::rabbit_password, keystone::rabbit_port, keystone::rabbit_userid and keystone::rabbit_virtual_host are deprecated. Please use keystone::default_transport_url instead. 
>2018-08-02 10:59:45,243 INFO: Warning: Scope(Class[Glance::Notify::Rabbitmq]): glance::notify::rabbitmq::rabbit_host, glance::notify::rabbitmq::rabbit_hosts, glance::notify::rabbitmq::rabbit_password, glance::notify::rabbitmq::rabbit_port, glance::notify::rabbitmq::rabbit_userid and glance::notify::rabbitmq::rabbit_virtual_host are deprecated. Please use glance::notify::rabbitmq::default_transport_url instead. >2018-08-02 10:59:45,349 INFO: Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release >2018-08-02 10:59:45,350 INFO: Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release >2018-08-02 10:59:45,660 INFO: Warning: ModuleLoader: module 'cinder' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-08-02 10:59:45,661 INFO: (file & line not available) >2018-08-02 10:59:46,104 INFO: Warning: Unknown variable: 'until_complete_real'. at /etc/puppet/modules/nova/manifests/cron/archive_deleted_rows.pp:77:82 >2018-08-02 10:59:46,162 INFO: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/nova/manifests/scheduler/filter.pp", 140]:["/etc/puppet/manifests/puppet-stack-config.pp", 396] >2018-08-02 10:59:46,162 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation') >2018-08-02 10:59:46,439 INFO: Warning: Scope(Class[Neutron]): neutron::rabbit_host, neutron::rabbit_hosts, neutron::rabbit_password, neutron::rabbit_port, neutron::rabbit_user, neutron::rabbit_virtual_host and neutron::rpc_backend are deprecated. Please use neutron::default_transport_url instead. 
>2018-08-02 10:59:47,619 INFO: Warning: Unknown variable: 'methods_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:100:56 >2018-08-02 10:59:47,619 INFO: Warning: Unknown variable: 'incoming_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:101:56 >2018-08-02 10:59:47,620 INFO: Warning: Unknown variable: 'incoming_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:102:56 >2018-08-02 10:59:47,620 INFO: Warning: Unknown variable: 'outgoing_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:103:56 >2018-08-02 10:59:47,620 INFO: Warning: Unknown variable: 'outgoing_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:104:56 >2018-08-02 10:59:47,710 INFO: Warning: Scope(Class[Swift::Storage::All]): The default port for the object storage server has changed from 6000 to 6200 and will be changed in a later release >2018-08-02 10:59:47,711 INFO: Warning: Scope(Class[Swift::Storage::All]): The default port for the container storage server has changed from 6001 to 6201 and will be changed in a later release >2018-08-02 10:59:47,711 INFO: Warning: Scope(Class[Swift::Storage::All]): The default port for the account storage server has changed from 6002 to 6202 and will be changed in a later release >2018-08-02 10:59:48,187 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function, >2018-08-02 10:59:48,188 INFO: with Stdlib::Compat::Integer. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/heat/manifests/wsgi/apache_api_cfn.pp", 125]:["/etc/puppet/manifests/puppet-stack-config.pp", 517] >2018-08-02 10:59:48,188 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation') >2018-08-02 10:59:48,600 INFO: Warning: Unknown variable: '::ironic::conductor::swift_account'. 
at /etc/puppet/modules/ironic/manifests/glance.pp:117:30 >2018-08-02 10:59:48,601 INFO: Warning: Unknown variable: '::ironic::conductor::swift_temp_url_key'. at /etc/puppet/modules/ironic/manifests/glance.pp:118:35 >2018-08-02 10:59:48,601 INFO: Warning: Unknown variable: '::ironic::conductor::swift_temp_url_duration'. at /etc/puppet/modules/ironic/manifests/glance.pp:119:40 >2018-08-02 10:59:48,629 INFO: Warning: Unknown variable: '::ironic::api::neutron_url'. at /etc/puppet/modules/ironic/manifests/neutron.pp:58:29 >2018-08-02 10:59:49,822 INFO: Warning: ModuleLoader: module 'mistral' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-08-02 10:59:49,823 INFO: (file & line not available) >2018-08-02 10:59:50,062 INFO: Warning: Unknown variable: '::mistral::database_idle_timeout'. at /etc/puppet/modules/mistral/manifests/db.pp:57:40 >2018-08-02 10:59:50,063 INFO: Warning: Unknown variable: '::mistral::database_min_pool_size'. at /etc/puppet/modules/mistral/manifests/db.pp:58:40 >2018-08-02 10:59:50,065 INFO: Warning: Unknown variable: '::mistral::database_max_pool_size'. at /etc/puppet/modules/mistral/manifests/db.pp:59:40 >2018-08-02 10:59:50,066 INFO: Warning: Unknown variable: '::mistral::database_max_retries'. at /etc/puppet/modules/mistral/manifests/db.pp:60:40 >2018-08-02 10:59:50,066 INFO: Warning: Unknown variable: '::mistral::database_retry_interval'. at /etc/puppet/modules/mistral/manifests/db.pp:61:40 >2018-08-02 10:59:50,067 INFO: Warning: Unknown variable: '::mistral::database_max_overflow'. at /etc/puppet/modules/mistral/manifests/db.pp:62:40 >2018-08-02 10:59:50,137 INFO: Warning: Scope(Class[Mistral]): mistral::rabbit_host, mistral::rabbit_hosts, mistral::rabbit_password, mistral::rabbit_port, mistral::rabbit_userid, mistral::rabbit_virtual_host and mistral::rpc_backend are deprecated. Please use mistral::default_transport_url instead. 
>2018-08-02 10:59:50,336 INFO: Warning: ModuleLoader: module 'zaqar' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-08-02 10:59:50,337 INFO: (file & line not available)
>2018-08-02 10:59:52,094 INFO: Warning: ModuleLoader: module 'oslo' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-08-02 10:59:52,094 INFO: (file & line not available)
>2018-08-02 10:59:52,246 INFO: Warning: Scope(Oslo::Messaging::Rabbit[keystone_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url. Please use oslo::messaging::default::transport_url instead.
>2018-08-02 10:59:53,504 INFO: Warning: Scope(Oslo::Messaging::Rabbit[glance_api_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url. Please use oslo::messaging::default::transport_url instead.
>2018-08-02 10:59:53,520 INFO: Warning: Scope(Oslo::Messaging::Rabbit[glance_registry_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url. Please use oslo::messaging::default::transport_url instead.
>2018-08-02 10:59:53,687 INFO: Warning: Scope(Oslo::Messaging::Rabbit[neutron_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url. Please use oslo::messaging::default::transport_url instead.
>2018-08-02 10:59:53,868 INFO: Warning: Scope(Neutron::Plugins::Ml2::Type_driver[local]): local type_driver is useful only for single-box, because it provides no connectivity between hosts
>2018-08-02 10:59:54,580 INFO: Warning: Scope(Oslo::Messaging::Rabbit[mistral_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url. Please use oslo::messaging::default::transport_url instead.
>2018-08-02 10:59:55,255 INFO: Warning: Scope(Haproxy::Config[haproxy]): haproxy: The $merge_options parameter will default to true in the next major release. Please review the documentation regarding the implications.
>2018-08-02 11:00:00,565 INFO: Notice: Compiled catalog for undercloud-0.redhat.local in environment production in 23.99 seconds
>2018-08-02 11:00:12,637 INFO: Notice: /Stage[setup]/Vswitch::Ovs/Service[openvswitch]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:00:13,648 INFO: Notice: /Stage[setup]/Tripleo::Network::Os_net_config/Exec[os-net-config]/returns: executed successfully
>2018-08-02 11:00:13,694 INFO: Notice: /Stage[setup]/Tripleo::Network::Os_net_config/Exec[trigger-keepalived-restart]: Triggered 'refresh' from 1 events
>2018-08-02 11:00:14,906 INFO: Notice: /Stage[main]/Main/File[/etc/systemd/system/mariadb.service.d]/seltype: seltype changed 'mysqld_unit_file_t' to 'systemd_unit_file_t'
>2018-08-02 11:00:18,771 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created
>2018-08-02 11:00:19,506 INFO: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/value: value changed 'local,flat,vlan,gre,vxlan' to 'local,flat,vlan,gre,vxlan,geneve'
>2018-08-02 11:00:19,689 INFO: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_facility]/ensure: created
>2018-08-02 11:00:19,691 INFO: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_level]/ensure: created
>2018-08-02 11:00:24,838 INFO: Notice: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d]/seltype: seltype changed 'container_unit_file_t' to 'systemd_unit_file_t'
>2018-08-02 11:00:25,037 INFO: Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-registry]/returns: executed successfully
>2018-08-02 11:00:27,881 INFO: Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Service[docker]: Triggered 'refresh' from 1 events
>2018-08-02 11:00:29,606 INFO: Notice: /Stage[main]/Keepalived::Config/Concat[/etc/keepalived/keepalived.conf]/File[/etc/keepalived/keepalived.conf]/content: content changed '{md5}049faf10e507da2cda8e385f4ad3340c' to '{md5}7f4d7704731cefc4b001d16d31134227'
>2018-08-02 11:00:33,830 INFO: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created
>2018-08-02 11:00:42,079 INFO: Notice: /Stage[main]/Nova::Compute::Ironic/Nova_config[DEFAULT/max_concurrent_builds]/ensure: removed
>2018-08-02 11:00:55,889 INFO: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[geneve]/Neutron_plugin_ml2[ml2_type_geneve/vni_ranges]/ensure: created
>2018-08-02 11:00:55,891 INFO: Notice: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]: Triggered 'refresh' from 3 events
>2018-08-02 11:00:58,847 INFO: Notice: /Stage[main]/Nova::Deps/Anchor[nova::config::end]: Triggered 'refresh' from 2 events
>2018-08-02 11:01:07,320 INFO: Notice: /Stage[main]/Keepalived::Service/Service[keepalived]: Triggered 'refresh' from 1 events
>2018-08-02 11:01:14,877 INFO: Notice: /Stage[main]/Nova::Db::Sync_api/Exec[nova-db-sync-api]/returns: executed successfully
>2018-08-02 11:01:18,937 INFO: Notice: /Stage[main]/Nova::Db::Sync_api/Exec[nova-db-sync-api]: Triggered 'refresh' from 1 events
>2018-08-02 11:01:18,938 INFO: Notice: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::end]: Triggered 'refresh' from 2 events
>2018-08-02 11:01:18,938 INFO: Notice: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]: Triggered 'refresh' from 1 events
>2018-08-02 11:01:19,491 INFO: Notice: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]: Triggered 'refresh' from 1 events
>2018-08-02 11:01:23,773 INFO: Notice: /Stage[main]/Nova::Cell_v2::Map_cell0/Exec[nova-cell_v2-map_cell0]: Triggered 'refresh' from 1 events
>2018-08-02 11:01:23,777 INFO: Notice: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::end]: Triggered 'refresh' from 1 events
>2018-08-02 11:01:28,252 INFO: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/www_authenticate_uri]/ensure: created
>2018-08-02 11:01:30,293 INFO: Notice: /Stage[main]/Swift::Deps/Anchor[swift::config::end]: Triggered 'refresh' from 1 events
>2018-08-02 11:01:30,294 INFO: Notice: /Stage[main]/Swift::Deps/Anchor[swift::service::begin]: Triggered 'refresh' from 2 events
>2018-08-02 11:01:32,852 INFO: Notice: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]: Triggered 'refresh' from 2 events
>2018-08-02 11:01:34,474 INFO: Notice: /Stage[main]/Glance::Db::Sync/Exec[glance-manage db_sync]/returns: executed successfully
>2018-08-02 11:01:34,475 INFO: Notice: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::end]: Triggered 'refresh' from 1 events
>2018-08-02 11:01:36,112 INFO: Notice: /Stage[main]/Glance::Db::Metadefs/Exec[glance-manage db_load_metadefs]: Triggered 'refresh' from 1 events
>2018-08-02 11:01:40,431 INFO: Notice: /Stage[main]/Nova::Db::Sync/Exec[nova-db-sync]/returns: executed successfully
>2018-08-02 11:01:44,829 INFO: Notice: /Stage[main]/Nova::Db::Sync/Exec[nova-db-sync]: Triggered 'refresh' from 2 events
>2018-08-02 11:01:44,831 INFO: Notice: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::end]: Triggered 'refresh' from 2 events
>2018-08-02 11:01:46,861 INFO: Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: executed successfully
>2018-08-02 11:01:48,761 INFO: Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]: Triggered 'refresh' from 1 events
>2018-08-02 11:01:48,763 INFO: Notice: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]: Triggered 'refresh' from 2 events
>2018-08-02 11:01:48,764 INFO: Notice: /Stage[main]/Neutron::Deps/Anchor[neutron::service::begin]: Triggered 'refresh' from 2 events
>2018-08-02 11:01:49,369 INFO: Notice: /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:01:49,945 INFO: Notice: /Stage[main]/Neutron::Agents::L3/Service[neutron-l3]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:01:50,514 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-service]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:01:52,927 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-destroy-patch-ports-service]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:01:54,376 INFO: Notice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]/returns: executed successfully
>2018-08-02 11:01:54,379 INFO: Notice: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::end]: Triggered 'refresh' from 1 events
>2018-08-02 11:01:56,124 INFO: Notice: /Stage[main]/Ironic::Db::Sync/Exec[ironic-dbsync]/returns: executed successfully
>2018-08-02 11:01:56,124 INFO: Notice: /Stage[main]/Ironic::Deps/Anchor[ironic::dbsync::end]: Triggered 'refresh' from 1 events
>2018-08-02 11:01:56,128 INFO: Notice: /Stage[main]/Ironic::Deps/Anchor[ironic::db_online_data_migrations::begin]: Triggered 'refresh' from 1 events
>2018-08-02 11:02:00,211 INFO: Notice: /Stage[main]/Ironic::Db::Online_data_migrations/Exec[ironic-db-online-data-migrations]/returns: executed successfully
>2018-08-02 11:02:04,181 INFO: Notice: /Stage[main]/Ironic::Db::Online_data_migrations/Exec[ironic-db-online-data-migrations]: Triggered 'refresh' from 2 events
>2018-08-02 11:02:04,182 INFO: Notice: /Stage[main]/Ironic::Deps/Anchor[ironic::db_online_data_migrations::end]: Triggered 'refresh' from 2 events
>2018-08-02 11:02:04,186 INFO: Notice: /Stage[main]/Ironic::Deps/Anchor[ironic::service::begin]: Triggered 'refresh' from 1 events
>2018-08-02 11:02:04,389 INFO: Notice: /Stage[main]/Ironic::Api/Service[ironic-api]: Triggered 'refresh' from 1 events
>2018-08-02 11:02:04,978 INFO: Notice: /Stage[main]/Ironic::Conductor/Service[ironic-conductor]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:02:04,979 INFO: Notice: /Stage[main]/Ironic::Deps/Anchor[ironic::service::end]: Triggered 'refresh' from 2 events
>2018-08-02 11:02:07,350 INFO: Notice: /Stage[main]/Mistral::Db::Sync/Exec[mistral-db-sync]/returns: executed successfully
>2018-08-02 11:02:07,596 INFO: Notice: /Stage[main]/Nova::Deps/Anchor[nova::service::begin]: Triggered 'refresh' from 3 events
>2018-08-02 11:02:07,598 INFO: Notice: /Stage[main]/Heat::Deps/Anchor[heat::service::begin]: Triggered 'refresh' from 1 events
>2018-08-02 11:02:07,791 INFO: Notice: /Stage[main]/Heat::Api/Service[heat-api]: Triggered 'refresh' from 1 events
>2018-08-02 11:02:07,999 INFO: Notice: /Stage[main]/Heat::Api_cfn/Service[heat-api-cfn]: Triggered 'refresh' from 1 events
>2018-08-02 11:02:08,630 INFO: Notice: /Stage[main]/Heat::Engine/Service[heat-engine]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:02:14,925 INFO: Notice: /Stage[main]/Nova::Conductor/Nova::Generic_service[conductor]/Service[nova-conductor]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:02:19,375 INFO: Notice: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Service[nova-scheduler]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:02:20,758 INFO: Notice: /Stage[main]/Main/Zaqar::Server_instance[1]/Service[openstack-zaqar@1]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:02:20,759 INFO: Notice: /Stage[main]/Zaqar::Deps/Anchor[zaqar::service::end]: Triggered 'refresh' from 1 events
>2018-08-02 11:02:23,894 INFO: Notice: /Stage[main]/Keystone::Db::Sync/Exec[keystone-manage db_sync]/returns: executed successfully
>2018-08-02 11:02:23,896 INFO: Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]: Triggered 'refresh' from 1 events
>2018-08-02 11:02:26,254 INFO: Notice: /Stage[main]/Keystone/Exec[keystone-manage bootstrap]: Triggered 'refresh' from 1 events
>2018-08-02 11:02:26,258 INFO: Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::service::begin]: Triggered 'refresh' from 2 events
>2018-08-02 11:02:26,946 INFO: Notice: /Stage[main]/Apache::Service/Service[httpd]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:03:09,350 INFO: Notice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Triggered 'refresh' from 3 events
>2018-08-02 11:03:19,644 INFO: Notice: /Stage[main]/Neutron::Server/Service[neutron-server]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:03:26,492 INFO: Notice: /Stage[main]/Glance::Deps/Anchor[glance::service::begin]: Triggered 'refresh' from 1 events
>2018-08-02 11:04:05,935 INFO: Notice: /Stage[main]/Nova::Api/Nova::Generic_service[api]/Service[nova-api]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:04:06,579 INFO: Notice: /Stage[main]/Swift::Proxy/Swift::Service[swift-proxy-server]/Service[swift-proxy-server]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:04:07,223 INFO: Notice: /Stage[main]/Swift::Objectexpirer/Swift::Service[swift-object-expirer]/Service[swift-object-expirer]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:04:28,033 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Service[ironic-neutron-agent-service]: Triggered 'refresh' from 1 events
>2018-08-02 11:04:28,035 INFO: Notice: /Stage[main]/Neutron::Deps/Anchor[neutron::service::end]: Triggered 'refresh' from 6 events
>2018-08-02 11:04:28,699 INFO: Notice: /Stage[main]/Ironic::Inspector/Service[ironic-inspector]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:04:29,368 INFO: Notice: /Stage[main]/Ironic::Inspector/Service[ironic-inspector-dnsmasq]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:04:29,370 INFO: Notice: /Stage[main]/Ironic::Deps/Anchor[ironic-inspector::service::end]: Triggered 'refresh' from 1 events
>2018-08-02 11:04:41,991 INFO: Notice: /Stage[main]/Mistral::Db::Sync/Exec[mistral-db-populate]/returns: executed successfully
>2018-08-02 11:04:54,615 INFO: Notice: /Stage[main]/Mistral::Db::Sync/Exec[mistral-db-populate]: Triggered 'refresh' from 1 events
>2018-08-02 11:04:54,619 INFO: Notice: /Stage[main]/Mistral::Deps/Anchor[mistral::dbsync::end]: Triggered 'refresh' from 3 events
>2018-08-02 11:04:54,619 INFO: Notice: /Stage[main]/Mistral::Deps/Anchor[mistral::service::begin]: Triggered 'refresh' from 1 events
>2018-08-02 11:04:55,366 INFO: Notice: /Stage[main]/Mistral::Api/Service[mistral-api]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:04:56,015 INFO: Notice: /Stage[main]/Mistral::Engine/Service[mistral-engine]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:04:56,706 INFO: Notice: /Stage[main]/Mistral::Executor/Service[mistral-executor]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:04:56,706 INFO: Notice: /Stage[main]/Mistral::Deps/Anchor[mistral::service::end]: Triggered 'refresh' from 3 events
>2018-08-02 11:05:01,505 INFO: Notice: /Stage[main]/Nova::Compute/Nova::Generic_service[compute]/Service[nova-compute]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:05:01,506 INFO: Notice: /Stage[main]/Nova::Deps/Anchor[nova::service::end]: Triggered 'refresh' from 4 events
>2018-08-02 11:05:05,679 INFO: Notice: /Stage[main]/Nova::Cell_v2::Discover_hosts/Exec[nova-cell_v2-discover_hosts]: Triggered 'refresh' from 1 events
>2018-08-02 11:05:06,453 INFO: Notice: /Stage[main]/Swift::Storage::Account/Swift::Service[swift-account-reaper]/Service[swift-account-reaper]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:05:07,146 INFO: Notice: /Stage[main]/Swift::Storage::Container/Swift::Service[swift-container-updater]/Service[swift-container-updater]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:05:07,826 INFO: Notice: /Stage[main]/Swift::Storage::Container/Swift::Service[swift-container-sync]/Service[swift-container-sync]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:05:08,514 INFO: Notice: /Stage[main]/Swift::Storage::Object/Swift::Service[swift-object-updater]/Service[swift-object-updater]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:05:09,185 INFO: Notice: /Stage[main]/Swift::Storage::Object/Swift::Service[swift-object-reconstructor]/Service[swift-object-reconstructor]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:05:09,847 INFO: Notice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/Swift::Service[swift-account-server]/Service[swift-account-server]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:05:10,045 INFO: Notice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/Swift::Service[swift-account-replicator]/Service[swift-account-replicator]: Triggered 'refresh' from 1 events
>2018-08-02 11:05:10,254 INFO: Notice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/Swift::Service[swift-account-auditor]/Service[swift-account-auditor]: Triggered 'refresh' from 1 events
>2018-08-02 11:05:10,943 INFO: Notice: /Stage[main]/Swift::Storage::Container/Swift::Storage::Generic[container]/Swift::Service[swift-container-server]/Service[swift-container-server]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:05:11,136 INFO: Notice: /Stage[main]/Swift::Storage::Container/Swift::Storage::Generic[container]/Swift::Service[swift-container-replicator]/Service[swift-container-replicator]: Triggered 'refresh' from 1 events
>2018-08-02 11:05:11,342 INFO: Notice: /Stage[main]/Swift::Storage::Container/Swift::Storage::Generic[container]/Swift::Service[swift-container-auditor]/Service[swift-container-auditor]: Triggered 'refresh' from 1 events
>2018-08-02 11:05:11,999 INFO: Notice: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/Swift::Service[swift-object-server]/Service[swift-object-server]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:05:12,197 INFO: Notice: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/Swift::Service[swift-object-replicator]/Service[swift-object-replicator]: Triggered 'refresh' from 1 events
>2018-08-02 11:05:12,413 INFO: Notice: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/Swift::Service[swift-object-auditor]/Service[swift-object-auditor]: Triggered 'refresh' from 1 events
>2018-08-02 11:05:12,414 INFO: Notice: /Stage[main]/Swift::Deps/Anchor[swift::service::end]: Triggered 'refresh' from 16 events
>2018-08-02 11:05:13,764 INFO: Notice: /Stage[main]/Glance::Api/Service[glance-api]/ensure: ensure changed 'stopped' to 'running'
>2018-08-02 11:05:13,775 INFO: Notice: /Stage[main]/Glance::Deps/Anchor[glance::service::end]: Triggered 'refresh' from 1 events
>2018-08-02 11:05:16,394 INFO: Notice: Applied catalog in 310.02 seconds
>2018-08-02 11:05:17,101 INFO: Changes:
>2018-08-02 11:05:17,101 INFO: Total: 53
>2018-08-02 11:05:17,101 INFO: Events:
>2018-08-02 11:05:17,102 INFO: Success: 53
>2018-08-02 11:05:17,102 INFO: Total: 53
>2018-08-02 11:05:17,102 INFO: Resources:
>2018-08-02 11:05:17,102 INFO: Total: 2905
>2018-08-02 11:05:17,102 INFO: Corrective change: 44
>2018-08-02 11:05:17,102 INFO: Out of sync: 53
>2018-08-02 11:05:17,103 INFO: Changed: 53
>2018-08-02 11:05:17,103 INFO: Restarted: 56
>2018-08-02 11:05:17,103 INFO: Time:
>2018-08-02 11:05:17,103 INFO: Policy rcd: 0.00
>2018-08-02 11:05:17,103 INFO: Archive: 0.00
>2018-08-02 11:05:17,103 INFO: Keystone domain: 0.00
>2018-08-02 11:05:17,103 INFO: Schedule: 0.00
>2018-08-02 11:05:17,103 INFO: Sysctl: 0.00
>2018-08-02 11:05:17,104 INFO: Nova cell v2: 0.00
>2018-08-02 11:05:17,104 INFO: Sysctl runtime: 0.00
>2018-08-02 11:05:17,104 INFO: Mysql datadir: 0.00
>2018-08-02 11:05:17,104 INFO: Keystone role: 0.00
>2018-08-02 11:05:17,104 INFO: Neutron api config: 0.00
>2018-08-02 11:05:17,104 INFO: Group: 0.00
>2018-08-02 11:05:17,105 INFO: Resources: 0.00
>2018-08-02 11:05:17,105 INFO: Keystone tenant: 0.00
>2018-08-02 11:05:17,105 INFO: Swift config: 0.00
>2018-08-02 11:05:17,105 INFO: Cron: 0.00
>2018-08-02 11:05:17,105 INFO: Glance swift config: 0.00
>2018-08-02 11:05:17,105 INFO: User: 0.00
>2018-08-02 11:05:17,105 INFO: Mysql database: 0.00
>2018-08-02 11:05:17,106 INFO: Concat file: 0.00
>2018-08-02 11:05:17,106 INFO: Nova paste api ini: 0.01
>2018-08-02 11:05:17,106 INFO: Keystone service: 0.01
>2018-08-02 11:05:17,106 INFO: Mysql grant: 0.01
>2018-08-02 11:05:17,106 INFO: Keystone endpoint: 0.01
>2018-08-02 11:05:17,106 INFO: Ironic neutron agent config: 0.01
>2018-08-02 11:05:17,107 INFO: Swift object expirer config: 0.01
>2018-08-02 11:05:17,107 INFO: Mysql user: 0.01
>2018-08-02 11:05:17,107 INFO: Neutron l3 agent config: 0.01
>2018-08-02 11:05:17,107 INFO: Anchor: 0.02
>2018-08-02 11:05:17,107 INFO: Concat fragment: 0.02
>2018-08-02 11:05:17,107 INFO: Neutron plugin ml2: 0.02
>2018-08-02 11:05:17,107 INFO: Neutron dhcp agent config: 0.02
>2018-08-02 11:05:17,108 INFO: Neutron agent ovs: 0.02
>2018-08-02 11:05:17,108 INFO: Firewall: 0.06
>2018-08-02 11:05:17,108 INFO: Swift proxy config: 0.09
>2018-08-02 11:05:17,108 INFO: Vs bridge: 0.11
>2018-08-02 11:05:17,108 INFO: Glance registry config: 0.20
>2018-08-02 11:05:17,108 INFO: Glance cache config: 0.35
>2018-08-02 11:05:17,109 INFO: Ring container device: 0.44
>2018-08-02 11:05:17,109 INFO: Ring account device: 0.45
>2018-08-02 11:05:17,109 INFO: Ring object device: 0.46
>2018-08-02 11:05:17,109 INFO: Mistral config: 0.59
>2018-08-02 11:05:17,109 INFO: Zaqar config: 0.72
>2018-08-02 11:05:17,109 INFO: Augeas: 1.03
>2018-08-02 11:05:17,110 INFO: Ironic inspector config: 1.05
>2018-08-02 11:05:17,110 INFO: Package: 1.69
>2018-08-02 11:05:17,110 INFO: Rabbitmq plugin: 1.70
>2018-08-02 11:05:17,110 INFO: File: 1.78
>2018-08-02 11:05:17,110 INFO: Last run: 1533222317
>2018-08-02 11:05:17,110 INFO: Neutron config: 2.69
>2018-08-02 11:05:17,110 INFO: Nova config: 20.37
>2018-08-02 11:05:17,111 INFO: Total: 263.26
>2018-08-02 11:05:17,111 INFO: Config retrieval: 29.35
>2018-08-02 11:05:17,111 INFO: Glance api config: 3.28
>2018-08-02 11:05:17,111 INFO: Heat config: 3.45
>2018-08-02 11:05:17,111 INFO: Keystone config: 3.78
>2018-08-02 11:05:17,111 INFO: Exec: 38.30
>2018-08-02 11:05:17,111 INFO: Keystone user role: 38.68
>2018-08-02 11:05:17,112 INFO: Service: 43.53
>2018-08-02 11:05:17,112 INFO: Ironic config: 6.41
>2018-08-02 11:05:17,112 INFO: Keystone user: 62.51
>2018-08-02 11:05:17,112 INFO: Filebucket: 0.00
>2018-08-02 11:05:17,112 INFO: Version:
>2018-08-02 11:05:17,112 INFO: Config: 1533221976
>2018-08-02 11:05:17,112 INFO: Puppet: 4.8.2
>2018-08-02 11:05:28,327 INFO: + rc=2
>2018-08-02 11:05:28,327 INFO: + set -e
>2018-08-02 11:05:28,327 INFO: + echo 'puppet apply exited with exit code 2'
>2018-08-02 11:05:28,327 INFO: puppet apply exited with exit code 2
>2018-08-02 11:05:28,328 INFO: + '[' 2 '!=' 2 -a 2 '!=' 0 ']'
>2018-08-02 11:05:28,331 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018 50-puppet-stack-config completed
>2018-08-02 11:05:28,334 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018 ----------------------- PROFILING -----------------------
>2018-08-02 11:05:28,336 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018
>2018-08-02 11:05:28,340 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018 Target: configure.d
>2018-08-02 11:05:28,342 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018
>2018-08-02 11:05:28,344 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018 Script Seconds
>2018-08-02 11:05:28,345 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018 --------------------------------------- ----------
>2018-08-02 11:05:28,347 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018
>2018-08-02 11:05:28,362 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018 10-hiera-disable 0.007
>2018-08-02 11:05:28,371 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018 20-os-apply-config 0.256
>2018-08-02 11:05:28,381 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018 30-reload-keepalived 0.066
>2018-08-02 11:05:28,391 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018 40-hiera-datafiles 0.274
>2018-08-02 11:05:28,400 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018 50-puppet-stack-config 361.642
>2018-08-02 11:05:28,405 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018
>2018-08-02 11:05:28,406 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018 --------------------- END PROFILING ---------------------
>2018-08-02 11:05:28,407 INFO: [2018-08-02 11:05:28,407] (os-refresh-config) [INFO] Completed phase configure
>2018-08-02 11:05:28,408 INFO: [2018-08-02 11:05:28,407] (os-refresh-config) [INFO] Starting phase post-configure
>2018-08-02 11:05:28,427 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018 Running /usr/libexec/os-refresh-config/post-configure.d/10-iptables
>2018-08-02 11:05:28,431 INFO: + set -o pipefail
>2018-08-02 11:05:28,431 INFO: + EXTERNAL_BRIDGE=br-ctlplane
>2018-08-02 11:05:28,432 INFO: + iptables -w -t nat -C PREROUTING -d 169.254.169.254/32 -i br-ctlplane -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8775
>2018-08-02 11:05:28,439 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018 10-iptables completed
>2018-08-02 11:05:28,441 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018 Running /usr/libexec/os-refresh-config/post-configure.d/80-seedstack-masquerade
>2018-08-02 11:05:28,446 INFO: + RULES_SCRIPT=/var/opt/undercloud-stack/masquerade
>2018-08-02 11:05:28,446 INFO: + . /var/opt/undercloud-stack/masquerade
>2018-08-02 11:05:28,447 INFO: ++ IPTCOMMAND=iptables
>2018-08-02 11:05:28,448 INFO: ++ [[ 192.168.24.1 =~ : ]]
>2018-08-02 11:05:28,448 INFO: ++ iptables -w -t nat -F BOOTSTACK_MASQ_NEW
>2018-08-02 11:05:28,449 INFO: iptables: No chain/target/match by that name.
>2018-08-02 11:05:28,450 INFO: ++ true
>2018-08-02 11:05:28,450 INFO: ++ iptables -w -t nat -D POSTROUTING -j BOOTSTACK_MASQ_NEW
>2018-08-02 11:05:28,452 INFO: iptables v1.4.21: Couldn't load target `BOOTSTACK_MASQ_NEW':No such file or directory
>2018-08-02 11:05:28,452 INFO:
>2018-08-02 11:05:28,452 INFO: Try `iptables -h' or 'iptables --help' for more information.
>2018-08-02 11:05:28,452 INFO: ++ true
>2018-08-02 11:05:28,452 INFO: ++ iptables -w -t nat -X BOOTSTACK_MASQ_NEW
>2018-08-02 11:05:28,454 INFO: iptables: No chain/target/match by that name.
>2018-08-02 11:05:28,454 INFO: ++ true
>2018-08-02 11:05:28,455 INFO: ++ iptables -w -t nat -N BOOTSTACK_MASQ_NEW
>2018-08-02 11:05:28,456 INFO: ++ NETWORK=192.168.24.0/24
>2018-08-02 11:05:28,456 INFO: ++ NETWORKS=192.168.24.0/24,
>2018-08-02 11:05:28,457 INFO: ++ NETWORKS=192.168.24.0/24
>2018-08-02 11:05:28,457 INFO: ++ iptables -w -t nat -A BOOTSTACK_MASQ_NEW -s 192.168.24.0/24 -d 192.168.24.0/24 -j RETURN
>2018-08-02 11:05:28,459 INFO: ++ iptables -w -t nat -A BOOTSTACK_MASQ_NEW -s 192.168.24.0/24 -j MASQUERADE
>2018-08-02 11:05:28,461 INFO: ++ iptables -w -t nat -I POSTROUTING -j BOOTSTACK_MASQ_NEW
>2018-08-02 11:05:28,464 INFO: ++ iptables -w -t nat -F BOOTSTACK_MASQ
>2018-08-02 11:05:28,467 INFO: ++ iptables -w -t nat -D POSTROUTING -j BOOTSTACK_MASQ
>2018-08-02 11:05:28,469 INFO: ++ iptables -w -t nat -X BOOTSTACK_MASQ
>2018-08-02 11:05:28,472 INFO: ++ iptables -w -t nat -E BOOTSTACK_MASQ_NEW BOOTSTACK_MASQ
>2018-08-02 11:05:28,474 INFO: ++ iptables -w -D FORWARD -j REJECT --reject-with icmp-host-prohibited
>2018-08-02 11:05:28,476 INFO: iptables: No chain/target/match by that name.
>2018-08-02 11:05:28,476 INFO: ++ true
>2018-08-02 11:05:28,476 INFO: + iptables-save
>2018-08-02 11:05:28,484 INFO: + /bin/test -f /etc/sysconfig/iptables
>2018-08-02 11:05:28,486 INFO: + /bin/grep -q neutron- /etc/sysconfig/iptables
>2018-08-02 11:05:28,488 INFO: + /bin/sed -i /neutron-/d /etc/sysconfig/iptables
>2018-08-02 11:05:28,492 INFO: + /bin/test -f /etc/sysconfig/ip6tables
>2018-08-02 11:05:28,493 INFO: + /bin/grep -q neutron- /etc/sysconfig/ip6tables
>2018-08-02 11:05:28,495 INFO: + /bin/test -f /etc/sysconfig/iptables
>2018-08-02 11:05:28,497 INFO: + /bin/grep -v '\-m comment \--comment' /etc/sysconfig/iptables
>2018-08-02 11:05:28,497 INFO: + /bin/grep -q ironic-inspector
>2018-08-02 11:05:28,499 INFO: + /bin/test -f /etc/sysconfig/ip6tables
>2018-08-02 11:05:28,501 INFO: + /bin/grep -v '\-m comment \--comment' /etc/sysconfig/ip6tables
>2018-08-02 11:05:28,501 INFO: + /bin/grep -q ironic-inspector
>2018-08-02 11:05:28,507 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018 80-seedstack-masquerade completed
>2018-08-02 11:05:28,509 INFO: dib-run-parts Thu Aug 2 11:05:28 EDT 2018 Running /usr/libexec/os-refresh-config/post-configure.d/98-undercloud-setup
>2018-08-02 11:05:28,514 INFO: + source /root/tripleo-undercloud-passwords
>2018-08-02 11:05:28,515 INFO: +++ sudo hiera admin_password
>2018-08-02 11:05:28,634 INFO: ++ UNDERCLOUD_ADMIN_PASSWORD=dc8c8b27004dc075ba1f17bf7ef3f9b7a0dd3ae8
>2018-08-02 11:05:28,634 INFO: +++ sudo hiera keystone::admin_token
>2018-08-02 11:05:28,743 INFO: ++ UNDERCLOUD_ADMIN_TOKEN=be60715ec7dc331f7386dabd70067f8d2d2250c4
>2018-08-02 11:05:28,744 INFO: +++ sudo hiera ceilometer::metering_secret
>2018-08-02 11:05:28,856 INFO: ++ UNDERCLOUD_CEILOMETER_METERING_SECRET=16b2e28e676a16f8ba84c9f135d34871d7d24395
>2018-08-02 11:05:28,856 INFO: +++ sudo hiera ceilometer::keystone::authtoken::password
>2018-08-02 11:05:28,969 INFO: ++ UNDERCLOUD_CEILOMETER_PASSWORD=3d21174cae7fc654e644abbc12a614ef785381bf
>2018-08-02 11:05:28,969 INFO: +++ sudo hiera snmpd_readonly_user_password
>2018-08-02 11:05:29,080 INFO: ++ UNDERCLOUD_CEILOMETER_SNMPD_PASSWORD=nil
>2018-08-02 11:05:29,081 INFO: +++ sudo hiera snmpd_readonly_user_name
>2018-08-02 11:05:29,212 INFO: ++ UNDERCLOUD_CEILOMETER_SNMPD_USER=nil
>2018-08-02 11:05:29,212 INFO: +++ sudo hiera admin_password
>2018-08-02 11:05:29,344 INFO: ++ UNDERCLOUD_DB_PASSWORD=dc8c8b27004dc075ba1f17bf7ef3f9b7a0dd3ae8
>2018-08-02 11:05:29,344 INFO: +++ sudo hiera glance::api::keystone_password
>2018-08-02 11:05:29,466 INFO: ++ UNDERCLOUD_GLANCE_PASSWORD=nil
>2018-08-02 11:05:29,466 INFO: +++ sudo hiera tripleo::haproxy::haproxy_stats_password
>2018-08-02 11:05:29,581 INFO: ++ UNDERCLOUD_HAPROXY_STATS_PASSWORD=9146ea610a406ed3de6236988e7a1f959cac4e54
>2018-08-02 11:05:29,581 INFO: +++ sudo hiera heat::engine::auth_encryption_key
>2018-08-02 11:05:29,691 INFO: ++ UNDERCLOUD_HEAT_ENCRYPTION_KEY=ddad61a1eafb3b101c04eabea5a9e686
>2018-08-02 11:05:29,691 INFO: +++ sudo hiera heat::keystone_password
>2018-08-02 11:05:29,825 INFO: ++ UNDERCLOUD_HEAT_PASSWORD=nil
>2018-08-02 11:05:29,826 INFO: +++ sudo hiera heat_stack_domain_admin_password
>2018-08-02 11:05:29,973 INFO: ++ UNDERCLOUD_HEAT_STACK_DOMAIN_ADMIN_PASSWORD=c2ea0bd33907fd0809bfd94fcd43d240ef0a158f
>2018-08-02 11:05:29,974 INFO: +++ sudo hiera horizon_secret_key
>2018-08-02 11:05:30,106 INFO: ++ UNDERCLOUD_HORIZON_SECRET_KEY=d52f302ea56b175296702c282272f38a6157b0ef
>2018-08-02 11:05:30,107 INFO: +++ sudo hiera ironic::api::authtoken::password
>2018-08-02 11:05:30,228 INFO: ++ UNDERCLOUD_IRONIC_PASSWORD=5539402c6c40f2243ae8dc8056d928e9b1e2e73e
>2018-08-02 11:05:30,229 INFO: +++ sudo hiera neutron::server::auth_password
>2018-08-02 11:05:30,350 INFO: ++ UNDERCLOUD_NEUTRON_PASSWORD=nil
>2018-08-02 11:05:30,350 INFO: +++ sudo hiera nova::keystone::authtoken::password
>2018-08-02 11:05:30,462 INFO: ++ UNDERCLOUD_NOVA_PASSWORD=d9bb9ae6971a93a291cdf1277d3a01a4fe8b8ba2
>2018-08-02 11:05:30,462 INFO: +++ sudo hiera rabbit_cookie
>2018-08-02 11:05:30,575 INFO: ++ UNDERCLOUD_RABBIT_COOKIE=3a63bf4604602d9a8e15dc4cd6f2ec1e1b2c2c5d
>2018-08-02 11:05:30,576 INFO: +++ sudo hiera rabbit_password
>2018-08-02 11:05:30,684 INFO: ++ UNDERCLOUD_RABBIT_PASSWORD=nil
>2018-08-02 11:05:30,685 INFO: +++ sudo hiera rabbit_username
>2018-08-02 11:05:30,798 INFO: ++ UNDERCLOUD_RABBIT_USERNAME=nil
>2018-08-02 11:05:30,798 INFO: +++ sudo hiera swift::swift_hash_suffix
>2018-08-02 11:05:30,924 INFO: ++ UNDERCLOUD_SWIFT_HASH_SUFFIX=nil
>2018-08-02 11:05:30,924 INFO: +++ sudo hiera swift::proxy::authtoken::admin_password
>2018-08-02 11:05:31,056 INFO: ++ UNDERCLOUD_SWIFT_PASSWORD=nil
>2018-08-02 11:05:31,057 INFO: +++ sudo hiera mistral::admin_password
>2018-08-02 11:05:31,189 INFO: ++ UNDERCLOUD_MISTRAL_PASSWORD=nil
>2018-08-02 11:05:31,189 INFO: +++ sudo hiera zaqar::keystone::authtoken::password
>2018-08-02 11:05:31,304 INFO: ++ UNDERCLOUD_ZAQAR_PASSWORD=2a347a71fe3c71beb15ad0f8c7d83b372fc914e1
>2018-08-02 11:05:31,305 INFO: +++ sudo hiera cinder::keystone::authtoken::password
>2018-08-02 11:05:31,431 INFO: ++ UNDERCLOUD_CINDER_PASSWORD=6d2738a71f65ef36c159e6e6bd8bf308c75c8240
>2018-08-02 11:05:31,431 INFO: + source /root/stackrc
>2018-08-02 11:05:31,432 INFO: +++ set
>2018-08-02 11:05:31,432 INFO: +++ awk '{FS="="} /^OS_/ {print $1}'
>2018-08-02 11:05:31,435 INFO: ++ NOVA_VERSION=1.1
>2018-08-02 11:05:31,435 INFO: ++ export NOVA_VERSION
>2018-08-02 11:05:31,435 INFO: ++ OS_PASSWORD=dc8c8b27004dc075ba1f17bf7ef3f9b7a0dd3ae8
>2018-08-02 11:05:31,435 INFO: ++ export OS_PASSWORD
>2018-08-02 11:05:31,436 INFO: ++ OS_AUTH_TYPE=password
>2018-08-02 11:05:31,436 INFO: ++ export OS_AUTH_TYPE
>2018-08-02 11:05:31,436 INFO: ++ OS_AUTH_URL=https://192.168.24.2:13000/
>2018-08-02 11:05:31,436 INFO: ++ PYTHONWARNINGS='ignore:Certificate has no, ignore:A true SSLContext object is not available'
>2018-08-02 11:05:31,436 INFO: ++ export OS_AUTH_URL >2018-08-02 11:05:31,436 INFO: ++ export PYTHONWARNINGS >2018-08-02 11:05:31,437 INFO: ++ OS_USERNAME=admin >2018-08-02 11:05:31,437 INFO: ++ OS_PROJECT_NAME=admin >2018-08-02 11:05:31,437 INFO: ++ COMPUTE_API_VERSION=1.1 >2018-08-02 11:05:31,437 INFO: ++ IRONIC_API_VERSION=1.34 >2018-08-02 11:05:31,437 INFO: ++ OS_BAREMETAL_API_VERSION=1.34 >2018-08-02 11:05:31,437 INFO: ++ OS_NO_CACHE=True >2018-08-02 11:05:31,437 INFO: ++ OS_CLOUDNAME=undercloud >2018-08-02 11:05:31,438 INFO: ++ export OS_USERNAME >2018-08-02 11:05:31,438 INFO: ++ export OS_PROJECT_NAME >2018-08-02 11:05:31,438 INFO: ++ export COMPUTE_API_VERSION >2018-08-02 11:05:31,438 INFO: ++ export IRONIC_API_VERSION >2018-08-02 11:05:31,438 INFO: ++ export OS_BAREMETAL_API_VERSION >2018-08-02 11:05:31,438 INFO: ++ export OS_NO_CACHE >2018-08-02 11:05:31,439 INFO: ++ export OS_CLOUDNAME >2018-08-02 11:05:31,439 INFO: ++ OS_IDENTITY_API_VERSION=3 >2018-08-02 11:05:31,439 INFO: ++ export OS_IDENTITY_API_VERSION >2018-08-02 11:05:31,439 INFO: ++ OS_PROJECT_DOMAIN_NAME=Default >2018-08-02 11:05:31,439 INFO: ++ export OS_PROJECT_DOMAIN_NAME >2018-08-02 11:05:31,439 INFO: ++ OS_USER_DOMAIN_NAME=Default >2018-08-02 11:05:31,439 INFO: ++ export OS_USER_DOMAIN_NAME >2018-08-02 11:05:31,440 INFO: ++ '[' -z '' ']' >2018-08-02 11:05:31,440 INFO: ++ export PS1= >2018-08-02 11:05:31,440 INFO: ++ PS1= >2018-08-02 11:05:31,440 INFO: ++ export 'PS1=${OS_CLOUDNAME:+($OS_CLOUDNAME)} ' >2018-08-02 11:05:31,440 INFO: ++ PS1='${OS_CLOUDNAME:+($OS_CLOUDNAME)} ' >2018-08-02 11:05:31,440 INFO: ++ export CLOUDPROMPT_ENABLED=1 >2018-08-02 11:05:31,441 INFO: ++ CLOUDPROMPT_ENABLED=1 >2018-08-02 11:05:31,441 INFO: + INSTACK_ROOT= >2018-08-02 11:05:31,441 INFO: + export INSTACK_ROOT >2018-08-02 11:05:31,441 INFO: + '[' -n '' ']' >2018-08-02 11:05:31,441 INFO: + '[' '!' -f /root/.ssh/authorized_keys ']' >2018-08-02 11:05:31,441 INFO: + '[' '!' 
-f /root/.ssh/id_rsa ']' >2018-08-02 11:05:31,441 INFO: + cat /root/.ssh/id_rsa.pub >2018-08-02 11:05:31,444 INFO: + '[' -e /usr/sbin/getenforce ']' >2018-08-02 11:05:31,444 INFO: ++ getenforce >2018-08-02 11:05:31,446 INFO: + '[' Enforcing == Enforcing ']' >2018-08-02 11:05:31,447 INFO: + set +e >2018-08-02 11:05:31,448 INFO: ++ find /root/.ssh/ -exec ls -lZ '{}' ';' >2018-08-02 11:05:31,448 INFO: ++ grep -v ssh_home_t >2018-08-02 11:05:31,463 INFO: + selinux_wrong_permission= >2018-08-02 11:05:31,463 INFO: + set -e >2018-08-02 11:05:31,463 INFO: + '[' -n '' ']' >2018-08-02 11:05:31,464 INFO: ++ openstack project show admin >2018-08-02 11:05:31,464 INFO: ++ awk '$2=="id" {print $4}' >2018-08-02 11:05:34,415 INFO: + openstack quota set --cores -1 --instances -1 --ram -1 aed387cf82184fb788209f67beef84fe >2018-08-02 11:05:38,621 INFO: + rm -rf /root/.novaclient >2018-08-02 11:05:38,630 INFO: dib-run-parts Thu Aug 2 11:05:38 EDT 2018 98-undercloud-setup completed >2018-08-02 11:05:38,632 INFO: dib-run-parts Thu Aug 2 11:05:38 EDT 2018 Running /usr/libexec/os-refresh-config/post-configure.d/99-refresh-completed >2018-08-02 11:05:38,638 INFO: ++ os-apply-config --key completion-handle --type raw --key-default '' >2018-08-02 11:05:38,855 INFO: [2018/08/02 11:05:38 AM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json >2018-08-02 11:05:38,866 INFO: + HANDLE= >2018-08-02 11:05:38,866 INFO: ++ os-apply-config --key completion-signal --type raw --key-default '' >2018-08-02 11:05:39,078 INFO: [2018/08/02 11:05:39 AM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json >2018-08-02 11:05:39,088 INFO: + SIGNAL= >2018-08-02 11:05:39,088 INFO: ++ os-apply-config --key instance-id --type raw --key-default '' >2018-08-02 11:05:39,303 INFO: [2018/08/02 11:05:39 AM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json >2018-08-02 11:05:39,313 INFO: + ID= >2018-08-02 11:05:39,313 
INFO: + '[' -n '' ']' >2018-08-02 11:05:39,313 INFO: + exit 0 >2018-08-02 11:05:39,317 INFO: dib-run-parts Thu Aug 2 11:05:39 EDT 2018 99-refresh-completed completed >2018-08-02 11:05:39,319 INFO: dib-run-parts Thu Aug 2 11:05:39 EDT 2018 ----------------------- PROFILING ----------------------- >2018-08-02 11:05:39,321 INFO: dib-run-parts Thu Aug 2 11:05:39 EDT 2018 >2018-08-02 11:05:39,325 INFO: dib-run-parts Thu Aug 2 11:05:39 EDT 2018 Target: post-configure.d >2018-08-02 11:05:39,327 INFO: dib-run-parts Thu Aug 2 11:05:39 EDT 2018 >2018-08-02 11:05:39,330 INFO: dib-run-parts Thu Aug 2 11:05:39 EDT 2018 Script Seconds >2018-08-02 11:05:39,331 INFO: dib-run-parts Thu Aug 2 11:05:39 EDT 2018 --------------------------------------- ---------- >2018-08-02 11:05:39,333 INFO: dib-run-parts Thu Aug 2 11:05:39 EDT 2018 >2018-08-02 11:05:39,348 INFO: dib-run-parts Thu Aug 2 11:05:39 EDT 2018 10-iptables 0.008 >2018-08-02 11:05:39,357 INFO: dib-run-parts Thu Aug 2 11:05:39 EDT 2018 80-seedstack-masquerade 0.062 >2018-08-02 11:05:39,368 INFO: dib-run-parts Thu Aug 2 11:05:39 EDT 2018 98-undercloud-setup 10.117 >2018-08-02 11:05:39,378 INFO: dib-run-parts Thu Aug 2 11:05:39 EDT 2018 99-refresh-completed 0.681 >2018-08-02 11:05:39,381 INFO: dib-run-parts Thu Aug 2 11:05:39 EDT 2018 >2018-08-02 11:05:39,386 INFO: dib-run-parts Thu Aug 2 11:05:39 EDT 2018 --------------------- END PROFILING --------------------- >2018-08-02 11:05:39,386 INFO: [2018-08-02 11:05:39,383] (os-refresh-config) [INFO] Completed phase post-configure >2018-08-02 11:05:39,397 INFO: os-refresh-config completed successfully >/usr/lib/python2.7/site-packages/requests/packages/urllib3/connection.py:344: SubjectAltNameWarning: Certificate for 192.168.24.2 has no `subjectAltName`, falling back to check for a `commonName` for now. This feature is being removed by major browsers and deprecated by RFC 2818. (See https://github.com/shazow/urllib3/issues/497 for details.) 
> SubjectAltNameWarning >2018-08-02 11:05:42,506 INFO: Not creating ctlplane network, because it already exists. >2018-08-02 11:05:42,578 WARNING: Local subnet ctlplane-subnet already exists and is not associated with a network segment. Any additional subnets will be ignored. >2018-08-02 11:05:43,223 INFO: Subnet updated openstack.network.v2.subnet.Subnet(service_types=[], description=, enable_dhcp=True, tags=[], network_id=26b3f8b3-fafe-40b6-a0f2-f59c0a465179, tenant_id=aed387cf82184fb788209f67beef84fe, created_at=2018-07-25T09:51:15Z, segment_id=None, dns_nameservers=[], updated_at=2018-08-02T15:05:42Z, gateway_ip=192.168.24.1, ipv6_ra_mode=None, allocation_pools=[{u'start': u'192.168.24.5', u'end': u'192.168.24.24'}], host_routes=[{u'nexthop': u'192.168.24.1', u'destination': u'169.254.169.254/32'}], revision_number=1, ip_version=4, ipv6_address_mode=None, cidr=192.168.24.0/24, id=df913035-cd84-4c7c-9942-c75c995150c3, subnetpool_id=None, name=ctlplane-subnet) >2018-08-02 11:05:44,927 WARNING: Node d1eb5adc-fa62-49b5-9298-7e56f62f7958 is using a resource class compute instead of the default baremetal. Make sure you use the correct flavor for it. >2018-08-02 11:05:44,927 WARNING: Node eac4db01-7670-49fe-aa9f-6b760d68befb is using a resource class compute instead of the default baremetal. Make sure you use the correct flavor for it. >2018-08-02 11:05:44,928 WARNING: Node 49535784-b7ba-4fee-aa68-8dcf4426924d is using a resource class controller instead of the default baremetal. Make sure you use the correct flavor for it. >2018-08-02 11:05:44,928 WARNING: Node 80ba5268-ced8-4480-8bf7-2b4c9bca1cb6 is using a resource class controller instead of the default baremetal. Make sure you use the correct flavor for it. >2018-08-02 11:05:44,928 WARNING: Node 98d81509-7627-4e96-bc12-0bfbc7cd9945 is using a resource class controller instead of the default baremetal. Make sure you use the correct flavor for it. 
>2018-08-02 11:05:44,975 INFO: Not creating flavor "baremetal" because it already exists. >2018-08-02 11:05:45,064 INFO: Flavor baremetal updated to use custom resource class baremetal >2018-08-02 11:05:45,203 INFO: Created flavor "control" with profile "control" >2018-08-02 11:05:45,203 INFO: Not creating flavor "compute" because it already exists. >2018-08-02 11:05:45,234 WARNING: Not updating flavor compute, as it already has a custom resource class resources:CUSTOM_COMPUTE. Make sure you have enough nodes with this resource class. >2018-08-02 11:05:45,368 INFO: Created flavor "ceph-storage" with profile "ceph-storage" >2018-08-02 11:05:45,469 INFO: Created flavor "block-storage" with profile "block-storage" >2018-08-02 11:05:45,563 INFO: Created flavor "swift-storage" with profile "swift-storage" >2018-08-02 11:05:45,564 INFO: Configuring Mistral workbooks >2018-08-02 11:06:10,033 INFO: Mistral workbooks configured successfully >2018-08-02 11:06:11,009 INFO: Not creating default plan "overcloud" because it already exists. >2018-08-02 11:06:11,009 INFO: Configuring an hourly cron trigger for tripleo-ui logging >2018-08-02 11:06:14,943 INFO: Migrating stack "3154f52a-8396-4d27-b422-a35016f5f3ca" to convergence engine >2018-08-02 11:06:16,704 INFO: Finished migrating stack "3154f52a-8396-4d27-b422-a35016f5f3ca" >2018-08-02 11:06:16,882 INFO: Starting and waiting for validation groups ['post-upgrade'] >2018-08-02 11:06:43,348 INFO: >############################################################################# >Undercloud upgrade complete. > >The file containing this installation's passwords is at >/home/stack/undercloud-passwords.conf. > >There is also a stackrc file at /home/stack/stackrc. > >These files are needed to interact with the OpenStack services, and should be >secured. 
>
>#############################################################################
>
>updated undercloud
>./minor_update.sh: line 7: ./prepare_container.sh: Permission denied
>prepared images
> [WARNING]: Consider using yum module rather than running yum
>
>192.168.24.12 | SUCCESS | rc=0 >>
>Loaded plugins: product-id, search-disabled-repos, subscription-manager
>This system is not registered with an entitlement server. You can use subscription-manager to register.
>Examining /var/tmp/yum-root-w9snY0/rhos-release-latest.noarch.rpm: rhos-release-1.2.44-1.noarch
>Marking /var/tmp/yum-root-w9snY0/rhos-release-latest.noarch.rpm to be installed
>Resolving Dependencies
>--> Running transaction check
>---> Package rhos-release.noarch 0:1.2.44-1 will be installed
>--> Finished Dependency Resolution
>
>Dependencies Resolved
>
>================================================================================
> Package Arch Version Repository Size
>================================================================================
>Installing:
> rhos-release noarch 1.2.44-1 /rhos-release-latest.noarch 108 k
>
>Transaction Summary
>================================================================================
>Install 1 Package
>
>Total size: 108 k
>Installed size: 108 k
>Downloading packages:
>Running transaction check
>Running transaction test
>Transaction test succeeded
>Running transaction
> Installing : rhos-release-1.2.44-1.noarch 1/1
> Verifying : rhos-release-1.2.44-1.noarch 1/1
>
>Installed:
> rhos-release.noarch 0:1.2.44-1
>
>Complete!
>Installed: /etc/yum.repos.d/rhos-release-rhel-7.5.repo
>Installing wget...
>Installed: /etc/yum.repos.d/rhos-release-ceph-3.repo >Installed: /etc/yum.repos.d/rhos-release-ceph-osd-3.repo >Installed: /etc/yum.repos.d/rhos-release-13.repo ># rhos-release 13 -p 2018-07-30.2 >Installed: /etc/yum.repos.d/rhos-release-13.repo > >192.168.24.9 | SUCCESS | rc=0 >> >Loaded plugins: product-id, search-disabled-repos, subscription-manager >This system is not registered with an entitlement server. You can use subscription-manager to register. >Examining /var/tmp/yum-root-68p7__/rhos-release-latest.noarch.rpm: rhos-release-1.2.44-1.noarch >Marking /var/tmp/yum-root-68p7__/rhos-release-latest.noarch.rpm to be installed >Resolving Dependencies >--> Running transaction check >---> Package rhos-release.noarch 0:1.2.44-1 will be installed >--> Finished Dependency Resolution > >Dependencies Resolved > >================================================================================ > Package Arch Version Repository Size >================================================================================ >Installing: > rhos-release noarch 1.2.44-1 /rhos-release-latest.noarch 108 k > >Transaction Summary >================================================================================ >Install 1 Package > >Total size: 108 k >Installed size: 108 k >Downloading packages: >Running transaction check >Running transaction test >Transaction test succeeded >Running transaction > Installing : rhos-release-1.2.44-1.noarch 1/1 > Verifying : rhos-release-1.2.44-1.noarch 1/1 > >Installed: > rhos-release.noarch 0:1.2.44-1 > >Complete! >Installed: /etc/yum.repos.d/rhos-release-rhel-7.5.repo >Installing wget... 
>Installed: /etc/yum.repos.d/rhos-release-ceph-3.repo >Installed: /etc/yum.repos.d/rhos-release-ceph-osd-3.repo >Installed: /etc/yum.repos.d/rhos-release-13.repo ># rhos-release 13 -p 2018-07-30.2 >Installed: /etc/yum.repos.d/rhos-release-13.repo > >192.168.24.17 | SUCCESS | rc=0 >> >Loaded plugins: product-id, search-disabled-repos, subscription-manager >This system is not registered with an entitlement server. You can use subscription-manager to register. >Examining /var/tmp/yum-root-fPGp2k/rhos-release-latest.noarch.rpm: rhos-release-1.2.44-1.noarch >Marking /var/tmp/yum-root-fPGp2k/rhos-release-latest.noarch.rpm to be installed >Resolving Dependencies >--> Running transaction check >---> Package rhos-release.noarch 0:1.2.44-1 will be installed >--> Finished Dependency Resolution > >Dependencies Resolved > >================================================================================ > Package Arch Version Repository Size >================================================================================ >Installing: > rhos-release noarch 1.2.44-1 /rhos-release-latest.noarch 108 k > >Transaction Summary >================================================================================ >Install 1 Package > >Total size: 108 k >Installed size: 108 k >Downloading packages: >Running transaction check >Running transaction test >Transaction test succeeded >Running transaction > Installing : rhos-release-1.2.44-1.noarch 1/1 > Verifying : rhos-release-1.2.44-1.noarch 1/1 > >Installed: > rhos-release.noarch 0:1.2.44-1 > >Complete! >Installed: /etc/yum.repos.d/rhos-release-rhel-7.5.repo >Installing wget... 
>Installed: /etc/yum.repos.d/rhos-release-ceph-3.repo >Installed: /etc/yum.repos.d/rhos-release-ceph-osd-3.repo >Installed: /etc/yum.repos.d/rhos-release-13.repo ># rhos-release 13 -p 2018-07-30.2 >Installed: /etc/yum.repos.d/rhos-release-13.repo > >192.168.24.11 | SUCCESS | rc=0 >> >Loaded plugins: product-id, search-disabled-repos, subscription-manager >This system is not registered with an entitlement server. You can use subscription-manager to register. >Examining /var/tmp/yum-root-oLm3Ua/rhos-release-latest.noarch.rpm: rhos-release-1.2.44-1.noarch >Marking /var/tmp/yum-root-oLm3Ua/rhos-release-latest.noarch.rpm to be installed >Resolving Dependencies >--> Running transaction check >---> Package rhos-release.noarch 0:1.2.44-1 will be installed >--> Finished Dependency Resolution > >Dependencies Resolved > >================================================================================ > Package Arch Version Repository Size >================================================================================ >Installing: > rhos-release noarch 1.2.44-1 /rhos-release-latest.noarch 108 k > >Transaction Summary >================================================================================ >Install 1 Package > >Total size: 108 k >Installed size: 108 k >Downloading packages: >Running transaction check >Running transaction test >Transaction test succeeded >Running transaction > Installing : rhos-release-1.2.44-1.noarch 1/1 > Verifying : rhos-release-1.2.44-1.noarch 1/1 > >Installed: > rhos-release.noarch 0:1.2.44-1 > >Complete! >Installed: /etc/yum.repos.d/rhos-release-rhel-7.5.repo >Installing wget... 
>Installed: /etc/yum.repos.d/rhos-release-ceph-3.repo >Installed: /etc/yum.repos.d/rhos-release-ceph-osd-3.repo >Installed: /etc/yum.repos.d/rhos-release-13.repo ># rhos-release 13 -p 2018-07-30.2 >Installed: /etc/yum.repos.d/rhos-release-13.repo > >192.168.24.8 | SUCCESS | rc=0 >> >Loaded plugins: product-id, search-disabled-repos, subscription-manager >This system is not registered with an entitlement server. You can use subscription-manager to register. >Examining /var/tmp/yum-root-wQh3q5/rhos-release-latest.noarch.rpm: rhos-release-1.2.44-1.noarch >Marking /var/tmp/yum-root-wQh3q5/rhos-release-latest.noarch.rpm to be installed >Resolving Dependencies >--> Running transaction check >---> Package rhos-release.noarch 0:1.2.44-1 will be installed >--> Finished Dependency Resolution > >Dependencies Resolved > >================================================================================ > Package Arch Version Repository Size >================================================================================ >Installing: > rhos-release noarch 1.2.44-1 /rhos-release-latest.noarch 108 k > >Transaction Summary >================================================================================ >Install 1 Package > >Total size: 108 k >Installed size: 108 k >Downloading packages: >Running transaction check >Running transaction test >Transaction test succeeded >Running transaction > Installing : rhos-release-1.2.44-1.noarch 1/1 > Verifying : rhos-release-1.2.44-1.noarch 1/1 > >Installed: > rhos-release.noarch 0:1.2.44-1 > >Complete! >Installed: /etc/yum.repos.d/rhos-release-rhel-7.5.repo >Installing wget... 
>Installed: /etc/yum.repos.d/rhos-release-ceph-3.repo >Installed: /etc/yum.repos.d/rhos-release-ceph-osd-3.repo >Installed: /etc/yum.repos.d/rhos-release-13.repo ># rhos-release 13 -p 2018-07-30.2 >Installed: /etc/yum.repos.d/rhos-release-13.repo > >done updating overcloud repos > [WARNING]: Consider using yum module rather than running yum > >192.168.24.11 | SUCCESS | rc=0 >> >Loaded plugins: product-id, search-disabled-repos, subscription-manager >This system is not registered with an entitlement server. You can use subscription-manager to register. >Examining /var/tmp/yum-root-oLm3Ua/rhos-release-latest.noarch.rpm: rhos-release-1.2.44-1.noarch >/var/tmp/yum-root-oLm3Ua/rhos-release-latest.noarch.rpm: does not update installed package. >Nothing to do > >192.168.24.12 | SUCCESS | rc=0 >> >Loaded plugins: product-id, search-disabled-repos, subscription-manager >This system is not registered with an entitlement server. You can use subscription-manager to register. >Examining /var/tmp/yum-root-w9snY0/rhos-release-latest.noarch.rpm: rhos-release-1.2.44-1.noarch >/var/tmp/yum-root-w9snY0/rhos-release-latest.noarch.rpm: does not update installed package. >Nothing to do > >192.168.24.17 | SUCCESS | rc=0 >> >Loaded plugins: product-id, search-disabled-repos, subscription-manager >This system is not registered with an entitlement server. You can use subscription-manager to register. >Examining /var/tmp/yum-root-fPGp2k/rhos-release-latest.noarch.rpm: rhos-release-1.2.44-1.noarch >/var/tmp/yum-root-fPGp2k/rhos-release-latest.noarch.rpm: does not update installed package. >Nothing to do > >192.168.24.8 | SUCCESS | rc=0 >> >Loaded plugins: product-id, search-disabled-repos, subscription-manager >This system is not registered with an entitlement server. You can use subscription-manager to register. 
>Examining /var/tmp/yum-root-wQh3q5/rhos-release-latest.noarch.rpm: rhos-release-1.2.44-1.noarch >/var/tmp/yum-root-wQh3q5/rhos-release-latest.noarch.rpm: does not update installed package. >Nothing to do > >192.168.24.9 | SUCCESS | rc=0 >> >Loaded plugins: product-id, search-disabled-repos, subscription-manager >This system is not registered with an entitlement server. You can use subscription-manager to register. >Examining /var/tmp/yum-root-68p7__/rhos-release-latest.noarch.rpm: rhos-release-1.2.44-1.noarch >/var/tmp/yum-root-68p7__/rhos-release-latest.noarch.rpm: does not update installed package. >Nothing to do > >updating overcloud. Enter the level of update - 1 or 2. >2 >doing level 2 update >[Errno 2] No such file or directory: u'/home/stack/docker_registry.yaml' >Waiting for messages on queue 'update' with no timeout. >Update failed with: {u'status': u'FAILED', u'execution': {u'name': u'tripleo.package_update.v1.update_nodes', u'created_at': u'2018-08-02 15:32:59', u'updated_at': u'2018-08-02 15:32:59', u'spec': {u'tasks': {u'node_update': {u'name': u'node_update', u'on-error': u'node_update_failed', u'on-success': [{u'node_update_passed': u'<% task().result.returncode = 0 %>'}, {u'node_update_failed': u'<% task().result.returncode != 0 %>'}], u'publish': {u'output': u'<% task().result %>'}, u'version': u'2.0', u'action': u'tripleo.ansible-playbook', u'input': {u'remote_user': u'<% $.node_user %>', u'become_user': u'root', u'ssh_private_key': u'<% $.private_key %>', u'verbosity': u'<% $.verbosity %>', u'queue_name': u'<% $.ansible_queue_name %>', u'extra_env_variables': u'<% $.ansible_extra_env_variables %>', u'skip_tags': u'<% $.skip_tags %>', u'inventory': u'<% $.inventory_file %>', u'execution_id': u'<% execution().id %>', u'module_path': u'<% $.module_path %>', u'become': True, u'trash_output': True, u'limit_hosts': u'<% $.nodes %>', u'playbook': u'<% $.work_dir %>/<% execution().id %>/<% $.playbook %>'}, u'type': u'direct'}, u'get_private_key': 
{u'name': u'get_private_key', u'on-success': u'node_update', u'publish': {u'private_key': u'<% task().result %>'}, u'version': u'2.0', u'action': u'tripleo.validations.get_privkey', u'type': u'direct'}, u'node_update_failed': {u'version': u'2.0', u'type': u'direct', u'name': u'node_update_failed', u'publish': {u'status': u'FAILED', u'message': u'Failed to update nodes - <% $.nodes %>, please see the logs.'}, u'on-success': u'notify_zaqar'}, u'node_update_passed': {u'version': u'2.0', u'type': u'direct', u'name': u'node_update_passed', u'publish': {u'status': u'SUCCESS', u'message': u'Updated nodes - <% $.nodes %>'}, u'on-success': u'notify_zaqar'}, u'notify_zaqar': {u'retry': u'count=5 delay=1', u'name': u'notify_zaqar', u'on-success': [{u'fail': u'<% $.get(\'status\') = "FAILED" %>'}], u'version': u'2.0', u'action': u'zaqar.queue_post', u'input': {u'queue_name': u'<% $.ansible_queue_name %>', u'messages': {u'body': {u'type': u'tripleo.package_update.v1.update_nodes', u'payload': {u'status': u'<% $.status %>', u'execution': u'<% execution() %>'}}}}, u'type': u'direct'}, u'download_config': {u'name': u'download_config', u'on-error': u'node_update_failed', u'on-success': u'get_private_key', u'version': u'2.0', u'action': u'tripleo.config.download_config', u'input': {u'work_dir': u'<% $.work_dir %>/<% execution().id %>'}, u'type': u'direct'}}, u'name': u'update_nodes', u'tags': [u'tripleo-common-managed'], u'version': u'2.0', u'input': [{u'node_user': u'heat-admin'}, u'nodes', u'playbook', u'inventory_file', {u'ansible_queue_name': u'tripleo'}, {u'module_path': u'/usr/share/ansible-modules'}, {u'ansible_extra_env_variables': {u'ANSIBLE_HOST_KEY_CHECKING': u'False', u'ANSIBLE_LOG_PATH': u'/var/log/mistral/package_update.log'}}, {u'verbosity': 1}, {u'work_dir': u'/var/lib/mistral'}, {u'skip_tags': u''}], u'description': u'Take a container and perform an update nodes by nodes'}, u'params': {u'namespace': u'', u'env': {}}, u'input': {u'inventory_file': u'undercloud:\n 
hosts:\n localhost: {}\n vars:\n ansible_connection: local\n ansible_remote_tmp: /tmp/ansible-${USER}\n auth_url: https://192.168.24.2:13000/\n cacert: null\n os_auth_token: gAAAAABbYyQk61au59pI6nCMD38Os4W8G1A1sY5VLfr5p-L0MEuSVuKJMyk1CCrAGoylLPKuUNqazp0jx2SHButu_BAyRDhxIHFFhOSKTeYo_DXwt5kVrwZDcvMPMIL6Af2yOGlZkoN8pg5cGx8cCYGAMOYxApbF3mTyHuJDetPteWtmCeABank\n overcloud_admin_password: XxK3Mh947xh2TVyaJJWb7myna\n overcloud_horizon_url: http://10.0.0.106:80/dashboard\n overcloud_keystone_url: http://10.0.0.106:5000/\n plan: overcloud\n project_name: admin\n undercloud_service_list: [openstack-nova-compute, openstack-heat-engine, openstack-ironic-conductor,\n openstack-swift-container, openstack-swift-object, openstack-mistral-engine]\n undercloud_swift_url: https://192.168.24.2:13808/v1/AUTH_aed387cf82184fb788209f67beef84fe\n username: admin\ncontroller-0:\n hosts:\n 192.168.24.9: {}\n vars:\n ctlplane_ip: 192.168.24.9\n deploy_server_id: 0d25b3fa-5154-47be-9ced-05bdd8d3ca43\n enabled_networks: [management, storage, ctlplane, external, internal_api, storage_mgmt,\n tenant]\n external_ip: 10.0.0.109\n internal_api_ip: 172.17.1.14\n management_ip: 192.168.24.9\n storage_ip: 172.17.3.21\n storage_mgmt_ip: 172.17.4.10\n tenant_ip: 172.17.2.18\ncontroller-1:\n hosts:\n 192.168.24.8: {}\n vars:\n ctlplane_ip: 192.168.24.8\n deploy_server_id: 629a806d-c1d7-41c2-aafb-90a857fb3598\n enabled_networks: [management, storage, ctlplane, external, internal_api, storage_mgmt,\n tenant]\n external_ip: 10.0.0.108\n internal_api_ip: 172.17.1.11\n management_ip: 192.168.24.8\n storage_ip: 172.17.3.20\n storage_mgmt_ip: 172.17.4.16\n tenant_ip: 172.17.2.15\ncontroller-2:\n hosts:\n 192.168.24.11: {}\n vars:\n ctlplane_ip: 192.168.24.11\n deploy_server_id: 9ed2aa47-631c-4e3e-b4d6-29ba5af7602a\n enabled_networks: [management, storage, ctlplane, external, internal_api, storage_mgmt,\n tenant]\n external_ip: 10.0.0.103\n internal_api_ip: 172.17.1.16\n management_ip: 192.168.24.11\n storage_ip: 
172.17.3.19\n storage_mgmt_ip: 172.17.4.20\n tenant_ip: 172.17.2.13\nController:\n children:\n controller-0: {}\n controller-1: {}\n controller-2: {}\n vars:\n ansible_ssh_user: heat-admin\n bootstrap_server_id: 0d25b3fa-5154-47be-9ced-05bdd8d3ca43\n role_data_cellv2_discovery: false\n role_data_config_settings: {}\n role_data_deploy_steps_tasks: []\n role_data_docker_config:\n step_1:\n cinder_volume_image_tag:\n command: [/bin/bash, -c, \'/usr/bin/docker tag \'\'192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-07-13.1\'\'\n \'\'192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest\'\'\']\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/dev/shm:/dev/shm:rw\', \'/etc/sysconfig/docker:/etc/sysconfig/docker:ro\',\n \'/usr/bin:/usr/bin:ro\', \'/var/run/docker.sock:/var/run/docker.sock:rw\']\n haproxy_image_tag:\n command: [/bin/bash, -c, \'/usr/bin/docker tag \'\'192.168.24.1:8787/rhosp13/openstack-haproxy:2018-07-13.1\'\'\n \'\'192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest\'\'\']\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-haproxy:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/dev/shm:/dev/shm:rw\', \'/etc/sysconfig/docker:/etc/sysconfig/docker:ro\',\n \'/usr/bin:/usr/bin:ro\', \'/var/run/docker.sock:/var/run/docker.sock:rw\']\n memcached:\n command: [/bin/bash, -c, \'source /etc/sysconfig/memcached; /usr/bin/memcached\n -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS\']\n image: 192.168.24.1:8787/rhosp13/openstack-memcached:2018-07-13.1\n net: host\n privileged: false\n restart: always\n start_order: 0\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', 
\'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro\']\n mysql_bootstrap:\n command: [bash, -ec, \'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\n\n echo -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\n\n kolla_set_configs\n\n sudo -u mysql -E kolla_extend_start\n\n mysqld_safe --skip-networking --wsrep-on=OFF &\n\n timeout ${DB_MAX_TIMEOUT} /bin/bash -c \'\'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}"\n ping 2>/dev/null; do sleep 1; done\'\'\n\n mysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'\'clustercheck\'\'@\'\'localhost\'\'\n IDENTIFIED BY \'\'${DB_CLUSTERCHECK_PASSWORD}\'\';"\n\n mysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'\'clustercheck\'\'@\'\'localhost\'\'\n WITH GRANT OPTION;"\n\n timeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}"\n shutdown\']\n detach: false\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS, KOLLA_BOOTSTRAP=True, DB_MAX_TIMEOUT=60,\n DB_CLUSTERCHECK_PASSWORD=Y842JReAdAaXZwRHfsjTtdqgg, DB_ROOT_PASSWORD=7xm4XA2YHK]\n image: 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n 
\'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json\',\n \'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro\',\n \'/var/lib/mysql:/var/lib/mysql\']\n mysql_data_ownership:\n command: [chown, -R, \'mysql:\', /var/lib/mysql]\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\n net: host\n start_order: 0\n user: root\n volumes: [\'/var/lib/mysql:/var/lib/mysql\']\n mysql_image_tag:\n command: [/bin/bash, -c, \'/usr/bin/docker tag \'\'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\'\'\n \'\'192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest\'\'\']\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\n net: host\n start_order: 2\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/dev/shm:/dev/shm:rw\', \'/etc/sysconfig/docker:/etc/sysconfig/docker:ro\',\n \'/usr/bin:/usr/bin:ro\', \'/var/run/docker.sock:/var/run/docker.sock:rw\']\n opendaylight_api:\n detach: true\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-opendaylight:2018-07-13.1\n net: host\n privileged: false\n restart: unless-stopped\n start_order: 0\n user: odl\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/opendaylight_api.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/opendaylight/:/var/lib/kolla/config_files/src:ro\',\n 
\'/var/lib/opendaylight/journal:/opt/opendaylight/journal\', \'/var/lib/opendaylight/snapshots:/opt/opendaylight/snapshots\',\n \'/var/lib/opendaylight/data:/opt/opendaylight/data\', \'/var/log/containers/opendaylight:/opt/opendaylight/data/log\']\n rabbitmq_bootstrap:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS, KOLLA_BOOTSTRAP=True, RABBITMQ_CLUSTER_COOKIE=wMGzfECCXTCuVVgpTMBH]\n image: 192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-07-13.1\n net: host\n privileged: false\n start_order: 0\n volumes: [\'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro\',\n \'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\', \'/var/lib/rabbitmq:/var/lib/rabbitmq\']\n rabbitmq_image_tag:\n command: [/bin/bash, -c, \'/usr/bin/docker tag \'\'192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-07-13.1\'\'\n \'\'192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest\'\'\']\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/dev/shm:/dev/shm:rw\', \'/etc/sysconfig/docker:/etc/sysconfig/docker:ro\',\n \'/usr/bin:/usr/bin:ro\', \'/var/run/docker.sock:/var/run/docker.sock:rw\']\n redis_image_tag:\n command: [/bin/bash, -c, \'/usr/bin/docker tag \'\'192.168.24.1:8787/rhosp13/openstack-redis:2018-07-13.1\'\'\n \'\'192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest\'\'\']\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-redis:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/dev/shm:/dev/shm:rw\', \'/etc/sysconfig/docker:/etc/sysconfig/docker:ro\',\n \'/usr/bin:/usr/bin:ro\', \'/var/run/docker.sock:/var/run/docker.sock:rw\']\n step_2:\n aodh_init_log:\n command: [/bin/bash, -c, \'chown -R 
aodh:aodh /var/log/aodh\']\n image: 192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-07-13.1\n user: root\n volumes: [\'/var/log/containers/aodh:/var/log/aodh\', \'/var/log/containers/httpd/aodh-api:/var/log/httpd\']\n cinder_api_init_logs:\n command: [/bin/bash, -c, \'chown -R cinder:cinder /var/log/cinder\']\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-07-13.1\n privileged: false\n user: root\n volumes: [\'/var/log/containers/cinder:/var/log/cinder\', \'/var/log/containers/httpd/cinder-api:/var/log/httpd\']\n cinder_scheduler_init_logs:\n command: [/bin/bash, -c, \'chown -R cinder:cinder /var/log/cinder\']\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-scheduler:2018-07-13.1\n privileged: false\n user: root\n volumes: [\'/var/log/containers/cinder:/var/log/cinder\']\n clustercheck:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\n net: host\n restart: always\n start_order: 1\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json\',\n \'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro\',\n \'/var/lib/mysql:/var/lib/mysql\']\n create_dnsmasq_wrapper:\n command: [/docker_puppet_apply.sh, \'4\', file, \'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers\']\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-neutron-dhcp-agent:2018-07-13.1\n net: host\n pid: host\n start_order: 1\n user: root\n volumes: 
[\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro\',\n \'/etc/puppet:/tmp/puppet-etc:ro\', \'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro\',\n \'/run/openvswitch:/run/openvswitch\', \'/var/lib/neutron:/var/lib/neutron\']\n glance_init_logs:\n command: [/bin/bash, -c, \'chown -R glance:glance /var/log/glance\']\n image: 192.168.24.1:8787/rhosp13/openstack-glance-api:2018-07-13.1\n privileged: false\n user: root\n volumes: [\'/var/log/containers/glance:/var/log/glance\']\n gnocchi_init_lib:\n command: [/bin/bash, -c, \'chown -R gnocchi:gnocchi /var/lib/gnocchi\']\n image: 192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-07-13.1\n user: root\n volumes: [\'/var/lib/gnocchi:/var/lib/gnocchi:rw\']\n gnocchi_init_log:\n command: [/bin/bash, -c, \'chown -R gnocchi:gnocchi /var/log/gnocchi\']\n image: 192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-07-13.1\n user: root\n volumes: [\'/var/log/containers/gnocchi:/var/log/gnocchi\', \'/var/log/containers/httpd/gnocchi-api:/var/log/httpd\']\n haproxy_init_bundle:\n command: [/docker_puppet_apply.sh, \'2\', \'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation\',\n \'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle\',\n --debug]\n detach: false\n environment: [TRIPLEO_DEPLOY_IDENTIFIER=1532514654]\n image: 192.168.24.1:8787/rhosp13/openstack-haproxy:2018-07-13.1\n net: host\n 
privileged: true\n start_order: 3\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro\',\n \'/etc/puppet:/tmp/puppet-etc:ro\', \'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro\',\n \'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro\', \'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro\',\n \'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro\', \'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro\',\n \'/etc/sysconfig:/etc/sysconfig:rw\', \'/usr/libexec/iptables:/usr/libexec/iptables:ro\',\n \'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\']\n haproxy_restart_bundle:\n command: [/usr/bin/bootstrap_host_exec, haproxy, if /usr/sbin/pcs resource\n show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600\n haproxy-bundle; echo "haproxy-bundle restart invoked"; fi]\n config_volume: haproxy\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-haproxy:2018-07-13.1\n net: host\n start_order: 2\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n 
\'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\',\n \'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro\']\n heat_init_log:\n command: [/bin/bash, -c, \'chown -R heat:heat /var/log/heat\']\n image: 192.168.24.1:8787/rhosp13/openstack-heat-engine:2018-07-13.1\n user: root\n volumes: [\'/var/log/containers/heat:/var/log/heat\']\n horizon_fix_perms:\n command: [/bin/bash, -c, \'touch /var/log/horizon/horizon.log && chown -R\n apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard\']\n image: 192.168.24.1:8787/rhosp13/openstack-horizon:2018-07-13.1\n user: root\n volumes: [\'/var/log/containers/horizon:/var/log/horizon\', \'/var/log/containers/httpd/horizon:/var/log/httpd\',\n \'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard\']\n keystone_init_log:\n command: [/bin/bash, -c, \'chown -R keystone:keystone /var/log/keystone\']\n image: 192.168.24.1:8787/rhosp13/openstack-keystone:2018-07-13.1\n start_order: 1\n user: root\n volumes: [\'/var/log/containers/keystone:/var/log/keystone\', \'/var/log/containers/httpd/keystone:/var/log/httpd\']\n mysql_init_bundle:\n command: [/docker_puppet_apply.sh, \'2\', \'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user\',\n \'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle\',\n --debug]\n detach: false\n environment: [TRIPLEO_DEPLOY_IDENTIFIER=1532514654]\n image: 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n 
\'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro\',\n \'/etc/puppet:/tmp/puppet-etc:ro\', \'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\',\n \'/var/lib/mysql:/var/lib/mysql:rw\']\n mysql_restart_bundle:\n command: [/usr/bin/bootstrap_host_exec, mysql, if /usr/sbin/pcs resource\n show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle;\n echo "galera-bundle restart invoked"; fi]\n config_volume: mysql\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\n net: host\n start_order: 0\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\',\n \'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro\']\n neutron_init_logs:\n command: [/bin/bash, -c, \'chown -R neutron:neutron /var/log/neutron\']\n image: 192.168.24.1:8787/rhosp13/openstack-neutron-server-opendaylight:2018-07-13.1\n privileged: false\n user: root\n volumes: [\'/var/log/containers/neutron:/var/log/neutron\', 
\'/var/log/containers/httpd/neutron-api:/var/log/httpd\']\n nova_api_init_logs:\n command: [/bin/bash, -c, \'chown -R nova:nova /var/log/nova\']\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n privileged: false\n user: root\n volumes: [\'/var/log/containers/nova:/var/log/nova\', \'/var/log/containers/httpd/nova-api:/var/log/httpd\']\n nova_metadata_init_log:\n command: [/bin/bash, -c, \'chown -R nova:nova /var/log/nova\']\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n privileged: false\n user: root\n volumes: [\'/var/log/containers/nova:/var/log/nova\']\n nova_placement_init_log:\n command: [/bin/bash, -c, \'chown -R nova:nova /var/log/nova\']\n image: 192.168.24.1:8787/rhosp13/openstack-nova-placement-api:2018-07-13.1\n start_order: 1\n user: root\n volumes: [\'/var/log/containers/nova:/var/log/nova\', \'/var/log/containers/httpd/nova-placement:/var/log/httpd\']\n panko_init_log:\n command: [/bin/bash, -c, \'chown -R panko:panko /var/log/panko\']\n image: 192.168.24.1:8787/rhosp13/openstack-panko-api:2018-07-13.1\n user: root\n volumes: [\'/var/log/containers/panko:/var/log/panko\', \'/var/log/containers/httpd/panko-api:/var/log/httpd\']\n rabbitmq_init_bundle:\n command: [/docker_puppet_apply.sh, \'2\', \'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready\',\n \'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle\',\n --debug]\n detach: false\n environment: [TRIPLEO_DEPLOY_IDENTIFIER=1532514654]\n image: 192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', 
\'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro\',\n \'/etc/puppet:/tmp/puppet-etc:ro\', \'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\',\n \'/bin/true:/bin/epmd\']\n rabbitmq_restart_bundle:\n command: [/usr/bin/bootstrap_host_exec, rabbitmq, if /usr/sbin/pcs resource\n show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600\n rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi]\n config_volume: rabbitmq\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-07-13.1\n net: host\n start_order: 0\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\',\n \'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro\']\n redis_init_bundle:\n command: [/docker_puppet_apply.sh, \'2\', \'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation\',\n \'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle\',\n --debug]\n config_volume: 
redis_init_bundle\n detach: false\n environment: [TRIPLEO_DEPLOY_IDENTIFIER=1532514654]\n image: 192.168.24.1:8787/rhosp13/openstack-redis:2018-07-13.1\n net: host\n start_order: 2\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro\',\n \'/etc/puppet:/tmp/puppet-etc:ro\', \'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\']\n redis_restart_bundle:\n command: [/usr/bin/bootstrap_host_exec, redis, if /usr/sbin/pcs resource\n show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle;\n echo "redis-bundle restart invoked"; fi]\n config_volume: redis\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-redis:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\',\n \'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro\']\n step_3:\n aodh_db_sync:\n command: /usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash 
-c /usr/bin/aodh-dbsync\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-07-13.1\n net: host\n privileged: false\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro\', \'/var/log/containers/aodh:/var/log/aodh\',\n \'/var/log/containers/httpd/aodh-api:/var/log/httpd\']\n ceilometer_init_log:\n command: [/bin/bash, -c, \'chown -R ceilometer:ceilometer /var/log/ceilometer\']\n image: 192.168.24.1:8787/rhosp13/openstack-ceilometer-notification:2018-07-13.1\n start_order: 0\n user: root\n volumes: [\'/var/log/containers/ceilometer:/var/log/ceilometer\']\n cinder_api_db_sync:\n command: [/usr/bin/bootstrap_host_exec, cinder_api, su cinder -s /bin/bash\n -c \'cinder-manage db sync --bump-versions\']\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-07-13.1\n net: host\n privileged: false\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n 
\'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro\', \'/var/log/containers/cinder:/var/log/cinder\',\n \'/var/log/containers/httpd/cinder-api:/var/log/httpd\']\n cinder_volume_init_logs:\n command: [/bin/bash, -c, \'chown -R cinder:cinder /var/log/cinder\']\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-07-13.1\n privileged: false\n start_order: 0\n user: root\n volumes: [\'/var/log/containers/cinder:/var/log/cinder\']\n glance_api_db_sync:\n command: /usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash\n -c \'/usr/local/bin/kolla_start\'\n detach: false\n environment: [KOLLA_BOOTSTRAP=True, KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-glance-api:2018-07-13.1\n net: host\n privileged: false\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/glance:/var/log/glance\', \'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json\',\n \'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro\',\n \'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro\', \'\', \'\']\n heat_engine_db_sync:\n command: /usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c\n \'heat-manage db_sync\'\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-heat-engine:2018-07-13.1\n net: host\n privileged: false\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n 
\'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/heat:/var/log/heat\', \'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro\']\n horizon:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS, ENABLE_IRONIC=yes, ENABLE_MANILA=yes,\n ENABLE_MISTRAL=yes, ENABLE_OCTAVIA=yes, ENABLE_SAHARA=yes, ENABLE_CLOUDKITTY=no,\n ENABLE_FREEZER=no, ENABLE_FWAAS=no, ENABLE_KARBOR=no, ENABLE_DESIGNATE=no,\n ENABLE_MAGNUM=no, ENABLE_MURANO=no, ENABLE_NEUTRON_LBAAS=no, ENABLE_SEARCHLIGHT=no,\n ENABLE_SENLIN=no, ENABLE_SOLUM=no, ENABLE_TACKER=no, ENABLE_TROVE=no,\n ENABLE_WATCHER=no, ENABLE_ZAQAR=no, ENABLE_ZUN=no]\n image: 192.168.24.1:8787/rhosp13/openstack-horizon:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/horizon:/var/log/horizon\', \'/var/log/containers/httpd/horizon:/var/log/httpd\',\n \'\', \'\']\n iscsid:\n environment: 
[KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-iscsid:2018-07-13.1\n net: host\n privileged: true\n restart: always\n start_order: 2\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/dev/:/dev/\', \'/run/:/run/\', \'/sys:/sys\', \'/lib/modules:/lib/modules:ro\',\n \'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro\']\n keystone:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-keystone:2018-07-13.1\n net: host\n privileged: false\n restart: always\n start_order: 2\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/keystone:/var/log/keystone\', \'/var/log/containers/httpd/keystone:/var/log/httpd\',\n \'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro\',\n \'\', \'\']\n keystone_bootstrap:\n action: exec\n command: [keystone, /usr/bin/bootstrap_host_exec, 
keystone, keystone-manage,\n bootstrap, --bootstrap-password, XxK3Mh947xh2TVyaJJWb7myna]\n start_order: 3\n user: root\n keystone_cron:\n command: [/bin/bash, -c, /usr/local/bin/kolla_set_configs && /usr/sbin/crond\n -n]\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-keystone:2018-07-13.1\n net: host\n privileged: false\n restart: always\n start_order: 4\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/keystone:/var/log/keystone\', \'/var/log/containers/httpd/keystone:/var/log/httpd\',\n \'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro\']\n keystone_db_sync:\n command: [/usr/bin/bootstrap_host_exec, keystone, /usr/local/bin/kolla_start]\n detach: false\n environment: [KOLLA_BOOTSTRAP=True, KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-keystone:2018-07-13.1\n net: host\n privileged: false\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n 
\'/var/log/containers/keystone:/var/log/keystone\', \'/var/log/containers/httpd/keystone:/var/log/httpd\',\n \'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro\',\n \'\', \'\']\n neutron_db_sync:\n command: [/usr/bin/bootstrap_host_exec, neutron_api, neutron-db-manage,\n upgrade, heads]\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-neutron-server-opendaylight:2018-07-13.1\n net: host\n privileged: false\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/neutron:/var/log/neutron\', \'/var/log/containers/httpd/neutron-api:/var/log/httpd\',\n \'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro\', \'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro\']\n nova_api_db_sync:\n command: /usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c \'/usr/bin/nova-manage\n api_db sync\'\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n net: host\n start_order: 0\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', 
\'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/log/containers/httpd/nova-api:/var/log/httpd\',\n \'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro\']\n nova_api_ensure_default_cell:\n command: /usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n net: host\n start_order: 2\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/log/containers/httpd/nova-api:/var/log/httpd\',\n \'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro\', \'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro\', \'/var/log/containers/nova:/var/log/nova\',\n \'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro\']\n nova_api_map_cell0:\n command: /usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c \'/usr/bin/nova-manage\n cell_v2 map_cell0\'\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n 
\'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/log/containers/httpd/nova-api:/var/log/httpd\',\n \'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro\']\n nova_db_sync:\n command: /usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c \'/usr/bin/nova-manage\n db sync\'\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n net: host\n start_order: 3\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/log/containers/httpd/nova-api:/var/log/httpd\',\n \'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro\']\n nova_placement:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-nova-placement-api:2018-07-13.1\n net: host\n restart: always\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', 
\'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/log/containers/httpd/nova-placement:/var/log/httpd\',\n \'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro\',\n \'\', \'\']\n panko_db_sync:\n command: /usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c\n \'/usr/bin/panko-dbsync \'\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-panko-api:2018-07-13.1\n net: host\n privileged: false\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/panko:/var/log/panko\', \'/var/log/containers/httpd/panko-api:/var/log/httpd\',\n \'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/panko/etc/panko:/etc/panko:ro\']\n swift_copy_rings:\n command: [/bin/bash, -c, cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz\n /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups]\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-07-13.1\n user: root\n volumes: [\'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw\',\n 
\'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro\']\n swift_setup_srv:\n command: [chown, -R, \'swift:\', /srv/node]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-account:2018-07-13.1\n user: root\n volumes: [\'/srv/node:/srv/node\']\n step_4:\n aodh_api:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/aodh:/var/log/aodh\', \'/var/log/containers/httpd/aodh-api:/var/log/httpd\',\n \'\', \'\']\n aodh_evaluator:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-aodh-evaluator:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n 
\'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/aodh:/var/log/aodh\']\n aodh_listener:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-aodh-listener:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/aodh:/var/log/aodh\']\n aodh_notifier:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-aodh-notifier:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro\',\n 
\'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/aodh:/var/log/aodh\']\n ceilometer_agent_central:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/ceilometer:/var/log/ceilometer\']\n ceilometer_agent_notification:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-ceilometer-notification:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro\',\n 
\'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro\',\n \'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro\',\n \'/var/log/containers/ceilometer:/var/log/ceilometer\']\n cinder_api:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-07-13.1\n net: host\n privileged: false\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/cinder:/var/log/cinder\', \'/var/log/containers/httpd/cinder-api:/var/log/httpd\',\n \'\', \'\']\n cinder_api_cron:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-07-13.1\n net: host\n privileged: false\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro\',\n 
\'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/cinder:/var/log/cinder\', \'/var/log/containers/httpd/cinder-api:/var/log/httpd\']\n cinder_scheduler:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-scheduler:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/cinder:/var/log/cinder\']\n glance_api:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-glance-api:2018-07-13.1\n net: host\n privileged: false\n restart: always\n start_order: 2\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/glance:/var/log/glance\', 
\'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json\',\n \'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro\',\n \'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro\', \'\', \'\']\n gnocchi_db_sync:\n detach: false\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-07-13.1\n net: host\n privileged: false\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro\',\n \'/var/lib/gnocchi:/var/lib/gnocchi:rw\', \'/var/log/containers/gnocchi:/var/log/gnocchi\',\n \'/var/log/containers/httpd/gnocchi-api:/var/log/httpd\', \'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro\']\n heat_api:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-heat-api:2018-07-13.1\n net: host\n privileged: false\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n 
\'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/heat:/var/log/heat\', \'/var/log/containers/httpd/heat-api:/var/log/httpd\',\n \'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro\',\n \'\', \'\']\n heat_api_cfn:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-heat-api-cfn:2018-07-13.1\n net: host\n privileged: false\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/heat:/var/log/heat\', \'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd\',\n \'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro\',\n \'\', \'\']\n heat_api_cron:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-heat-api:2018-07-13.1\n net: host\n privileged: false\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', 
\'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/heat:/var/log/heat\', \'/var/log/containers/httpd/heat-api:/var/log/httpd\',\n \'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro\']\n heat_engine:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-heat-engine:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/heat:/var/log/heat\', \'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro\']\n keystone_refresh:\n action: exec\n command: [keystone, pkill, --signal, USR1, httpd]\n start_order: 1\n user: root\n logrotate_crond:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-cron:2018-07-13.1\n net: none\n pid: host\n privileged: true\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n 
\'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers:/var/log/containers\']\n neutron_api:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-neutron-server-opendaylight:2018-07-13.1\n net: host\n privileged: false\n restart: always\n start_order: 0\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/neutron:/var/log/neutron\', \'/var/log/containers/httpd/neutron-api:/var/log/httpd\',\n \'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro\']\n neutron_dhcp:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-neutron-dhcp-agent:2018-07-13.1\n net: host\n pid: host\n privileged: true\n restart: always\n start_order: 10\n ulimit: [nofile=1024]\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n 
\'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/neutron:/var/log/neutron\', \'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro\',\n \'/lib/modules:/lib/modules:ro\', \'/run/openvswitch:/run/openvswitch\', \'/var/lib/neutron:/var/lib/neutron\',\n \'/run/netns:/run/netns:shared\', \'/var/lib/openstack:/var/lib/openstack\',\n \'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro\', \'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro\']\n neutron_metadata_agent:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-neutron-metadata-agent:2018-07-13.1\n net: host\n pid: host\n privileged: true\n restart: always\n start_order: 10\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/neutron:/var/log/neutron\', \'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro\',\n \'/lib/modules:/lib/modules:ro\', \'/var/lib/neutron:/var/lib/neutron\']\n nova_api:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: 
/openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n net: host\n privileged: true\n restart: always\n start_order: 2\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/log/containers/httpd/nova-api:/var/log/httpd\',\n \'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro\',\n \'\', \'\']\n nova_api_cron:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n net: host\n privileged: false\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/log/containers/httpd/nova-api:/var/log/httpd\',\n \'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro\']\n nova_conductor:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: 
/openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-nova-conductor:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro\']\n nova_consoleauth:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-nova-consoleauth:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro\']\n nova_metadata:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n net: host\n privileged: true\n restart: always\n start_order: 
2\n user: nova\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro\']\n nova_scheduler:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-nova-scheduler:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro\',\n \'/run:/run\']\n nova_vnc_proxy:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-nova-novncproxy:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n 
\'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro\']\n panko_api:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-panko-api:2018-07-13.1\n net: host\n privileged: false\n restart: always\n start_order: 2\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/panko:/var/log/panko\', \'/var/log/containers/httpd/panko-api:/var/log/httpd\',\n \'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro\',\n \'\', \'\']\n swift_account_auditor:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-account:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', 
\'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_account_reaper:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-account:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_account_replicator:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-account:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n 
\'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_account_server:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-swift-account:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_container_auditor:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-container:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n 
\'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_container_replicator:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-container:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_container_server:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-swift-container:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n 
\'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_container_updater:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-container:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_object_auditor:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-object:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n 
\'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_object_expirer:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_object_replicator:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-object:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n 
\'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_object_server:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-swift-object:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_object_updater:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-object:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', 
\'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_proxy:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-07-13.1\n net: host\n restart: always\n start_order: 2\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/run:/run\', \'/srv/node:/srv/node\', \'/dev:/dev\']\n swift_rsync:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-object:2018-07-13.1\n net: host\n privileged: true\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n 
\'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\']\n step_5:\n ceilometer_gnocchi_upgrade:\n command: [/usr/bin/bootstrap_host_exec, ceilometer_agent_central, \'su ceilometer\n -s /bin/bash -c \'\'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database\n && exit 0 || sleep 5; done; exit 1\'\'\']\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-07-13.1\n net: host\n privileged: false\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro\',\n \'/var/log/containers/ceilometer:/var/log/ceilometer\']\n cinder_volume_init_bundle:\n command: [/docker_puppet_apply.sh, \'5\', \'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location\',\n \'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle\',\n --debug --verbose]\n detach: false\n environment: [TRIPLEO_DEPLOY_IDENTIFIER=1532514654]\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n 
\'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro\',\n \'/etc/puppet:/tmp/puppet-etc:ro\', \'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\']\n cinder_volume_restart_bundle:\n command: [/usr/bin/bootstrap_host_exec, cinder_volume, if /usr/sbin/pcs\n resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart\n --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart\n invoked"; fi]\n config_volume: cinder\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-07-13.1\n net: host\n start_order: 0\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\',\n \'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro\']\n gnocchi_api:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n 
\'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/gnocchi:/var/lib/gnocchi:rw\', \'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/gnocchi:/var/log/gnocchi\', \'/var/log/containers/httpd/gnocchi-api:/var/log/httpd\',\n \'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro\', \'\', \'\']\n gnocchi_metricd:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-gnocchi-metricd:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro\',\n \'/var/lib/gnocchi:/var/lib/gnocchi:rw\', \'/var/log/containers/gnocchi:/var/log/gnocchi\',\n \'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro\']\n gnocchi_statsd:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-gnocchi-statsd:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n 
\'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/gnocchi:/var/log/gnocchi\', \'/var/lib/gnocchi:/var/lib/gnocchi:rw\',\n \'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro\']\n nova_api_discover_hosts:\n command: /usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh\n detach: false\n environment: [TRIPLEO_DEPLOY_IDENTIFIER=1532514654]\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/log/containers/httpd/nova-api:/var/log/httpd\',\n \'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro\', \'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro\', \'/var/log/containers/nova:/var/log/nova\',\n 
\'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro\']\n role_data_docker_config_scripts:\n create_swift_secret.sh: {content: "#!/bin/bash\\nexport OS_PROJECT_DOMAIN_ID=$(crudini\\\n \\ --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\\nexport\\\n \\ OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster\\\n \\ user_domain_id)\\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf\\\n \\ kms_keymaster project_name)\\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf\\\n \\ kms_keymaster username)\\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf\\\n \\ kms_keymaster password)\\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf\\\n \\ kms_keymaster auth_endpoint)\\nexport OS_AUTH_TYPE=password\\nexport OS_IDENTITY_API_VERSION=3\\n\\\n \\necho \\"Check if secret already exists\\"\\nsecret_href=$(openstack secret\\\n \\ list --name swift_root_secret_uuid)\\nrc=$?\\nif [[ $rc != 0 ]]; then\\n\\\n \\ echo \\"Failed to check secrets, check if Barbican in enabled and responding\\\n \\ properly\\"\\n exit $rc;\\nfi\\nif [ -z \\"$secret_href\\" ]; then\\n echo\\\n \\ \\"Create new secret\\"\\n order_href=$(openstack secret order create --name\\\n \\ swift_root_secret_uuid --payload-content-type=\\"application/octet-stream\\"\\\n \\ --algorithm aes --bit-length 256 --mode ctr key -f value -c \\"Order href\\"\\\n )\\nfi\\n", mode: \'0700\'}\n docker_puppet_apply.sh: {content: "#!/bin/bash\\nset -eux\\nSTEP=$1\\nTAGS=$2\\n\\\n CONFIG=$3\\nEXTRA_ARGS=${4:-\'\'}\\nif [ -d /tmp/puppet-etc ]; then\\n # ignore\\\n \\ copy failures as these may be the same file depending on docker mounts\\n\\\n \\ cp -a /tmp/puppet-etc/* /etc/puppet || true\\nfi\\necho \\"{\\\\\\"step\\\\\\"\\\n : ${STEP}}\\" > /etc/puppet/hieradata/docker.json\\nexport FACTER_uuid=docker\\n\\\n set +e\\npuppet apply $EXTRA_ARGS \\\\\\n --verbose \\\\\\n --detailed-exitcodes\\\n \\ \\\\\\n 
--summarize \\\\\\n --color=false \\\\\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules\\\n \\ \\\\\\n --tags $TAGS \\\\\\n -e \\"${CONFIG}\\"\\nrc=$?\\nset -e\\nset +ux\\n\\\n if [ $rc -eq 2 -o $rc -eq 0 ]; then\\n exit 0\\nfi\\nexit $rc\\n", mode: \'0700\'}\n nova_api_discover_hosts.sh: {content: "#!/bin/bash\\nexport OS_PROJECT_DOMAIN_NAME=$(crudini\\\n \\ --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\\nexport\\\n \\ OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken\\\n \\ user_domain_name)\\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf\\\n \\ keystone_authtoken project_name)\\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf\\\n \\ keystone_authtoken username)\\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf\\\n \\ keystone_authtoken password)\\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf\\\n \\ keystone_authtoken auth_url)\\nexport OS_AUTH_TYPE=password\\nexport OS_IDENTITY_API_VERSION=3\\n\\\n \\necho \\"(cellv2) Running cell_v2 host discovery\\"\\ntimeout=600\\nloop_wait=30\\n\\\n declare -A discoverable_hosts\\nfor host in $(hiera -c /etc/puppet/hiera.yaml\\\n \\ cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr \\",\\" \\" \\"); do discoverable_hosts[$host]=1;\\\n \\ done\\ntimeout_at=$(( $(date +\\"%s\\") + ${timeout} ))\\necho \\"(cellv2)\\\n \\ Waiting ${timeout} seconds for hosts to register\\"\\nfinished=0\\nwhile\\\n \\ : ; do\\n for host in $(openstack -q compute service list -c \'Host\' -c\\\n \\ \'Zone\' -f value | awk \'$2 != \\"internal\\" { print $1 }\'); do\\n if ((\\\n \\ discoverable_hosts[$host] == 1 )); then\\n echo \\"(cellv2) compute\\\n \\ node $host has registered\\"\\n unset discoverable_hosts[$host]\\n \\\n \\ fi\\n done\\n finished=1\\n for host in \\"${!discoverable_hosts[@]}\\"\\\n ; do\\n if (( ${discoverable_hosts[$host]} == 1 )); then\\n echo \\"\\\n (cellv2) compute node $host has not 
registered\\"\\n finished=0\\n \\\n \\ fi\\n done\\n remaining=$(( $timeout_at - $(date +\\"%s\\") ))\\n if ((\\\n \\ $finished == 1 )); then\\n echo \\"(cellv2) All nodes registered\\"\\n\\\n \\ break\\n elif (( $remaining <= 0 )); then\\n echo \\"(cellv2) WARNING:\\\n \\ timeout waiting for nodes to register, running host discovery regardless\\"\\\n \\n echo \\"(cellv2) Expected host list:\\" $(hiera -c /etc/puppet/hiera.yaml\\\n \\ cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\\n\\\n \\ echo \\"(cellv2) Detected host list:\\" $(openstack -q compute service\\\n \\ list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != \\"internal\\" { print $1\\\n \\ }\' | sort -u | tr \'\\\\n\', \' \')\\n break\\n else\\n echo \\"(cellv2)\\\n \\ Waiting ${remaining} seconds for hosts to register\\"\\n sleep $loop_wait\\n\\\n \\ fi\\ndone\\necho \\"(cellv2) Running host discovery...\\"\\nsu nova -s /bin/bash\\\n \\ -c \\"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\\"\\\n \\n", mode: \'0700\'}\n nova_api_ensure_default_cell.sh: {content: "#!/bin/bash\\nDEFID=$(nova-manage\\\n \\ cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == \\"\\\n default\\" {print $4}\')\\nif [ \\"$DEFID\\" ]; then\\n echo \\"(cellv2) Updating\\\n \\ default cell_v2 cell $DEFID\\"\\n su nova -s /bin/bash -c \\"/usr/bin/nova-manage\\\n \\ cell_v2 update_cell --cell_uuid $DEFID --name=default\\"\\nelse\\n echo\\\n \\ \\"(cellv2) Creating default cell_v2 cell\\"\\n su nova -s /bin/bash -c\\\n \\ \\"/usr/bin/nova-manage cell_v2 create_cell --name=default\\"\\nfi\\n", mode: \'0700\'}\n set_swift_keymaster_key_id.sh: {content: "#!/bin/bash\\nexport OS_PROJECT_DOMAIN_ID=$(crudini\\\n \\ --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\\nexport\\\n \\ OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster\\\n \\ user_domain_id)\\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf\\\n \\ 
kms_keymaster project_name)\\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf\\\n \\ kms_keymaster username)\\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf\\\n \\ kms_keymaster password)\\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf\\\n \\ kms_keymaster auth_endpoint)\\nexport OS_AUTH_TYPE=password\\nexport OS_IDENTITY_API_VERSION=3\\n\\\n echo \\"retrieve key_id\\"\\nloop_wait=2\\nfor i in {0..5}; do\\n #TODO update\\\n \\ uuid from mistral here too\\n secret_href=$(openstack secret list --name\\\n \\ swift_root_secret_uuid)\\n if [ \\"$secret_href\\" ]; then\\n echo \\"\\\n set key_id in keymaster.conf\\"\\n secret_href=$(openstack secret list\\\n \\ --name swift_root_secret_uuid -f value -c \\"Secret href\\")\\n crudini\\\n \\ --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\\n\\\n \\ exit 0\\n else\\n echo \\"no key, wait for $loop_wait and check again\\"\\\n \\n sleep $loop_wait\\n ((loop_wait++))\\n fi\\ndone\\necho \\"Failed to\\\n \\ set secret in keymaster.conf, check if Barbican is enabled and responding\\\n \\ properly\\"\\nexit 1\\n", mode: \'0700\'}\n role_data_docker_puppet_tasks:\n step_3:\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-keystone:2018-07-13.1\',\n config_volume: keystone_init_tasks, puppet_tags: \'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain\',\n step_config: \'include ::tripleo::profile::base::keystone\'}\n role_data_external_deploy_tasks: []\n role_data_external_post_deploy_tasks: []\n role_data_fast_forward_post_upgrade_tasks:\n - name: Register repo type and args\n set_fact:\n fast_forward_repo_args:\n tripleo_repos: {ocata: -b ocata current, pike: -b pike current, queens: -b\n queens current}\n fast_forward_repo_type: custom-script\n - debug: {msg: \'fast_forward_repo_type: {{ fast_forward_repo_type }} 
fast_forward_repo_args:\n {{ fast_forward_repo_args }}\'}\n - block:\n - git: {dest: /home/stack/tripleo-repos/, repo: \'https://github.com/openstack/tripleo-repos.git\'}\n name: clone tripleo-repos\n - args: {chdir: /home/stack/tripleo-repos/}\n command: python setup.py install\n name: install tripleo-repos\n - {command: \'tripleo-repos {{ fast_forward_repo_args.tripleo_repos[release]\n }}\', name: Enable tripleo-repos}\n when: [ffu_packages_apply|bool, fast_forward_repo_type == \'tripleo-repos\']\n - block:\n - copy: {content: "#!/bin/bash\\nset -e\\necho \\"If you use FastForwardRepoType\\\n \\ \'custom-script\' you have to provide the upgrade repo script content.\\"\\\n \\necho \\"It will be installed as /root/ffu_upgrade_repo.sh on the node\\"\\\n \\necho \\"and passed the upstream name (ocata, pike, queens) of the release\\\n \\ as first argument\\"\\ncase $1 in\\n ocata)\\n subscription-manager\\\n \\ repos --disable=rhel-7-server-openstack-10-rpms\\n subscription-manager\\\n \\ repos --enable=rhel-7-server-openstack-11-rpms\\n ;;\\n pike)\\n \\\n \\ subscription-manager repos --disable=rhel-7-server-openstack-11-rpms\\n\\\n \\ subscription-manager repos --enable=rhel-7-server-openstack-12-rpms\\n\\\n \\ ;;\\n queens)\\n subscription-manager repos --disable=rhel-7-server-openstack-12-rpms\\n\\\n \\ subscription-manager repos --enable=rhel-7-server-openstack-13-rpms\\n\\\n \\ ;;\\n *)\\n echo \\"unknown release $1\\" >&2\\n exit 1\\nesac\\n",\n dest: /root/ffu_update_repo.sh, mode: 448}\n name: Create custom Script for upgrading repo.\n - {name: Execute custom script for upgrading repo., shell: \'/root/ffu_update_repo.sh\n {{release}}\'}\n when: [ffu_packages_apply|bool, fast_forward_repo_type == \'custom-script\']\n role_data_fast_forward_upgrade_tasks:\n - ignore_errors: true\n name: Check for aodh running under apache\n register: aodh_httpd_enabled_result\n shell: httpd -t -D DUMP_VHOSTS | grep -q aodh_wsgi\n tags: common\n when: [step|int == 0, release == 
\'ocata\']\n - name: Set fact aodh_httpd_enabled\n set_fact: {aodh_httpd_enabled: \'{{ aodh_httpd_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - command: systemctl is-active --quiet httpd\n ignore_errors: true\n name: Check if httpd is running\n register: httpd_running_result\n when: [step|int == 0, release == \'ocata\', httpd_running is undefined]\n - name: Set fact httpd_running if undefined\n set_fact: {httpd_running: \'{{ httpd_running_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\', httpd_running is undefined]\n - name: Stop and disable aodh (under httpd)\n service: name=httpd state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', aodh_httpd_enabled|bool, httpd_running|bool]\n - name: Aodh package update\n shell: yum -y update openstack-aodh*\n when: [step|int == 6, is_bootstrap_node|bool, aodh_httpd_enabled|bool]\n - command: aodh-dbsync\n name: aodh db sync\n when: [step|int == 8, is_bootstrap_node|bool, aodh_httpd_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-aodh-evaluator\n ignore_errors: true\n name: FFU check if openstack-aodh-evaluator is deployed\n register: aodh_evaluator_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact aodh_evaluator_enabled\n set_fact: {aodh_evaluator_enabled: \'{{ aodh_evaluator_enabled_result.rc == 0\n }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-aodh-evaluator service\n service: name=openstack-aodh-evaluator state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', aodh_evaluator_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-aodh-listener\n ignore_errors: true\n name: FFU check if openstack-aodh-listener is deployed\n register: aodh_listener_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact aodh_listener_enabled\n set_fact: {aodh_listener_enabled: \'{{ aodh_listener_enabled_result.rc == 0 }}\'}\n when: 
[step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-aodh-listener service\n service: name=openstack-aodh-listener state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', aodh_listener_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-aodh-notifier\n ignore_errors: true\n name: FFU check if openstack-aodh-notifier is deployed\n register: aodh_notifier_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact aodh_notifier_enabled\n set_fact: {aodh_notifier_enabled: \'{{ aodh_notifier_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-aodh-notifier service\n service: name=openstack-aodh-notifier state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', aodh_notifier_enabled|bool]\n - file: path=/etc/httpd/conf.d/10-ceilometer_wsgi.conf state=absent\n name: Purge Ceilometer apache config files\n when: [step|int == 1, release == \'ocata\']\n - lineinfile: dest=/etc/httpd/conf/ports.conf state=absent regexp="8777$"\n name: Clean up ceilometer port from ports.conf\n when: [step|int == 1, release == \'ocata\']\n - command: systemctl is-enabled --quiet openstack-ceilometer-collector\n ignore_errors: true\n name: FFU check if openstack-ceilometer-collector is deployed\n register: ceilometer_agent_collector_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact ceilometer_agent_collector_enabled\n set_fact: {ceilometer_agent_collector_enabled: \'{{ ceilometer_agent_collector_enabled_result.rc\n == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop and disable ceilometer_collector service on upgrade\n service: name=openstack-ceilometer-collector state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', ceilometer_agent_collector_enabled|bool]\n - changed_when: [step|int == 1, release == \'ocata\', remove_ceilometer_expirer_crontab.stderr\n != "no crontab for 
ceilometer"]\n failed_when: [step|int == 1, release == \'ocata\', remove_ceilometer_expirer_crontab.rc\n != 0, remove_ceilometer_expirer_crontab.stderr != "no crontab for ceilometer"]\n name: Remove ceilometer expirer cron tab on upgrade\n register: remove_ceilometer_expirer_crontab\n shell: /usr/bin/crontab -u ceilometer -r\n - command: systemctl is-enabled --quiet openstack-ceilometer-central\n ignore_errors: true\n name: FFU check if openstack-ceilometer-central is deployed\n register: ceilometer_agent_central_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact ceilometer_agent_central_enabled\n set_fact: {ceilometer_agent_central_enabled: \'{{ ceilometer_agent_central_enabled_result.rc\n == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-ceilometer-central service\n service: name=openstack-ceilometer-central state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', ceilometer_agent_central_enabled|bool]\n - command: systemctl is-enabled openstack-ceilometer-notification\n ignore_errors: true\n name: FFU check if openstack-ceilometer-notification is deployed\n register: ceilometer_agent_notification_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact ceilometer_agent_notification_enabled\n set_fact: {ceilometer_agent_notification_enabled: \'{{ ceilometer_agent_notification_enabled_result.rc\n == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-ceilometer-notification service\n service: name=openstack-ceilometer-notification state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', ceilometer_agent_notification_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-cinder-api\n ignore_errors: true\n name: Check if cinder_api is deployed\n register: cinder_api_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact cinder_api_enabled\n set_fact: 
{cinder_api_enabled: \'{{ cinder_api_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop openstack-cinder-api\n service: name=openstack-cinder-api state=stopped\n when: [step|int == 1, release == \'ocata\', cinder_api_enabled|bool]\n - name: Extra removal of services for cinder\n shell: \'cinder-manage service list |\\\n\n grep -v Binary | tr \'\'@\'\' \'\' \'\' |\\\n\n awk \'\'{print $1 " " $2}\'\' |\\\n\n while read i ; do cinder-manage service remove $i ; done\n\n \'\n when: [step|int == 5, release == \'pike\', is_bootstrap_node|bool]\n - command: cinder-manage db online_data_migrations\n name: Extra migration for cinder\n when: [step|int == 5, release == \'pike\', is_bootstrap_node|bool]\n - name: Cinder package update\n shell: yum -y update openstack-cinder*\n when: [step|int == 6, is_bootstrap_node|bool]\n - command: cinder-manage db sync\n name: Cinder db sync\n when: [step|int == 8, is_bootstrap_node|bool]\n - command: systemctl is-enabled --quiet openstack-cinder-scheduler\n ignore_errors: true\n name: Check if cinder_scheduler is deployed\n register: cinder_scheduler_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact cinder_scheduler_enabled\n set_fact: {cinder_scheduler_enabled: \'{{ cinder_scheduler_enabled_result.rc\n == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop openstack-cinder-scheduler\n service: name=openstack-cinder-scheduler state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', cinder_scheduler_enabled|bool]\n - ignore_errors: true\n name: Check cluster resource status\n pacemaker_resource: {check_mode: false, resource: openstack-cinder-volume, state: show}\n register: cinder_volume_res_result\n when: [step|int == 0, release == \'ocata\', is_bootstrap_node|bool]\n - name: Set fact cinder_volume_res\n set_fact: {cinder_volume_res: \'{{ cinder_volume_res_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\', 
is_bootstrap_node|bool]\n - name: Disable the openstack-cinder-volume cluster resource\n pacemaker_resource: {resource: openstack-cinder-volume, state: disable, wait_for_resource: true}\n register: cinder_volume_output\n retries: 5\n until: cinder_volume_output.rc == 0\n when: [step|int == 2, release == \'ocata\', is_bootstrap_node|bool, cinder_volume_res|bool]\n - command: systemctl is-enabled --quiet openstack-glance-api\n ignore_errors: true\n name: Check if glance_api is deployed\n register: glance_api_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact glance_api_enabled\n set_fact: {glance_api_enabled: \'{{ glance_api_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop openstack-glance-api\n service: name=openstack-glance-api state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', glance_api_enabled|bool]\n - name: glance package update\n when: [step|int == 6, is_bootstrap_node|bool]\n yum: name=openstack-glance state=latest\n - command: glance-manage db_sync\n name: glance db sync\n when: [step|int == 8, is_bootstrap_node|bool]\n - command: systemctl is-enabled --quiet openstack-glance-registry\n ignore_errors: true\n name: Check if glance_registry is deployed\n register: glance_registry_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact glance_registry_enabled\n set_fact: {glance_registry_enabled: \'{{ glance_registry_enabled_result.rc ==\n 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop openstack-glance-registry\n service: name=openstack-glance-registry state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', glance_registry_enabled|bool]\n - command: systemctl is-active --quiet httpd\n ignore_errors: true\n name: Check if httpd service is running\n register: httpd_running_result\n tags: common\n when: [step|int == 0, release == \'ocata\', httpd_running is undefined]\n - name: Set fact httpd_running if unset\n 
set_fact: {httpd_running: \'{{ httpd_running_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\', httpd_running is undefined]\n - command: systemctl is-enabled --quiet openstack-gnocchi-api\n ignore_errors: true\n name: Check if gnocchi_api is deployed\n register: gnocchi_api_enabled_result\n tags: common\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact gnocchi_api_enabled\n set_fact: {gnocchi_api_enabled: \'{{ gnocchi_api_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - ignore_errors: true\n name: Check for gnocchi_api running under apache\n register: gnocchi_httpd_enabled_result\n shell: httpd -t -D DUMP_VHOSTS | grep -q gnocchi\n tags: common\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact gnocchi_httpd_enabled\n set_fact: {gnocchi_httpd_enabled: \'{{ gnocchi_httpd_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop and disable gnocchi_api service\n service: name=openstack-gnocchi-api state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', gnocchi_api_enabled|bool]\n - name: Stop and disable httpd service\n service: name=httpd state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', gnocchi_httpd_enabled|bool, httpd_running|bool]\n - name: Update gnocchi packages\n when: [step|int == 6, is_bootstrap_node|bool]\n with_items: [openstack-gnocchi*, numpy]\n yum: name={{ item }} state=latest\n - command: gnocchi-upgrade --skip-storage\n name: Sync gnocchi DB\n when: [step|int == 8, is_bootstrap_node|bool]\n - command: systemctl is-enabled --quiet openstack-gnocchi-metricd\n ignore_errors: true\n name: FFU check if openstack-gnocchi-metricd is deployed\n register: gnocchi_metricd_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact gnocchi_metricd_enabled\n set_fact: {gnocchi_metricd_enabled: \'{{ gnocchi_metricd_enabled_result.rc ==\n 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: 
FFU stop and disable openstack-gnocchi-metricd service\n service: name=openstack-gnocchi-metricd state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', gnocchi_metricd_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-gnocchi-statsd\n ignore_errors: true\n name: FFU check if openstack-gnocchi-statsd is deployed\n register: gnocchi_statsd_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact gnocchi_statsd_enabled\n set_fact: {gnocchi_statsd_enabled: \'{{ gnocchi_statsd_enabled_result.rc == 0\n }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-gnocchi-statsd service\n service: name=openstack-gnocchi-statsd state=stopped enabled=no\n when: [step|int == 2, release == \'ocata\', gnocchi_statsd_enabled|bool]\n - command: systemctl is-enabled openstack-heat-api\n ignore_errors: true\n name: FFU check openstack-heat-api is enabled\n register: heat_api_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact heat_api_enabled\n set_fact: {heat_api_enabled: \'{{ heat_api_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-heat-api\n service: name=openstack-heat-api state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', heat_api_enabled|bool]\n - name: FFU Heat package update\n shell: yum -y update openstack-heat*\n when: [step|int == 6, is_bootstrap_node|bool]\n - command: heat-manage db_sync\n name: FFU Heat db-sync\n when: [step|int == 8, is_bootstrap_node|bool]\n - command: systemctl is-enabled openstack-heat-api-cloudwatch\n ignore_errors: true\n name: FFU check if heat_api_cloudwatch is deployed\n register: heat_api_cloudwatch_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact heat_api_cloudwatch_enabled\n set_fact: {heat_api_cloudwatch_enabled: \'{{ heat_api_cloudwatch_enabled_result.rc\n == 0 }}\'}\n when: [step|int == 0, release == 
\'ocata\']\n - name: FFU stop and disable the heat-api-cloudwatch service.\n service: name=openstack-heat-api-cloudwatch state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', heat_api_cloudwatch_enabled|bool]\n - ignore_errors: true\n name: Remove heat_api_cloudwatch package\n when: [step|int == 2, release == \'ocata\']\n yum: name=openstack-heat-api-cloudwatch state=removed\n - command: systemctl is-enabled openstack-heat-api-cfn\n ignore_errors: true\n name: FFU check if openstack-heat-api-cfn service is enabled\n register: heat_api_cfn_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact heat_api_cfn_enabled\n set_fact: {heat_api_cfn_enabled: \'{{ heat_api_cfn_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-heat-api-cfn service\n service: name=openstack-heat-api-cfn state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', heat_api_cfn_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-heat-engine\n ignore_errors: true\n name: FFU check if openstack-heat-engine is enabled\n register: heat_engine_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact heat_engine_enabled\n set_fact: {heat_engine_enabled: \'{{ heat_engine_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-heat-engine service\n service: name=openstack-heat-engine state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', heat_engine_enabled|bool]\n - ignore_errors: true\n name: Check for keystone running under apache\n register: keystone_httpd_enabled_result\n shell: httpd -t -D DUMP_VHOSTS | grep -q keystone_wsgi\n tags: common\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact keystone_httpd_enabled\n set_fact: {keystone_httpd_enabled: \'{{ keystone_httpd_enabled_result.rc == 0\n }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop 
and disable keystone (under httpd)\n service: name=httpd state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', keystone_httpd_enabled|bool, httpd_running|bool]\n - name: Keystone package update\n shell: yum -y update openstack-keystone*\n when: [step|int == 6, is_bootstrap_node|bool]\n - command: keystone-manage db_sync\n name: keystone db sync\n when: [step|int == 8, is_bootstrap_node|bool]\n - command: systemctl is-enabled --quiet memcached\n ignore_errors: true\n name: Check if memcached is deployed\n register: memcached_enabled_result\n tags: common\n when: [step|int == 0, release == \'ocata\']\n - name: memcached_enabled\n set_fact: {memcached_enabled: \'{{ memcached_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop and disable memcached service\n service: name=memcached state=stopped enabled=no\n when: [step|int == 2, release == \'ocata\', memcached_enabled|bool]\n - command: systemctl is-enabled --quiet neutron-server\n ignore_errors: true\n name: Check if neutron_server is deployed\n register: neutron_server_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact neutron_server_enabled\n set_fact: {neutron_server_enabled: \'{{ neutron_server_enabled_result.rc == 0\n }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop neutron_server\n service: name=neutron-server state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', neutron_server_enabled|bool]\n - name: Neutron package update\n shell: yum -y update openstack-neutron*\n when: [step|int == 6, is_bootstrap_node|bool]\n - name: Neutron package update workaround\n when: [step|int == 6, is_bootstrap_node|bool]\n yum: name=python-networking-odl state=latest\n - command: neutron-db-manage upgrade head\n name: Neutron db sync\n when: [step|int == 8, is_bootstrap_node|bool]\n - command: systemctl is-enabled --quiet neutron-dhcp-agent\n ignore_errors: true\n name: Check if neutron_dhcp_agent is deployed\n 
register: neutron_dhcp_agent_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact neutron_dhcp_agent_enabled\n set_fact: {neutron_dhcp_agent_enabled: \'{{ neutron_dhcp_agent_enabled_result.rc\n == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop neutron_dhcp_agent\n service: name=neutron-dhcp-agent state=stopped enabled=no\n when: [step|int == 2, release == \'ocata\', neutron_dhcp_agent_enabled|bool]\n - command: systemctl is-enabled --quiet neutron-metadata-agent\n ignore_errors: true\n name: Check if neutron_metadata_agent is deployed\n register: neutron_metadata_agent_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact neutron_metadata_agent_enabled\n set_fact: {neutron_metadata_agent_enabled: \'{{ neutron_metadata_agent_enabled_result.rc\n == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop neutron_metadata_agent\n service: name=neutron-metadata-agent state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', neutron_metadata_agent_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-nova-api\n ignore_errors: true\n name: Check if nova-api is deployed\n register: nova_api_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact nova_api_enabled\n set_fact: {nova_api_enabled: \'{{ nova_api_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop openstack-nova-api service\n service: name=openstack-nova-api state=stopped\n when: [step|int == 1, nova_api_enabled|bool, release == \'ocata\']\n - command: nova-manage db online_data_migrations\n name: Extra migration for nova tripleo/+bug/1656791\n when: [step|int == 5, release == \'ocata\', is_bootstrap_node|bool]\n - command: yum update -y *nova*\n name: Update nova packages\n when: [step|int == 6, is_bootstrap_node|bool]\n - block:\n - mysql_db: {name: nova_cell0, state: present}\n name: Create cell0 db\n - mysql_user: {host_all: true, name: 
nova, priv: \'*.*:ALL\', state: present}\n name: Grant access to cell0 db\n - copy: {content: "$transport_url = os_transport_url({\\n \'transport\' => hiera(\'messaging_service_name\',\\\n \\ \'rabbit\'),\\n \'hosts\' => any2array(hiera(\'rabbitmq_node_names\',\\\n \\ undef)),\\n \'port\' => sprintf(\'%s\',hiera(\'nova::rabbit_port\', \'5672\')\\\n \\ ),\\n \'username\' => hiera(\'nova::rabbit_userid\', \'guest\'),\\n \'password\'\\\n \\ => hiera(\'nova::rabbit_password\'),\\n \'ssl\' => sprintf(\'%s\',\\\n \\ bool2num(str2bool(hiera(\'nova::rabbit_use_ssl\', \'0\'))))\\n}) oslo::messaging::default\\\n \\ { \'nova_config\':\\n transport_url => $transport_url\\n}\\n", dest: /root/nova-api_upgrade_manifest.pp,\n mode: 384}\n name: Create puppet manifest to set transport_url in nova.conf\n - {changed_when: puppet_apply_nova_api_upgrade.rc == 2, command: \'puppet apply\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules\n --detailed-exitcodes /root/nova-api_upgrade_manifest.pp\', failed_when: \'puppet_apply_nova_api_upgrade.rc\n not in [0,2]\', name: Run puppet apply to set transport_url in nova.conf,\n register: puppet_apply_nova_api_upgrade}\n - {name: Setup cell_v2 (map cell0), shell: \'nova-manage cell_v2 map_cell0 --database_connection=mysql+pymysql://nova:6BkAm2KjhQcBQjbmFfcxWwJUq@172.17.1.10/nova_cell0\'}\n - {changed_when: nova_api_create_cell.rc == 0, failed_when: \'nova_api_create_cell.rc\n not in [0,2]\', name: Setup cell_v2 (create default cell), register: nova_api_create_cell,\n shell: \'nova-manage cell_v2 create_cell --name=\'\'default\'\' --database_connection=$(hiera\n nova::database_connection)\'}\n - {async: 300, command: nova-manage db sync, name: Setup cell_v2 (sync nova/cell\n DB), poll: 10}\n - {name: Setup cell_v2 (get cell uuid), register: nova_api_cell_uuid, shell: \'nova-manage\n cell_v2 list_cells | sed -e \'\'1,3d\'\' -e \'\'$d\'\' | awk -F \'\' *| *\'\' \'\'$2 ==\n "default" {print $4}\'\'\'}\n - 
{command: \'nova-manage cell_v2 discover_hosts --cell_uuid {{nova_api_cell_uuid.stdout}}\n --verbose\', name: Setup cell_v2 (migrate hosts)}\n - {command: \'nova-manage cell_v2 map_instances --cell_uuid {{nova_api_cell_uuid.stdout}}\',\n name: Setup cell_v2 (migrate instances)}\n when: [step|int == 7, release == \'ocata\', is_bootstrap_node|bool]\n - command: nova-manage api_db sync\n name: Sync nova_api DB\n when: [step|int == 8, is_bootstrap_node|bool]\n - command: nova-manage db online_data_migrations\n name: Online data migration for nova\n when: [step|int == 8, is_bootstrap_node|bool]\n - command: systemctl is-enabled --quiet openstack-nova-conductor\n ignore_errors: true\n name: Check if nova_conductor is deployed\n register: nova_conductor_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact nova_conductor_enabled\n set_fact: {nova_conductor_enabled: \'{{ nova_conductor_enabled_result.rc == 0\n }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop and disable nova_conductor service\n service: name=openstack-nova-conductor state=stopped\n when: [step|int == 1, release == \'ocata\', nova_conductor_enabled|bool]\n - command: systemctl is-active --quiet openstack-nova-consoleauth\n ignore_errors: true\n name: Check if nova_consoleauth is deployed\n register: nova_consoleauth_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact nova_consoleauth_enabled\n set_fact: {nova_consoleauth_enabled: \'{{ nova_consoleauth_enabled_result.rc\n == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop and disable nova-consoleauth service\n service: name=openstack-nova-consoleauth state=stopped\n when: [step|int == 1, release == \'ocata\', nova_consoleauth_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-nova-api\n ignore_errors: true\n name: Check if nova_api_metadata is deployed\n register: nova_metadata_enabled_result\n tags: common\n when: [step|int == 0, release == 
\'ocata\']\n - name: Set fact nova_metadata_enabled\n set_fact: {nova_metadata_enabled: \'{{ nova_metadata_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop and disable nova_api service\n service: name=openstack-nova-api state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', nova_metadata_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-nova-scheduler\n ignore_errors: true\n name: Check if nova_scheduler is deployed\n register: nova_scheduler_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact nova_scheduler_enabled\n set_fact: {nova_scheduler_enabled: \'{{ nova_scheduler_enabled_result.rc == 0\n }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop and disable nova-scheduler service\n service: name=openstack-nova-scheduler state=stopped\n when: [step|int == 1, release == \'ocata\', nova_scheduler_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-nova-novncproxy\n ignore_errors: true\n name: Check if nova vncproxy is deployed\n register: nova_vncproxy_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact nova_vncproxy_enabled\n set_fact: {nova_vncproxy_enabled: \'{{ nova_vncproxy_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop and disable nova-novncproxy service\n service: name=openstack-nova-novncproxy state=stopped\n when: [step|int == 1, release == \'ocata\', nova_vncproxy_enabled|bool]\n - ignore_errors: true\n name: Check cluster resource status of rabbitmq\n pacemaker_resource: {check_mode: false, resource: rabbitmq, state: show}\n register: rabbitmq_res_result\n when: [step|int == 0, release == \'ocata\', is_bootstrap_node|bool]\n - name: Set fact rabbitmq_res\n set_fact: {rabbitmq_res: \'{{ rabbitmq_res_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\', is_bootstrap_node|bool]\n - name: Disable the rabbitmq cluster resource\n pacemaker_resource: 
{resource: rabbitmq, state: disable, wait_for_resource: true}\n register: rabbitmq_output\n retries: 5\n until: rabbitmq_output.rc == 0\n when: [step|int == 2, release == \'ocata\', is_bootstrap_node|bool, rabbitmq_res|bool]\n - ignore_errors: true\n name: Check cluster resource status of redis\n pacemaker_resource: {check_mode: false, resource: redis, state: show}\n register: redis_res_result\n when: [step|int == 0, release == \'ocata\', is_bootstrap_node|bool]\n - name: Set fact redis_res\n set_fact: {redis_res: \'{{ redis_res_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\', is_bootstrap_node|bool]\n - name: Disable the redis cluster resource\n pacemaker_resource: {resource: redis, state: disable, wait_for_resource: true}\n register: redis_output\n retries: 5\n until: redis_output.rc == 0\n when: [step|int == 2, release == \'ocata\', is_bootstrap_node|bool, redis_res|bool]\n - command: systemctl is-enabled --quiet "{{ item }}"\n ignore_errors: true\n name: Check if swift-proxy or swift-object-expirer are deployed\n register: swift_proxy_services_enabled_result\n when: [step|int == 0, release == \'ocata\']\n with_items: [openstack-swift-proxy, openstack-swift-object-expirer]\n - name: Set fact swift_proxy_services_enabled\n set_fact: {swift_proxy_services_enabled: \'{{ swift_proxy_services_enabled_result\n }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop swift-proxy and swift-object-expirer services\n service: name={{ item.item }} state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', item.rc == 0]\n with_items: \'{{ swift_proxy_services_enabled.results }}\'\n - command: systemctl is-enabled --quiet "{{ item }}"\n ignore_errors: true\n name: Check if swift storage services are deployed\n register: swift_services_enabled_result\n when: [step|int == 0, release == \'ocata\']\n with_items: [openstack-swift-account-auditor, openstack-swift-account-reaper,\n openstack-swift-account-replicator, openstack-swift-account, 
openstack-swift-container-auditor,\n openstack-swift-container-replicator, openstack-swift-container-updater, openstack-swift-container,\n openstack-swift-object-auditor, openstack-swift-object-replicator, openstack-swift-object-updater,\n openstack-swift-object]\n - name: Set fact swift_services_enabled\n set_fact: {swift_services_enabled: \'{{ swift_services_enabled_result }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop swift storage services\n service: name={{ item.item }} state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', item.rc == 0]\n with_items: \'{{ swift_services_enabled.results }}\'\n - name: Register repo type and args\n set_fact:\n fast_forward_repo_args:\n tripleo_repos: {ocata: -b ocata current, pike: -b pike current, queens: -b\n queens current}\n fast_forward_repo_type: custom-script\n when: step|int == 3\n - debug: {msg: \'fast_forward_repo_type: {{ fast_forward_repo_type }} fast_forward_repo_args:\n {{ fast_forward_repo_args }}\'}\n when: step|int == 3\n - block:\n - git: {dest: /home/stack/tripleo-repos/, repo: \'https://github.com/openstack/tripleo-repos.git\'}\n name: clone tripleo-repos\n - args: {chdir: /home/stack/tripleo-repos/}\n command: python setup.py install\n name: install tripleo-repos\n - {command: \'tripleo-repos {{ fast_forward_repo_args.tripleo_repos[release]\n }}\', name: Enable tripleo-repos}\n when: [step|int == 3, ffu_packages_apply|bool, fast_forward_repo_type == \'tripleo-repos\']\n - block:\n - copy: {content: "#!/bin/bash\\nset -e\\necho \\"If you use FastForwardRepoType\\\n \\ \'custom-script\' you have to provide the upgrade repo script content.\\"\\\n \\necho \\"It will be installed as /root/ffu_upgrade_repo.sh on the node\\"\\\n \\necho \\"and passed the upstream name (ocata, pike, queens) of the release\\\n \\ as first argument\\"\\ncase $1 in\\n ocata)\\n subscription-manager\\\n \\ repos --disable=rhel-7-server-openstack-10-rpms\\n subscription-manager\\\n \\ repos 
--enable=rhel-7-server-openstack-11-rpms\\n ;;\\n pike)\\n \\\n \\ subscription-manager repos --disable=rhel-7-server-openstack-11-rpms\\n\\\n \\ subscription-manager repos --enable=rhel-7-server-openstack-12-rpms\\n\\\n \\ ;;\\n queens)\\n subscription-manager repos --disable=rhel-7-server-openstack-12-rpms\\n\\\n \\ subscription-manager repos --enable=rhel-7-server-openstack-13-rpms\\n\\\n \\ ;;\\n *)\\n echo \\"unknown release $1\\" >&2\\n exit 1\\nesac\\n",\n dest: /root/ffu_update_repo.sh, mode: 448}\n name: Create custom Script for upgrading repo.\n - {name: Execute custom script for upgrading repo., shell: \'/root/ffu_update_repo.sh\n {{release}}\'}\n when: [step|int == 3, ffu_packages_apply|bool, fast_forward_repo_type == \'custom-script\']\n role_data_global_config_settings: {}\n role_data_host_prep_tasks:\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/aodh, /var/log/containers/httpd/aodh-api]\n - copy: {content: \'Log files from aodh containers can be found under\n\n /var/log/containers/aodh and /var/log/containers/httpd/aodh-api.\n\n \', dest: /var/log/aodh/readme.txt}\n ignore_errors: true\n name: aodh logs readme\n - file: {path: /var/log/containers/aodh, state: directory}\n name: create persistent logs directory\n - file: {path: /var/log/containers/ceilometer, state: directory}\n name: create persistent logs directory\n - copy: {content: \'Log files from ceilometer containers can be found under\n\n /var/log/containers/ceilometer.\n\n \', dest: /var/log/ceilometer/readme.txt}\n ignore_errors: true\n name: ceilometer logs readme\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/cinder, /var/log/containers/httpd/cinder-api]\n - copy: {content: \'Log files from cinder containers can be found under\n\n /var/log/containers/cinder and /var/log/containers/httpd/cinder-api.\n\n \', dest: 
/var/log/cinder/readme.txt}\n ignore_errors: true\n name: cinder logs readme\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent directories\n with_items: [/var/log/containers/cinder]\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent directories\n with_items: [/var/log/containers/cinder, /var/lib/cinder]\n - file: {path: /etc/ceph, state: directory}\n name: ensure ceph configurations exist\n - name: cinder_enable_iscsi_backend fact\n set_fact: {cinder_enable_iscsi_backend: true}\n - args: {creates: /var/lib/cinder/cinder-volumes}\n command: dd if=/dev/zero of=/var/lib/cinder/cinder-volumes bs=1 count=0 seek=16384M\n name: cinder create LVM volume group dd\n when: cinder_enable_iscsi_backend\n - args: {creates: /dev/loop2, executable: /bin/bash}\n name: cinder create LVM volume group\n shell: "if ! losetup /dev/loop2; then\\n losetup /dev/loop2 /var/lib/cinder/cinder-volumes\\n\\\n fi\\nif ! pvdisplay | grep cinder-volumes; then\\n pvcreate /dev/loop2\\nfi\\n\\\n if ! 
vgdisplay | grep cinder-volumes; then\\n vgcreate cinder-volumes /dev/loop2\\n\\\n fi\\n"\n when: cinder_enable_iscsi_backend\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/glance]\n - copy: {content: \'Log files from glance containers can be found under\n\n /var/log/containers/glance.\n\n \', dest: /var/log/glance/readme.txt}\n ignore_errors: true\n name: glance logs readme\n - block:\n - name: null\n set_fact: {remote_file_path: /etc/glance/glance-metadata-file.conf}\n - file: {path: \'{{ remote_file_path }}\', state: touch}\n name: null\n - {register: file_path, stat: \'path="{{ remote_file_path }}"\'}\n - copy:\n content: {mount_point: /var/lib/glance/images, share_location: \'{{item.NETAPP_SHARE}}\',\n type: nfs}\n dest: \'{{ remote_file_path }}\'\n when: [file_path.stat.exists == true]\n with_items:\n - {NETAPP_SHARE: \'\'}\n - mount: name=/var/lib/glance/images src="{{item.NETAPP_SHARE}}" fstype=nfs4\n opts="{{item.NFS_OPTIONS}}" state=mounted\n name: null\n with_items:\n - {NETAPP_SHARE: \'\', NFS_OPTIONS: \'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0\'}\n name: Mount Netapp NFS\n vars: {netapp_nfs_backend_enable: false}\n when: netapp_nfs_backend_enable\n - mount: name=/var/lib/glance/images src="{{item.NFS_SHARE}}" fstype=nfs4 opts="{{item.NFS_OPTIONS}}"\n state=mounted\n name: Mount NFS on host\n vars: {nfs_backend_enable: false}\n when: [nfs_backend_enable]\n with_items:\n - {NFS_OPTIONS: \'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0\',\n NFS_SHARE: \'\'}\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/gnocchi, /var/log/containers/httpd/gnocchi-api]\n - copy: {content: \'Log files from gnocchi containers can be found under\n\n /var/log/containers/gnocchi and /var/log/containers/httpd/gnocchi-api.\n\n \', dest: /var/log/gnocchi/readme.txt}\n ignore_errors: true\n 
name: gnocchi logs readme\n - file: {path: /var/log/containers/gnocchi, state: directory}\n name: create persistent logs directory\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent directories\n with_items: [/var/lib/haproxy]\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/heat, /var/log/containers/httpd/heat-api]\n - copy: {content: \'Log files from heat containers can be found under\n\n /var/log/containers/heat and /var/log/containers/httpd/heat-api*.\n\n \', dest: /var/log/heat/readme.txt}\n ignore_errors: true\n name: heat logs readme\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/heat, /var/log/containers/httpd/heat-api-cfn]\n - file: {path: /var/log/containers/heat, state: directory}\n name: create persistent logs directory\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/horizon, /var/log/containers/httpd/horizon]\n - copy: {content: \'Log files from horizon containers can be found under\n\n /var/log/containers/horizon and /var/log/containers/httpd/horizon.\n\n \', dest: /var/log/horizon/readme.txt}\n ignore_errors: true\n name: horizon logs readme\n - {name: stat /lib/systemd/system/iscsid.socket, register: stat_iscsid_socket,\n stat: path=/lib/systemd/system/iscsid.socket}\n - {name: Stop and disable iscsid.socket service, service: name=iscsid.socket state=stopped\n enabled=no, when: stat_iscsid_socket.stat.exists}\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/keystone, /var/log/containers/httpd/keystone]\n - copy: {content: \'Log files from keystone containers can be found under\n\n /var/log/containers/keystone and /var/log/containers/httpd/keystone.\n\n \', dest: /var/log/keystone/readme.txt}\n ignore_errors: true\n 
name: keystone logs readme
 - copy: {content: 'Memcached container logs to stdout/stderr only.

 ', dest: /var/log/memcached-readme.txt}
 ignore_errors: true
 name: memcached logs readme
 - file: {path: '{{ item }}', state: directory}
 name: create persistent directories
 with_items: [/var/log/containers/mysql, /var/lib/mysql]
 - copy: {content: 'Log files from mysql containers can be found under

 /var/log/containers/mysql.

 ', dest: /var/log/mariadb/readme.txt}
 ignore_errors: true
 name: mysql logs readme
 - file: {path: '{{ item }}', state: directory}
 name: create persistent logs directory
 with_items: [/var/log/containers/neutron, /var/log/containers/httpd/neutron-api]
 - copy: {content: 'Log files from neutron containers can be found under

 /var/log/containers/neutron and /var/log/containers/httpd/neutron-api.

 ', dest: /var/log/neutron/readme.txt}
 ignore_errors: true
 name: neutron logs readme
 - file: {path: '{{ item }}', state: directory}
 name: create persistent logs directory
 with_items: [/var/log/containers/neutron]
 - file: {path: /var/lib/neutron, state: directory}
 name: create /var/lib/neutron
 - file: {path: '{{ item }}', state: directory}
 name: create persistent logs directory
 with_items: [/var/log/containers/nova, /var/log/containers/httpd/nova-api]
 - copy: {content: 'Log files from nova containers can be found under

 /var/log/containers/nova and /var/log/containers/httpd/nova-*.

 ', dest: /var/log/nova/readme.txt}
 ignore_errors: true
 name: nova logs readme
 - file: {path: /var/log/containers/nova, state: directory}
 name: create persistent logs directory
 - file: {path: '{{ item }}', state: directory}
 name: create persistent logs directory
 with_items: [/var/log/containers/nova, /var/log/containers/httpd/nova-placement]
 - file: {path: /var/lib/opendaylight/data/cache, state: absent}
 name: Delete cache folder
 - file: {path: '{{ item }}', state: directory}
 name: create persistent directories
 with_items: [/var/lib/opendaylight/snapshots, /var/lib/opendaylight/journal,
 /var/lib/opendaylight/data, /var/log/opendaylight, /var/log/containers/opendaylight]
 - copy: {content: 'Logs from opendaylight container can be found at /var/log/containers/opendaylight/karaf.log

 ', dest: /var/log/opendaylight/readme.txt}
 ignore_errors: true
 name: opendaylight logs readme
 - file: {path: '{{ item }}', state: directory}
 name: create persistent logs directory
 with_items: [/var/log/containers/panko, /var/log/containers/httpd/panko-api]
 - copy: {content: 'Log files from panko containers can be found under

 /var/log/containers/panko and /var/log/containers/httpd/panko-api.

 ', dest: /var/log/panko/readme.txt}
 ignore_errors: true
 name: panko logs readme
 - file: {path: '{{ item }}', state: directory}
 name: create persistent directories
 with_items: [/var/lib/rabbitmq, /var/log/containers/rabbitmq]
 - copy: {content: 'Log files from rabbitmq containers can be found under

 /var/log/containers/rabbitmq.

 ', dest: /var/log/rabbitmq/readme.txt}
 ignore_errors: true
 name: rabbitmq logs readme
 - {name: stop the Erlang port mapper on the host and make sure it cannot bind
 to the port used by container, shell: 'echo ''export ERL_EPMD_ADDRESS=127.0.0.1''
 > /etc/rabbitmq/rabbitmq-env.conf

 echo ''export ERL_EPMD_PORT=4370'' >> /etc/rabbitmq/rabbitmq-env.conf

 for pid in $(pgrep epmd --ns 1 --nslist pid); do kill $pid; done

 '}
 - file: {path: '{{ item }}', state: directory}
 name: create persistent directories
 with_items: [/var/lib/redis, /var/log/containers/redis, /var/run/redis]
 - copy: {content: 'Log files from redis containers can be found under

 /var/log/containers/redis.

 ', dest: /var/log/redis/readme.txt}
 ignore_errors: true
 name: redis logs readme
 - file: {path: '{{ item }}', state: directory}
 name: create persistent directories
 with_items: [/srv/node, /var/log/swift]
 - file: {dest: /var/log/containers/swift, src: /var/log/swift, state: link}
 name: Create swift logging symlink
 - file: {path: '{{ item }}', state: directory}
 name: create persistent directories
 with_items: [/srv/node, /var/log/swift, /var/log/containers]
 - name: Set swift_use_local_disks fact
 set_fact: {swift_use_local_disks: true}
 - file: {path: /srv/node/d1, state: directory}
 name: Create Swift d1 directory if needed
 when: swift_use_local_disks
 - copy: {content: 'Log files from swift containers can be found under

 /var/log/containers/swift and /var/log/containers/httpd/swift-*.

 ', dest: /var/log/swift/readme.txt}
 ignore_errors: true
 name: swift logs readme
 - filesystem: {dev: '/dev/{{ item }}', fstype: xfs, opts: -f -i size=1024}
 name: Format SwiftRawDisks
 with_items:
 - []
 - mount: {fstype: xfs, name: '/srv/node/{{ item }}', opts: noatime, src: '/dev/{{
 item }}', state: mounted}
 name: Mount devices defined in SwiftRawDisks
 with_items:
 - []
 role_data_kolla_config:
 /var/lib/kolla/config_files/aodh_api.json:
 command: /usr/sbin/httpd -DFOREGROUND
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'aodh:aodh', path: /var/log/aodh, recurse: true}
 /var/lib/kolla/config_files/aodh_evaluator.json:
 command: /usr/bin/aodh-evaluator
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'aodh:aodh', path: /var/log/aodh, recurse: true}
 /var/lib/kolla/config_files/aodh_listener.json:
 command: /usr/bin/aodh-listener
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'aodh:aodh', path: /var/log/aodh, recurse: true}
 /var/lib/kolla/config_files/aodh_notifier.json:
 command: /usr/bin/aodh-notifier
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'aodh:aodh', path: /var/log/aodh, recurse: true}
 /var/lib/kolla/config_files/ceilometer_agent_central.json:
 command: /usr/bin/ceilometer-polling --polling-namespaces central --logfile
 /var/log/ceilometer/central.log
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/ceilometer_agent_notification.json:
 command: /usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-panko/*}
 permissions:
 - {owner: 'root:ceilometer', path: /etc/panko, recurse: true}
 /var/lib/kolla/config_files/cinder_api.json:
 command: /usr/sbin/httpd -DFOREGROUND
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'cinder:cinder', path: /var/log/cinder, recurse: true}
 /var/lib/kolla/config_files/cinder_api_cron.json:
 command: /usr/sbin/crond -n
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'cinder:cinder', path: /var/log/cinder, recurse: true}
 /var/lib/kolla/config_files/cinder_scheduler.json:
 command: /usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf
 --config-file /etc/cinder/cinder.conf
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'cinder:cinder', path: /var/log/cinder, recurse: true}
 /var/lib/kolla/config_files/cinder_volume.json:
 command: /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf
 --config-file /etc/cinder/cinder.conf
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 - {dest: /etc/ceph/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-ceph/}
 - {dest: /etc/iscsi/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-iscsid/*}
 permissions:
 - {owner: 'cinder:cinder', path: /var/log/cinder, recurse: true}
 /var/lib/kolla/config_files/clustercheck.json:
 command: /usr/sbin/xinetd -dontfork
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/glance_api.json:
 command: /usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf
 --config-file /etc/glance/glance-api.conf
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 - {dest: /etc/ceph/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-ceph/}
 permissions:
 - {owner: 'glance:glance', path: /var/lib/glance, recurse: true}
 - {owner: 'glance:glance', path: /etc/ceph/ceph.client.openstack.keyring,
 perm: '0600'}
 /var/lib/kolla/config_files/glance_api_tls_proxy.json:
 command: /usr/sbin/httpd -DFOREGROUND
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/gnocchi_api.json:
 command: /usr/sbin/httpd -DFOREGROUND
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 - {dest: /etc/ceph/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-ceph/}
 permissions:
 - {owner: 'gnocchi:gnocchi', path: /var/log/gnocchi, recurse: true}
 - {owner: 'gnocchi:gnocchi', path: /etc/ceph/ceph.client.openstack.keyring,
 perm: '0600'}
 /var/lib/kolla/config_files/gnocchi_db_sync.json:
 command: /usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade
 --sacks-number=128
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 - {dest: /etc/ceph/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-ceph/}
 permissions:
 - {owner: 'gnocchi:gnocchi', path: /var/log/gnocchi, recurse: true}
 - {owner: 'gnocchi:gnocchi', path: /etc/ceph/ceph.client.openstack.keyring,
 perm: '0600'}
 /var/lib/kolla/config_files/gnocchi_metricd.json:
 command: /usr/bin/gnocchi-metricd
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 - {dest: /etc/ceph/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-ceph/}
 permissions:
 - {owner: 'gnocchi:gnocchi', path: /var/log/gnocchi, recurse: true}
 - {owner: 'gnocchi:gnocchi', path: /etc/ceph/ceph.client.openstack.keyring,
 perm: '0600'}
 /var/lib/kolla/config_files/gnocchi_statsd.json:
 command: /usr/bin/gnocchi-statsd
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 - {dest: /etc/ceph/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-ceph/}
 permissions:
 - {owner: 'gnocchi:gnocchi', path: /var/log/gnocchi, recurse: true}
 - {owner: 'gnocchi:gnocchi', path: /etc/ceph/ceph.client.openstack.keyring,
 perm: '0600'}
 /var/lib/kolla/config_files/haproxy.json:
 command: /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg
 config_files:
 - {dest: /, merge: true, optional: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 - {dest: /, merge: true, optional: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-tls/*}
 permissions:
 - {owner: 'haproxy:haproxy', path: /var/lib/haproxy, recurse: true}
 - {optional: true, owner: 'haproxy:haproxy', path: /etc/pki/tls/certs/haproxy/*,
 perm: '0600'}
 - {optional: true, owner: 'haproxy:haproxy', path: /etc/pki/tls/private/haproxy/*,
 perm: '0600'}
 /var/lib/kolla/config_files/heat_api.json:
 command: /usr/sbin/httpd -DFOREGROUND
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'heat:heat', path: /var/log/heat, recurse: true}
 /var/lib/kolla/config_files/heat_api_cfn.json:
 command: /usr/sbin/httpd -DFOREGROUND
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'heat:heat', path: /var/log/heat, recurse: true}
 /var/lib/kolla/config_files/heat_api_cron.json:
 command: /usr/sbin/crond -n
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'heat:heat', path: /var/log/heat, recurse: true}
 /var/lib/kolla/config_files/heat_engine.json:
 command: '/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf
 --config-file /etc/heat/heat.conf '
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'heat:heat', path: /var/log/heat, recurse: true}
 /var/lib/kolla/config_files/horizon.json:
 command: /usr/sbin/httpd -DFOREGROUND
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'apache:apache', path: /var/log/horizon/, recurse: true}
 - {owner: 'apache:apache', path: /etc/openstack-dashboard/, recurse: true}
 - {owner: 'apache:apache', path: /usr/share/openstack-dashboard/openstack_dashboard/local/,
 recurse: false}
 - {owner: 'apache:apache', path: /usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/,
 recurse: false}
 /var/lib/kolla/config_files/iscsid.json:
 command: /usr/sbin/iscsid -f
 config_files:
 - {dest: /etc/iscsi/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-iscsid/*}
 /var/lib/kolla/config_files/keystone.json:
 command: /usr/sbin/httpd -DFOREGROUND
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/keystone_cron.json:
 command: /usr/sbin/crond -n
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'keystone:keystone', path: /var/log/keystone, recurse: true}
 /var/lib/kolla/config_files/logrotate-crond.json:
 command: /usr/sbin/crond -s -n
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/mysql.json:
 command: /usr/sbin/pacemaker_remoted
 config_files:
 - {dest: /etc/libqb/force-filesystem-sockets, owner: root, perm: '0644', source: /dev/null}
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 - {dest: /, merge: true, optional: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-tls/*}
 permissions:
 - {owner: 'mysql:mysql', path: /var/log/mysql, recurse: true}
 - {optional: true, owner: 'mysql:mysql', path: /etc/pki/tls/certs/mysql.crt,
 perm: '0600'}
 - {optional: true, owner: 'mysql:mysql', path: /etc/pki/tls/private/mysql.key,
 perm: '0600'}
 /var/lib/kolla/config_files/neutron_api.json:
 command: /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf
 --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf
 --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common
 --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'neutron:neutron', path: /var/log/neutron, recurse: true}
 /var/lib/kolla/config_files/neutron_dhcp.json:
 command: /usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf
 --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini
 --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent
 --log-file=/var/log/neutron/dhcp-agent.log
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'neutron:neutron', path: /var/log/neutron, recurse: true}
 - {owner: 'neutron:neutron', path: /var/lib/neutron, recurse: true}
 - {owner: 'neutron:neutron', path: /etc/pki/tls/certs/neutron.crt}
 - {owner: 'neutron:neutron', path: /etc/pki/tls/private/neutron.key}
 /var/lib/kolla/config_files/neutron_metadata_agent.json:
 command: /usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf
 --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini
 --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent
 --log-file=/var/log/neutron/metadata-agent.log
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'neutron:neutron', path: /var/log/neutron, recurse: true}
 - {owner: 'neutron:neutron', path: /var/lib/neutron, recurse: true}
 /var/lib/kolla/config_files/neutron_server_tls_proxy.json:
 command: /usr/sbin/httpd -DFOREGROUND
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/nova_api.json:
 command: /usr/sbin/httpd -DFOREGROUND
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'nova:nova', path: /var/log/nova, recurse: true}
 /var/lib/kolla/config_files/nova_api_cron.json:
 command: /usr/sbin/crond -n
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'nova:nova', path: /var/log/nova, recurse: true}
 /var/lib/kolla/config_files/nova_conductor.json:
 command: '/usr/bin/nova-conductor '
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'nova:nova', path: /var/log/nova, recurse: true}
 /var/lib/kolla/config_files/nova_consoleauth.json:
 command: '/usr/bin/nova-consoleauth '
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'nova:nova', path: /var/log/nova, recurse: true}
 /var/lib/kolla/config_files/nova_metadata.json:
 command: '/usr/bin/nova-api-metadata '
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'nova:nova', path: /var/log/nova, recurse: true}
 /var/lib/kolla/config_files/nova_placement.json:
 command: /usr/sbin/httpd -DFOREGROUND
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'nova:nova', path: /var/log/nova, recurse: true}
 /var/lib/kolla/config_files/nova_scheduler.json:
 command: '/usr/bin/nova-scheduler '
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'nova:nova', path: /var/log/nova, recurse: true}
 /var/lib/kolla/config_files/nova_vnc_proxy.json:
 command: '/usr/bin/nova-novncproxy --web /usr/share/novnc/ '
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'nova:nova', path: /var/log/nova, recurse: true}
 /var/lib/kolla/config_files/opendaylight_api.json:
 command: /opt/opendaylight/bin/karaf server
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'odl:odl', path: /opt/opendaylight, recurse: true}
 /var/lib/kolla/config_files/panko_api.json:
 command: /usr/sbin/httpd -DFOREGROUND
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'panko:panko', path: /var/log/panko, recurse: true}
 /var/lib/kolla/config_files/rabbitmq.json:
 command: /usr/sbin/pacemaker_remoted
 config_files:
 - {dest: /etc/libqb/force-filesystem-sockets, owner: root, perm: '0644', source: /dev/null}
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 - {dest: /, merge: true, optional: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-tls/*}
 permissions:
 - {owner: 'rabbitmq:rabbitmq', path: /var/lib/rabbitmq, recurse: true}
 - {owner: 'rabbitmq:rabbitmq', path: /var/log/rabbitmq, recurse: true}
 - {optional: true, owner: 'rabbitmq:rabbitmq', path: /etc/pki/tls/certs/rabbitmq.crt,
 perm: '0600'}
 - {optional: true, owner: 'rabbitmq:rabbitmq', path: /etc/pki/tls/private/rabbitmq.key,
 perm: '0600'}
 /var/lib/kolla/config_files/redis.json:
 command: /usr/sbin/pacemaker_remoted
 config_files:
 - {dest: /etc/libqb/force-filesystem-sockets, owner: root, perm: '0644', source: /dev/null}
 - {dest: /, merge: true, optional: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 - {dest: /, merge: true, optional: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-tls/*}
 permissions:
 - {owner: 'redis:redis', path: /var/run/redis, recurse: true}
 - {owner: 'redis:redis', path: /var/lib/redis, recurse: true}
 - {owner: 'redis:redis', path: /var/log/redis, recurse: true}
 - {optional: true, owner: 'redis:redis', path: /etc/pki/tls/certs/redis.crt,
 perm: '0600'}
 - {optional: true, owner: 'redis:redis', path: /etc/pki/tls/private/redis.key,
 perm: '0600'}
 /var/lib/kolla/config_files/redis_tls_proxy.json:
 command: stunnel /etc/stunnel/stunnel.conf
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/swift_account_auditor.json:
 command: /usr/bin/swift-account-auditor /etc/swift/account-server.conf
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/swift_account_reaper.json:
 command: /usr/bin/swift-account-reaper /etc/swift/account-server.conf
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/swift_account_replicator.json:
 command: /usr/bin/swift-account-replicator /etc/swift/account-server.conf
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/swift_account_server.json:
 command: /usr/bin/swift-account-server /etc/swift/account-server.conf
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/swift_container_auditor.json:
 command: /usr/bin/swift-container-auditor /etc/swift/container-server.conf
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/swift_container_replicator.json:
 command: /usr/bin/swift-container-replicator /etc/swift/container-server.conf
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/swift_container_server.json:
 command: /usr/bin/swift-container-server /etc/swift/container-server.conf
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/swift_container_updater.json:
 command: /usr/bin/swift-container-updater /etc/swift/container-server.conf
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/swift_object_auditor.json:
 command: /usr/bin/swift-object-auditor /etc/swift/object-server.conf
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/swift_object_expirer.json:
 command: /usr/bin/swift-object-expirer /etc/swift/object-expirer.conf
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/swift_object_replicator.json:
 command: /usr/bin/swift-object-replicator /etc/swift/object-server.conf
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/swift_object_server.json:
 command: /usr/bin/swift-object-server /etc/swift/object-server.conf
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 permissions:
 - {owner: 'swift:swift', path: /var/cache/swift, recurse: true}
 /var/lib/kolla/config_files/swift_object_updater.json:
 command: /usr/bin/swift-object-updater /etc/swift/object-server.conf
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/swift_proxy.json:
 command: /usr/bin/swift-proxy-server /etc/swift/proxy-server.conf
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/swift_proxy_tls_proxy.json:
 command: /usr/sbin/httpd -DFOREGROUND
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 /var/lib/kolla/config_files/swift_rsync.json:
 command: /usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf
 config_files:
 - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}
 role_data_logging_groups: [root]
 role_data_logging_sources: []
 role_data_merged_config_settings:
 aodh::api::enable_proxy_headers_parsing: true
 aodh::api::gnocchi_external_project_owner: service
 aodh::api::host: '%{hiera(''fqdn_internal_api'')}'
 aodh::api::service_name: httpd
 aodh::auth::auth_password: CzBTgJs3cf3DFGHBpK6umAgMj
 aodh::auth::auth_region: regionOne
 aodh::auth::auth_tenant_name: service
 aodh::auth::auth_url: http://172.17.1.10:5000
 aodh::db::database_connection: mysql+pymysql://aodh:CzBTgJs3cf3DFGHBpK6umAgMj@172.17.1.10/aodh?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf
 aodh::db::mysql::allowed_hosts: ['%', '%{hiera(''mysql_bind_host'')}']
 aodh::db::mysql::dbname: aodh
 aodh::db::mysql::host: 172.17.1.10
 aodh::db::mysql::password: CzBTgJs3cf3DFGHBpK6umAgMj
 aodh::db::mysql::user: aodh
 aodh::debug: true
 aodh::keystone::auth::admin_url: http://172.17.1.10:8042
 aodh::keystone::auth::internal_url: http://172.17.1.10:8042
 aodh::keystone::auth::password: CzBTgJs3cf3DFGHBpK6umAgMj
 aodh::keystone::auth::public_url: http://10.0.0.106:8042
 aodh::keystone::auth::region: regionOne
 aodh::keystone::auth::tenant: service
 aodh::keystone::authtoken::auth_uri: http://172.17.1.10:5000
 aodh::keystone::authtoken::auth_url: http://172.17.1.10:5000
 aodh::keystone::authtoken::password: CzBTgJs3cf3DFGHBpK6umAgMj
 aodh::keystone::authtoken::project_domain_name: Default
 aodh::keystone::authtoken::project_name: service
 aodh::keystone::authtoken::user_domain_name: Default
 aodh::notification_driver: messagingv2
 aodh::policy::policies: {}
 aodh::rabbit_password: weVyVyHzxXn9URCQNmHmUCsYg
 aodh::rabbit_port: 5672
 aodh::rabbit_use_ssl: 'False'
 aodh::rabbit_userid: guest
 aodh::wsgi::apache::bind_host: internal_api
 aodh::wsgi::apache::servername: '%{hiera(''fqdn_internal_api'')}'
 aodh::wsgi::apache::ssl: false
 aodh::wsgi::apache::wsgi_process_display_name: aodh_wsgi
 aodh_redis_password: jv8TQJ7wGC7M7e6ez2GNPfke7
 apache::default_vhost: false
 apache::ip: internal_api
 apache::mod::prefork::maxclients: 256
 apache::mod::prefork::serverlimit: 256
 apache::mod::remoteip::proxy_ips: ['%{hiera(''apache_remote_proxy_ips_network'')}']
 apache::server_signature: 'Off'
 apache::server_tokens: Prod
 apache_remote_proxy_ips_network: internal_api_subnet
 ceilometer::agent::auth::auth_endpoint_type: internalURL
 ceilometer::agent::auth::auth_password: ZUMGXYGsUAsWVRjeZaJfeAv9y
 ceilometer::agent::auth::auth_project_domain_name: Default
 ceilometer::agent::auth::auth_region: regionOne
 ceilometer::agent::auth::auth_tenant_name: service
 ceilometer::agent::auth::auth_url: http://172.17.1.10:5000
 ceilometer::agent::auth::auth_user_domain_name: Default
 ceilometer::agent::notification::event_pipeline_publishers: ['gnocchi://', 'panko://']
 ceilometer::agent::notification::manage_event_pipeline: true
 ceilometer::agent::notification::manage_pipeline: false
 ceilometer::agent::notification::pipeline_publishers: ['gnocchi://']
 ceilometer::agent::polling::manage_polling: false
 ceilometer::db::mysql::allowed_hosts: ['%', '%{hiera(''mysql_bind_host'')}']
 ceilometer::db::mysql::dbname: ceilometer
 ceilometer::db::mysql::host: 172.17.1.10
 ceilometer::db::mysql::password: ZUMGXYGsUAsWVRjeZaJfeAv9y
 ceilometer::db::mysql::user: ceilometer
 ceilometer::debug: true
 ceilometer::dispatcher::gnocchi::archive_policy: low
 ceilometer::dispatcher::gnocchi::filter_project: service
 ceilometer::dispatcher::gnocchi::resources_definition_file: gnocchi_resources.yaml
 ceilometer::dispatcher::gnocchi::url: http://172.17.1.10:8041
 ceilometer::host: '%{::fqdn}'
 ceilometer::keystone::auth::admin_url: http://172.17.1.10:8777
 ceilometer::keystone::auth::configure_endpoint: false
 ceilometer::keystone::auth::internal_url: http://172.17.1.10:8777
 ceilometer::keystone::auth::password: ZUMGXYGsUAsWVRjeZaJfeAv9y
 ceilometer::keystone::auth::public_url: http://10.0.0.106:8777
 ceilometer::keystone::auth::region: regionOne
 ceilometer::keystone::auth::tenant: service
 ceilometer::keystone::authtoken::auth_uri: http://172.17.1.10:5000
 ceilometer::keystone::authtoken::auth_url: http://172.17.1.10:5000
 ceilometer::keystone::authtoken::password: ZUMGXYGsUAsWVRjeZaJfeAv9y
 ceilometer::keystone::authtoken::project_domain_name: Default
 ceilometer::keystone::authtoken::project_name: service
 ceilometer::keystone::authtoken::user_domain_name: Default
 ceilometer::notification_driver: messagingv2
 ceilometer::rabbit_heartbeat_timeout_threshold: 60
 ceilometer::rabbit_password: weVyVyHzxXn9URCQNmHmUCsYg
 ceilometer::rabbit_port: 5672
 ceilometer::rabbit_use_ssl: 'False'
 ceilometer::rabbit_userid: guest
 ceilometer::snmpd_readonly_user_password: e0e6f3b1f8575fd51ee080d6b2724feef235ed7e
 ceilometer::snmpd_readonly_username: ro_snmp_user
 ceilometer::telemetry_secret: ey9QkWYUbQMUv7hUXn2xzTrvM
 ceilometer_auth_enabled: true
 ceilometer_redis_password: jv8TQJ7wGC7M7e6ez2GNPfke7
 central_namespace: true
 cinder::api::bind_host: '%{hiera(''fqdn_internal_api'')}'
 cinder::api::enable_proxy_headers_parsing: true
 cinder::api::nova_catalog_admin_info: compute:nova:adminURL
 cinder::api::nova_catalog_info: compute:nova:internalURL
 cinder::api::service_name: httpd
 cinder::backend_host: hostgroup
 cinder::ceilometer::notification_driver: messagingv2
 cinder::config:
 DEFAULT/swift_catalog_info: {value: 'object-store:swift:internalURL'}
 cinder::cron::db_purge::age: '30'
 cinder::cron::db_purge::destination: /var/log/cinder/cinder-rowsflush.log
 cinder::cron::db_purge::hour: '0'
 cinder::cron::db_purge::minute: '1'
 cinder::cron::db_purge::month: '*'
 cinder::cron::db_purge::monthday: '*'
 cinder::cron::db_purge::user: cinder
 cinder::cron::db_purge::weekday: '*'
 cinder::database_connection: mysql+pymysql://cinder:jBfyuGFpWc3awtCvQwFuHPFxd@172.17.1.10/cinder?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf
 cinder::db::database_db_max_retries: -1
 cinder::db::database_max_retries: -1
 cinder::db::mysql::allowed_hosts: ['%', '%{hiera(''mysql_bind_host'')}']
 cinder::db::mysql::dbname: cinder
 cinder::db::mysql::host: 172.17.1.10
 cinder::db::mysql::password: jBfyuGFpWc3awtCvQwFuHPFxd
 cinder::db::mysql::user: cinder
 cinder::debug: true
 cinder::glance::glance_api_servers: http://172.17.1.10:9292
 cinder::keystone::auth::admin_url: http://172.17.1.10:8776/v1/%(tenant_id)s
 cinder::keystone::auth::admin_url_v2: http://172.17.1.10:8776/v2/%(tenant_id)s
 cinder::keystone::auth::admin_url_v3: http://172.17.1.10:8776/v3/%(tenant_id)s
 cinder::keystone::auth::internal_url: http://172.17.1.10:8776/v1/%(tenant_id)s
 cinder::keystone::auth::internal_url_v2: http://172.17.1.10:8776/v2/%(tenant_id)s
 cinder::keystone::auth::internal_url_v3: http://172.17.1.10:8776/v3/%(tenant_id)s
 cinder::keystone::auth::password: jBfyuGFpWc3awtCvQwFuHPFxd
 cinder::keystone::auth::public_url: http://10.0.0.106:8776/v1/%(tenant_id)s
 cinder::keystone::auth::public_url_v2: http://10.0.0.106:8776/v2/%(tenant_id)s
 cinder::keystone::auth::public_url_v3: http://10.0.0.106:8776/v3/%(tenant_id)s
 cinder::keystone::auth::region: regionOne
 cinder::keystone::auth::tenant: service
 cinder::keystone::authtoken::auth_uri: http://172.17.1.10:5000
 cinder::keystone::authtoken::auth_url: http://172.17.1.10:5000
 cinder::keystone::authtoken::password: jBfyuGFpWc3awtCvQwFuHPFxd
 cinder::keystone::authtoken::project_domain_name: Default
 cinder::keystone::authtoken::project_name: service
 cinder::keystone::authtoken::user_domain_name: Default
 cinder::policy::policies: {}
 cinder::rabbit_heartbeat_timeout_threshold: 60
 cinder::rabbit_password: weVyVyHzxXn9URCQNmHmUCsYg
 cinder::rabbit_port: 5672
 cinder::rabbit_use_ssl: 'False'
 cinder::rabbit_userid: guest
 cinder::scheduler::scheduler_driver: cinder.scheduler.filter_scheduler.FilterScheduler
 cinder::volume::enabled: false
 cinder::volume::manage_service: false
 cinder::wsgi::apache::bind_host: internal_api
 cinder::wsgi::apache::servername: '%{hiera(''fqdn_internal_api'')}'
 cinder::wsgi::apache::ssl: false
 cinder::wsgi::apache::workers: '%{::os_workers}'
 corosync_ipv6: false
 corosync_token_timeout: 10000
 enable_fencing: false
 enable_galera: true
 enable_load_balancer: true
 enable_panko_expirer: true
 glance::api::authtoken::auth_uri: http://172.17.1.10:5000
 glance::api::authtoken::auth_url: http://172.17.1.10:5000
 glance::api::authtoken::password: xKsvVHmnh7bftvWNCfuHaZNUZ
 glance::api::authtoken::project_name: service
 glance::api::bind_host: internal_api
 glance::api::bind_port: '9292'
 glance::api::database_connection: mysql+pymysql://glance:xKsvVHmnh7bftvWNCfuHaZNUZ@172.17.1.10/glance?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf
 glance::api::debug: true
 glance::api::enable_proxy_headers_parsing: true
 glance::api::enable_v1_api: false
 glance::api::enable_v2_api: true
 glance::api::enabled_import_methods: [web-download]
 glance::api::image_member_quota: 128
 glance::api::os_region_name: regionOne
 glance::api::pipeline: keystone
 glance::api::show_image_direct_url: true
 glance::api::show_multiple_locations: false
 glance::api::sync_db: false
 glance::backend::rbd::rbd_store_ceph_conf: /etc/ceph/ceph.conf
 glance::backend::rbd::rbd_store_pool: images
 glance::backend::rbd::rbd_store_user: openstack
 glance::backend::swift::swift_store_auth_address: http://172.17.1.10:5000/v3
 glance::backend::swift::swift_store_auth_version: 3
 glance::backend::swift::swift_store_create_container_on_put: true
 glance::backend::swift::swift_store_key: xKsvVHmnh7bftvWNCfuHaZNUZ
 glance::backend::swift::swift_store_user: service:glance
 glance::db::mysql::allowed_hosts: ['%', '%{hiera(''mysql_bind_host'')}']
 glance::db::mysql::dbname: glance
 glance::db::mysql::host: 172.17.1.10
 glance::db::mysql::password: xKsvVHmnh7bftvWNCfuHaZNUZ
 glance::db::mysql::user: glance
 glance::keystone::auth::admin_url: http://172.17.1.10:9292
 glance::keystone::auth::internal_url: http://172.17.1.10:9292
 glance::keystone::auth::password: xKsvVHmnh7bftvWNCfuHaZNUZ
 glance::keystone::auth::public_url: http://10.0.0.106:9292
 glance::keystone::auth::region: regionOne
 glance::keystone::auth::tenant: service
 glance::keystone::authtoken::project_domain_name: Default
 glance::keystone::authtoken::user_domain_name: Default
 glance::notify::rabbitmq::notification_driver: messagingv2
 glance::notify::rabbitmq::rabbit_password: weVyVyHzxXn9URCQNmHmUCsYg
 glance::notify::rabbitmq::rabbit_port: 5672
 glance::notify::rabbitmq::rabbit_use_ssl: 'False'
 glance::notify::rabbitmq::rabbit_userid: guest
 glance::policy::policies: {}
 glance_backend: swift
 glance_log_file: ''
 glance_notifier_strategy: noop
 gnocchi::api::enable_proxy_headers_parsing: true
 gnocchi::api::enabled: true
 gnocchi::api::service_name: httpd
 gnocchi::db::database_connection: mysql+pymysql://gnocchi:EMxec4K6kuZGjZmkMwu8ZMgzM@172.17.1.10/gnocchi?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf
 gnocchi::db::mysql::allowed_hosts: ['%', '%{hiera(''mysql_bind_host'')}']
 gnocchi::db::mysql::dbname: gnocchi
 gnocchi::db::mysql::host: 172.17.1.10
 gnocchi::db::mysql::password: EMxec4K6kuZGjZmkMwu8ZMgzM
 gnocchi::db::mysql::user: gnocchi
 gnocchi::db::sync::extra_opts: ' --sacks-number 128'
 gnocchi::debug: true
 gnocchi::keystone::auth::admin_url: http://172.17.1.10:8041
 gnocchi::keystone::auth::internal_url: http://172.17.1.10:8041
 gnocchi::keystone::auth::password: EMxec4K6kuZGjZmkMwu8ZMgzM
 gnocchi::keystone::auth::public_url: http://10.0.0.106:8041
 gnocchi::keystone::auth::region: regionOne
 gnocchi::keystone::auth::tenant: service
 gnocchi::keystone::authtoken::auth_uri: http://172.17.1.10:5000
 gnocchi::keystone::authtoken::auth_url: http://172.17.1.10:5000
 gnocchi::keystone::authtoken::password: EMxec4K6kuZGjZmkMwu8ZMgzM
 gnocchi::keystone::authtoken::project_domain_name: Default
 gnocchi::keystone::authtoken::project_name: service
 gnocchi::keystone::authtoken::user_domain_name: Default
 gnocchi::metricd::metric_processing_delay: 30
 gnocchi::metricd::workers: '%{::os_workers}'
 gnocchi::policy::policies: {}
 gnocchi::statsd::archive_policy_name: low
 gnocchi::statsd::flush_delay: 10
 gnocchi::statsd::project_id: 6c38cd8d-099a-4cb2-aecf-17be688e8616
 gnocchi::statsd::resource_id: 0a8b55df-f90f-491c-8cb9-7cdecec6fc26
 gnocchi::statsd::user_id: 27c0d3f8-e7ee-42f0-8317-72237d1c5ae3
 gnocchi::storage::ceph::ceph_conffile: /etc/ceph/ceph.conf
 gnocchi::storage::ceph::ceph_keyring: /etc/ceph/ceph.client.openstack.keyring
 gnocchi::storage::ceph::ceph_pool: metrics
 gnocchi::storage::ceph::ceph_username: openstack
 gnocchi::storage::s3::s3_access_key_id: ''
 gnocchi::storage::s3::s3_endpoint_url: ''
 gnocchi::storage::s3::s3_region_name: ''
 gnocchi::storage::s3::s3_secret_access_key: ''
 gnocchi::storage::swift::swift_auth_version: 3
 gnocchi::storage::swift::swift_authurl: http://172.17.1.10:5000/v3
 gnocchi::storage::swift::swift_endpoint_type: internalURL
 gnocchi::storage::swift::swift_key: EMxec4K6kuZGjZmkMwu8ZMgzM
 gnocchi::storage::swift::swift_user: service:gnocchi
 gnocchi::wsgi::apache::bind_host: internal_api
 gnocchi::wsgi::apache::servername: '%{hiera(''fqdn_internal_api'')}'
 gnocchi::wsgi::apache::ssl: false
 gnocchi::wsgi::apache::wsgi_process_display_name: gnocchi_wsgi
 gnocchi_redis_password: jv8TQJ7wGC7M7e6ez2GNPfke7
 hacluster_pwd: EGYhJtaGVMtRm42X
 haproxy_docker: true
 heat::api::bind_host: internal_api
 heat::api::service_name: httpd
 heat::api_cfn::bind_host: internal_api
 heat::api_cfn::service_name: httpd
 heat::cron::purge_deleted::age: '30'
 heat::cron::purge_deleted::age_type: days
 heat::cron::purge_deleted::destination: /dev/null
 heat::cron::purge_deleted::ensure: present
 heat::cron::purge_deleted::hour: '0'
 heat::cron::purge_deleted::maxdelay: '3600'
 heat::cron::purge_deleted::minute: '1'
 heat::cron::purge_deleted::month: '*'
 heat::cron::purge_deleted::monthday: '*'
 heat::cron::purge_deleted::user: heat
 heat::cron::purge_deleted::weekday: '*'
 heat::database_connection: mysql+pymysql://heat:PX3UgYPjTePXuhGMjM9vZV4Jq@172.17.1.10/heat?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf
 heat::db::database_db_max_retries: -1
 heat::db::database_max_retries: -1
 heat::db::mysql::allowed_hosts: ['%', '%{hiera(''mysql_bind_host'')}']
 heat::db::mysql::dbname: heat
 heat::db::mysql::host: 172.17.1.10
 heat::db::mysql::password: PX3UgYPjTePXuhGMjM9vZV4Jq
 heat::db::mysql::user: heat
 heat::debug: true
 heat::enable_proxy_headers_parsing: true
 heat::engine::auth_encryption_key: bfrkgRaAnCj6HfbXuNwQXhCKy6drEYJ6
 heat::engine::configure_delegated_roles: false
 heat::engine::convergence_engine: true
 heat::engine::heat_metadata_server_url: http://10.0.0.106:8000
 heat::engine::heat_waitcondition_server_url: http://10.0.0.106:8000/v1/waitcondition
 heat::engine::max_nested_stack_depth: 6
 heat::engine::max_resources_per_stack: 1000
heat::engine::plugin_dirs: []\n heat::engine::trusts_delegated_roles: []\n heat::heat_keystone_clients_url: http://10.0.0.106:5000\n heat::keystone::auth::admin_url: http://172.17.1.10:8004/v1/%(tenant_id)s\n heat::keystone::auth::internal_url: http://172.17.1.10:8004/v1/%(tenant_id)s\n heat::keystone::auth::password: PX3UgYPjTePXuhGMjM9vZV4Jq\n heat::keystone::auth::public_url: http://10.0.0.106:8004/v1/%(tenant_id)s\n heat::keystone::auth::region: regionOne\n heat::keystone::auth::tenant: service\n heat::keystone::auth_cfn::admin_url: http://172.17.1.10:8000/v1\n heat::keystone::auth_cfn::internal_url: http://172.17.1.10:8000/v1\n heat::keystone::auth_cfn::password: PX3UgYPjTePXuhGMjM9vZV4Jq\n heat::keystone::auth_cfn::public_url: http://10.0.0.106:8000/v1\n heat::keystone::auth_cfn::region: regionOne\n heat::keystone::auth_cfn::tenant: service\n heat::keystone::authtoken::auth_uri: http://172.17.1.10:5000\n heat::keystone::authtoken::auth_url: http://172.17.1.10:5000\n heat::keystone::authtoken::password: PX3UgYPjTePXuhGMjM9vZV4Jq\n heat::keystone::authtoken::project_domain_name: Default\n heat::keystone::authtoken::project_name: service\n heat::keystone::authtoken::user_domain_name: Default\n heat::keystone::domain::domain_admin: heat_stack_domain_admin\n heat::keystone::domain::domain_admin_email: heat_stack_domain_admin@localhost\n heat::keystone::domain::domain_name: heat_stack\n heat::keystone::domain::domain_password: 9wgDeEYVcvATDqUWh2zFgNqfr\n heat::keystone_ec2_uri: http://172.17.1.10:5000/v3/ec2tokens\n heat::max_json_body_size: 4194304\n heat::notification_driver: messagingv2\n heat::policy::policies: {}\n heat::rabbit_heartbeat_timeout_threshold: 60\n heat::rabbit_password: weVyVyHzxXn9URCQNmHmUCsYg\n heat::rabbit_port: 5672\n heat::rabbit_use_ssl: \'False\'\n heat::rabbit_userid: guest\n heat::rpc_response_timeout: 600\n heat::wsgi::apache_api::bind_host: internal_api\n heat::wsgi::apache_api::servername: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n 
heat::wsgi::apache_api::ssl: false\n heat::wsgi::apache_api_cfn::bind_host: internal_api\n heat::wsgi::apache_api_cfn::servername: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n heat::wsgi::apache_api_cfn::ssl: false\n heat::yaql_limit_iterators: 1000\n heat::yaql_memory_quota: 100000\n horizon::allowed_hosts: [\'*\']\n horizon::bind_address: internal_api\n horizon::cache_backend: django.core.cache.backends.memcached.MemcachedCache\n horizon::customization_module: \'\'\n horizon::disable_password_reveal: true\n horizon::disallow_iframe_embed: true\n horizon::django_debug: true\n horizon::django_session_engine: django.contrib.sessions.backends.cache\n horizon::enable_secure_proxy_ssl_header: true\n horizon::enforce_password_check: true\n horizon::horizon_ca: /etc/ipa/ca.crt\n horizon::keystone_url: http://172.17.1.10:5000\n horizon::listen_ssl: false\n horizon::password_validator: \'\'\n horizon::password_validator_help: \'\'\n horizon::secret_key: baVGHUaJBz\n horizon::secure_cookies: false\n horizon::servername: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n horizon::vhost_extra_params:\n access_log_format: \'%a %l %u %t \\"%r\\" %>s %b \\"%%{}{Referer}i\\" \\"%%{}{User-Agent}i\\"\'\n add_listen: true\n options: [FollowSymLinks, MultiViews]\n priority: 10\n kernel_modules:\n nf_conntrack: {}\n nf_conntrack_proto_sctp: {}\n keystone::admin_bind_host: \'%{hiera(\'\'fqdn_ctlplane\'\')}\'\n keystone::admin_password: XxK3Mh947xh2TVyaJJWb7myna\n keystone::admin_port: \'35357\'\n keystone::admin_token: 9kk8pyHvmhGqnvzwq2mFXv6dc\n keystone::config::keystone_config:\n ec2/driver: {value: keystone.contrib.ec2.backends.sql.Ec2}\n keystone::credential_keys:\n /etc/keystone/credential-keys/0: {content: NPokwa2QcGznUuI_j2TUX42gVSpiXXSF_Yn7VwRO_UM=}\n /etc/keystone/credential-keys/1: {content: aaFEmpncIiMFDqgERZW2kseCkvBDTMtbVz_pMwn2V20=}\n keystone::cron::token_flush::destination: /var/log/keystone/keystone-tokenflush.log\n keystone::cron::token_flush::ensure: present\n 
keystone::cron::token_flush::hour: [\'*\']\n keystone::cron::token_flush::maxdelay: 0\n keystone::cron::token_flush::minute: [\'1\']\n keystone::cron::token_flush::month: [\'*\']\n keystone::cron::token_flush::monthday: [\'*\']\n keystone::cron::token_flush::user: keystone\n keystone::cron::token_flush::weekday: [\'*\']\n keystone::database_connection: mysql+pymysql://keystone:9kk8pyHvmhGqnvzwq2mFXv6dc@172.17.1.10/keystone?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n keystone::db::database_db_max_retries: -1\n keystone::db::database_max_retries: -1\n keystone::db::mysql::allowed_hosts: [\'%\', \'%{hiera(\'\'mysql_bind_host\'\')}\']\n keystone::db::mysql::dbname: keystone\n keystone::db::mysql::host: 172.17.1.10\n keystone::db::mysql::password: 9kk8pyHvmhGqnvzwq2mFXv6dc\n keystone::db::mysql::user: keystone\n keystone::debug: true\n keystone::enable_credential_setup: true\n keystone::enable_fernet_setup: true\n keystone::enable_proxy_headers_parsing: true\n keystone::enable_ssl: false\n keystone::endpoint::admin_url: http://192.168.24.10:35357\n keystone::endpoint::internal_url: http://172.17.1.10:5000\n keystone::endpoint::public_url: http://10.0.0.106:5000\n keystone::endpoint::region: regionOne\n keystone::endpoint::version: \'\'\n keystone::fernet_keys:\n /etc/keystone/fernet-keys/0: {content: pvfG6wdm6OG7qoLMEFYllsxpwsUpL-wrpzmOuzzfnEM=}\n /etc/keystone/fernet-keys/1: {content: yeoGbzomaV_Y6WdIYPbJJt-4g91xeD-q3XFO3fupMtE=}\n keystone::fernet_max_active_keys: 5\n keystone::fernet_replace_keys: true\n keystone::notification_driver: messagingv2\n keystone::notification_format: basic\n keystone::policy::policies: {}\n keystone::public_bind_host: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n keystone::rabbit_heartbeat_timeout_threshold: 60\n keystone::rabbit_password: weVyVyHzxXn9URCQNmHmUCsYg\n keystone::rabbit_port: 5672\n keystone::rabbit_use_ssl: \'False\'\n keystone::rabbit_userid: guest\n keystone::roles::admin::admin_tenant: admin\n 
keystone::roles::admin::email: admin@example.com\n keystone::roles::admin::password: XxK3Mh947xh2TVyaJJWb7myna\n keystone::roles::admin::service_tenant: service\n keystone::service_name: httpd\n keystone::token_provider: fernet\n keystone::wsgi::apache::admin_bind_host: ctlplane\n keystone::wsgi::apache::admin_port: \'35357\'\n keystone::wsgi::apache::bind_host: internal_api\n keystone::wsgi::apache::servername: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n keystone::wsgi::apache::servername_admin: \'%{hiera(\'\'fqdn_ctlplane\'\')}\'\n keystone::wsgi::apache::ssl: false\n keystone::wsgi::apache::threads: 1\n keystone::wsgi::apache::workers: \'%{::os_workers}\'\n keystone_enable_db_purge: true\n keystone_enable_member: true\n keystone_ssl_certificate: \'\'\n keystone_ssl_certificate_key: \'\'\n memcached::listen_ip: internal_api\n memcached::max_memory: 50%\n memcached::udp_port: 0\n memcached::verbosity: v\n memcached_ipv6: false\n memcached_network: internal_api_subnet\n mysql::server::manage_config_file: true\n mysql::server::package_name: mariadb-galera-server\n mysql::server::root_password: 7xm4XA2YHK\n mysql_bind_host: internal_api\n mysql_clustercheck_password: Y842JReAdAaXZwRHfsjTtdqgg\n mysql_ipv6: false\n mysql_max_connections: 4096\n neutron::agents::dhcp::debug: true\n neutron::agents::dhcp::dnsmasq_dns_servers: []\n neutron::agents::dhcp::enable_force_metadata: true\n neutron::agents::dhcp::enable_isolated_metadata: false\n neutron::agents::dhcp::enable_metadata_network: false\n neutron::agents::dhcp::interface_driver: neutron.agent.linux.interface.OVSInterfaceDriver\n neutron::agents::metadata::auth_password: anbEgsRDNBffKrcVkyZd2wPYr\n neutron::agents::metadata::auth_tenant: service\n neutron::agents::metadata::auth_url: http://172.17.1.10:5000\n neutron::agents::metadata::debug: true\n neutron::agents::metadata::metadata_host: \'%{hiera(\'\'cloud_name_internal_api\'\')}\'\n neutron::agents::metadata::metadata_ip: \'%{hiera(\'\'nova_metadata_vip\'\')}\'\n 
neutron::agents::metadata::metadata_protocol: http\n neutron::agents::metadata::shared_secret: 3BMbzPEunTfkgG4nPEG4ZKUy8\n neutron::agents::ml2::ovs::local_ip: tenant\n neutron::allow_overlapping_ips: true\n neutron::bind_host: internal_api\n neutron::core_plugin: ml2\n neutron::db::database_db_max_retries: -1\n neutron::db::database_max_retries: -1\n neutron::db::mysql::allowed_hosts: [\'%\', \'%{hiera(\'\'mysql_bind_host\'\')}\']\n neutron::db::mysql::dbname: ovs_neutron\n neutron::db::mysql::host: 172.17.1.10\n neutron::db::mysql::password: anbEgsRDNBffKrcVkyZd2wPYr\n neutron::db::mysql::user: neutron\n neutron::db::sync::db_sync_timeout: 300\n neutron::db::sync::extra_params: \'\'\n neutron::debug: true\n neutron::dhcp_agent_notification: true\n neutron::dns_domain: openstacklocal\n neutron::global_physnet_mtu: 1500\n neutron::host: \'%{::fqdn}\'\n neutron::keystone::auth::admin_url: http://172.17.1.10:9696\n neutron::keystone::auth::internal_url: http://172.17.1.10:9696\n neutron::keystone::auth::password: anbEgsRDNBffKrcVkyZd2wPYr\n neutron::keystone::auth::public_url: http://10.0.0.106:9696\n neutron::keystone::auth::region: regionOne\n neutron::keystone::auth::tenant: service\n neutron::keystone::authtoken::auth_uri: http://172.17.1.10:5000\n neutron::keystone::authtoken::auth_url: http://172.17.1.10:5000\n neutron::keystone::authtoken::password: anbEgsRDNBffKrcVkyZd2wPYr\n neutron::keystone::authtoken::project_domain_name: Default\n neutron::keystone::authtoken::project_name: service\n neutron::keystone::authtoken::user_domain_name: Default\n neutron::notification_driver: messagingv2\n neutron::plugins::ml2::extension_drivers: [port_security]\n neutron::plugins::ml2::firewall_driver: iptables_hybrid\n neutron::plugins::ml2::flat_networks: [datacentre]\n neutron::plugins::ml2::mechanism_drivers: [opendaylight_v2]\n neutron::plugins::ml2::network_vlan_ranges: [\'datacentre:1:1000\']\n neutron::plugins::ml2::opendaylight::port_binding_controller: 
pseudo-agentdb-binding\n neutron::plugins::ml2::overlay_ip_version: 4\n neutron::plugins::ml2::tenant_network_types: [vxlan]\n neutron::plugins::ml2::tunnel_id_ranges: [\'1:4094\']\n neutron::plugins::ml2::type_drivers: [vxlan, vlan, flat, gre]\n neutron::plugins::ml2::vni_ranges: [\'1:4094\']\n neutron::plugins::ovs::opendaylight::allowed_network_types: [local, flat, vlan,\n vxlan, gre]\n neutron::plugins::ovs::opendaylight::enable_dpdk: false\n neutron::plugins::ovs::opendaylight::enable_hw_offload: false\n neutron::plugins::ovs::opendaylight::odl_password: redhat\n neutron::plugins::ovs::opendaylight::odl_username: odladmin\n neutron::plugins::ovs::opendaylight::provider_mappings: [\'datacentre:br-ex\']\n neutron::plugins::ovs::opendaylight::vhostuser_mode: server\n neutron::plugins::ovs::opendaylight::vhostuser_socket_dir: /var/lib/vhost_sockets\n neutron::policy::policies: {}\n neutron::purge_config: false\n neutron::rabbit_heartbeat_timeout_threshold: 60\n neutron::rabbit_password: weVyVyHzxXn9URCQNmHmUCsYg\n neutron::rabbit_port: 5672\n neutron::rabbit_use_ssl: \'False\'\n neutron::rabbit_user: guest\n neutron::server::allow_automatic_l3agent_failover: \'True\'\n neutron::server::database_connection: mysql+pymysql://neutron:anbEgsRDNBffKrcVkyZd2wPYr@172.17.1.10/ovs_neutron?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n neutron::server::enable_dvr: false\n neutron::server::enable_proxy_headers_parsing: true\n neutron::server::notifications::auth_url: http://172.17.1.10:5000\n neutron::server::notifications::endpoint_type: internal\n neutron::server::notifications::password: 6BkAm2KjhQcBQjbmFfcxWwJUq\n neutron::server::notifications::project_name: service\n neutron::server::notifications::tenant_name: service\n neutron::server::router_distributed: false\n neutron::server::sync_db: true\n neutron::service_plugins: [odl-router_v2, trunk]\n nova::api::api_bind_address: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n 
nova::api::default_floating_pool: public\n nova::api::enable_proxy_headers_parsing: true\n nova::api::enabled: true\n nova::api::instance_name_template: instance-%08x\n nova::api::metadata_listen: internal_api\n nova::api::neutron_metadata_proxy_shared_secret: 3BMbzPEunTfkgG4nPEG4ZKUy8\n nova::api::service_name: httpd\n nova::api::sync_db_api: true\n nova::api_database_connection: mysql+pymysql://nova_api:6BkAm2KjhQcBQjbmFfcxWwJUq@172.17.1.10/nova_api?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n nova::cell0_database_connection: mysql+pymysql://nova:6BkAm2KjhQcBQjbmFfcxWwJUq@172.17.1.10/nova_cell0?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n nova::cinder_catalog_info: volumev3:cinderv3:internalURL\n nova::cron::archive_deleted_rows::destination: /var/log/nova/nova-rowsflush.log\n nova::cron::archive_deleted_rows::hour: \'0\'\n nova::cron::archive_deleted_rows::max_rows: \'100\'\n nova::cron::archive_deleted_rows::minute: \'1\'\n nova::cron::archive_deleted_rows::month: \'*\'\n nova::cron::archive_deleted_rows::monthday: \'*\'\n nova::cron::archive_deleted_rows::until_complete: false\n nova::cron::archive_deleted_rows::user: nova\n nova::cron::archive_deleted_rows::weekday: \'*\'\n nova::database_connection: mysql+pymysql://nova:6BkAm2KjhQcBQjbmFfcxWwJUq@172.17.1.10/nova?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n nova::db::database_db_max_retries: -1\n nova::db::database_max_retries: -1\n nova::db::mysql::allowed_hosts: [\'%\', \'%{hiera(\'\'mysql_bind_host\'\')}\']\n nova::db::mysql::dbname: nova\n nova::db::mysql::host: 172.17.1.10\n nova::db::mysql::password: 6BkAm2KjhQcBQjbmFfcxWwJUq\n nova::db::mysql::user: nova\n nova::db::mysql_api::allowed_hosts: [\'%\', \'%{hiera(\'\'mysql_bind_host\'\')}\']\n nova::db::mysql_api::dbname: nova_api\n nova::db::mysql_api::host: 172.17.1.10\n nova::db::mysql_api::password: 6BkAm2KjhQcBQjbmFfcxWwJUq\n nova::db::mysql_api::setup_cell0: true\n 
nova::db::mysql_api::user: nova_api\n nova::db::mysql_placement::allowed_hosts: [\'%\', \'%{hiera(\'\'mysql_bind_host\'\')}\']\n nova::db::mysql_placement::dbname: nova_placement\n nova::db::mysql_placement::host: 172.17.1.10\n nova::db::mysql_placement::password: 6BkAm2KjhQcBQjbmFfcxWwJUq\n nova::db::mysql_placement::user: nova_placement\n nova::db::sync::db_sync_timeout: 300\n nova::db::sync_api::db_sync_timeout: 300\n nova::debug: true\n nova::glance_api_servers: http://172.17.1.10:9292\n nova::host: \'%{::fqdn}\'\n nova::keystone::auth::admin_url: http://172.17.1.10:8774/v2.1\n nova::keystone::auth::internal_url: http://172.17.1.10:8774/v2.1\n nova::keystone::auth::password: 6BkAm2KjhQcBQjbmFfcxWwJUq\n nova::keystone::auth::public_url: http://10.0.0.106:8774/v2.1\n nova::keystone::auth::region: regionOne\n nova::keystone::auth::tenant: service\n nova::keystone::auth_placement::admin_url: http://172.17.1.10:8778/placement\n nova::keystone::auth_placement::internal_url: http://172.17.1.10:8778/placement\n nova::keystone::auth_placement::password: 6BkAm2KjhQcBQjbmFfcxWwJUq\n nova::keystone::auth_placement::public_url: http://10.0.0.106:8778/placement\n nova::keystone::auth_placement::region: regionOne\n nova::keystone::auth_placement::tenant: service\n nova::keystone::authtoken::auth_uri: http://172.17.1.10:5000\n nova::keystone::authtoken::auth_url: http://192.168.24.10:35357\n nova::keystone::authtoken::password: 6BkAm2KjhQcBQjbmFfcxWwJUq\n nova::keystone::authtoken::project_domain_name: Default\n nova::keystone::authtoken::project_name: service\n nova::keystone::authtoken::user_domain_name: Default\n nova::my_ip: internal_api\n nova::network::neutron::dhcp_domain: \'\'\n nova::network::neutron::neutron_auth_type: v3password\n nova::network::neutron::neutron_auth_url: http://192.168.24.10:35357/v3\n nova::network::neutron::neutron_ovs_bridge: br-int\n nova::network::neutron::neutron_password: anbEgsRDNBffKrcVkyZd2wPYr\n 
nova::network::neutron::neutron_project_name: service\n nova::network::neutron::neutron_region_name: regionOne\n nova::network::neutron::neutron_url: http://172.17.1.10:9696\n nova::network::neutron::neutron_username: neutron\n nova::notification_driver: messagingv2\n nova::notification_format: unversioned\n nova::notify_on_state_change: vm_and_task_state\n nova::placement::auth_url: http://172.17.1.10:5000\n nova::placement::os_interface: internal\n nova::placement::os_region_name: regionOne\n nova::placement::password: 6BkAm2KjhQcBQjbmFfcxWwJUq\n nova::placement::project_name: service\n nova::placement_database_connection: mysql+pymysql://nova_placement:6BkAm2KjhQcBQjbmFfcxWwJUq@172.17.1.10/nova_placement?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n nova::policy::policies: {}\n nova::purge_config: false\n nova::rabbit_heartbeat_timeout_threshold: 60\n nova::rabbit_password: weVyVyHzxXn9URCQNmHmUCsYg\n nova::rabbit_port: 5672\n nova::rabbit_use_ssl: \'False\'\n nova::rabbit_userid: guest\n nova::ram_allocation_ratio: \'1.0\'\n nova::scheduler::discover_hosts_in_cells_interval: -1\n nova::scheduler::filter::scheduler_available_filters: []\n nova::scheduler::filter::scheduler_default_filters: []\n nova::scheduler::filter::scheduler_max_attempts: 3\n nova::use_ipv6: false\n nova::vncproxy::common::vncproxy_host: 10.0.0.106\n nova::vncproxy::common::vncproxy_port: \'6080\'\n nova::vncproxy::common::vncproxy_protocol: http\n nova::vncproxy::enabled: true\n nova::vncproxy::host: internal_api\n nova::wsgi::apache_api::bind_host: internal_api\n nova::wsgi::apache_api::servername: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n nova::wsgi::apache_api::ssl: false\n nova::wsgi::apache_placement::api_port: \'8778\'\n nova::wsgi::apache_placement::bind_host: internal_api\n nova::wsgi::apache_placement::servername: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n nova::wsgi::apache_placement::ssl: false\n nova_enable_db_purge: true\n nova_wsgi_enabled: true\n 
ntp::iburst_enable: true\n \'ntp::maxpoll:\': 10\n \'ntp::minpoll:\': 6\n ntp::servers: [clock.redhat.com]\n opendaylight::extra_features: [odl-mdsal-trace, odl-netvirt-openstack, odl-jolokia]\n opendaylight::log_levels: {org.opendaylight.genius: DEBUG, org.opendaylight.netvirt: DEBUG}\n opendaylight::log_max_rollover: 50\n opendaylight::log_mechanism: console\n opendaylight::manage_repositories: false\n opendaylight::odl_bind_ip: internal_api\n opendaylight::odl_rest_port: \'8081\'\n opendaylight::password: redhat\n opendaylight::snat_mechanism: conntrack\n opendaylight::username: odladmin\n opendaylight_check_url: restconf/operational/network-topology:network-topology/topology/netvirt:1\n pacemaker::corosync::cluster_name: tripleo_cluster\n pacemaker::corosync::manage_fw: false\n pacemaker::corosync::settle_tries: 360\n pacemaker::resource_defaults::defaults:\n resource-stickiness: {value: INFINITY}\n panko::api::enable_proxy_headers_parsing: true\n panko::api::event_time_to_live: \'86400\'\n panko::api::host: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n panko::api::service_name: httpd\n panko::auth::auth_password: rxzxqxVRqj9egU8HnnR44EDNu\n panko::auth::auth_region: regionOne\n panko::auth::auth_tenant_name: service\n panko::auth::auth_url: http://172.17.1.10:5000\n panko::db::database_connection: mysql+pymysql://panko:rxzxqxVRqj9egU8HnnR44EDNu@172.17.1.10/panko?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n panko::db::mysql::allowed_hosts: [\'%\', \'%{hiera(\'\'mysql_bind_host\'\')}\']\n panko::db::mysql::dbname: panko\n panko::db::mysql::host: 172.17.1.10\n panko::db::mysql::password: rxzxqxVRqj9egU8HnnR44EDNu\n panko::db::mysql::user: panko\n panko::debug: true\n panko::expirer::hour: \'0\'\n panko::expirer::minute: \'1\'\n panko::expirer::month: \'*\'\n panko::expirer::monthday: \'*\'\n panko::expirer::weekday: \'*\'\n panko::keystone::auth::admin_url: http://172.17.1.10:8977\n panko::keystone::auth::internal_url: 
http://172.17.1.10:8977\n panko::keystone::auth::password: rxzxqxVRqj9egU8HnnR44EDNu\n panko::keystone::auth::public_url: http://10.0.0.106:8977\n panko::keystone::auth::region: regionOne\n panko::keystone::auth::tenant: service\n panko::keystone::authtoken::auth_uri: http://172.17.1.10:5000\n panko::keystone::authtoken::auth_url: http://172.17.1.10:5000\n panko::keystone::authtoken::password: rxzxqxVRqj9egU8HnnR44EDNu\n panko::keystone::authtoken::project_domain_name: Default\n panko::keystone::authtoken::project_name: service\n panko::keystone::authtoken::user_domain_name: Default\n panko::policy::policies: {}\n panko::wsgi::apache::bind_host: internal_api\n panko::wsgi::apache::servername: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n panko::wsgi::apache::ssl: false\n rabbit_ipv6: false\n rabbitmq::default_pass: weVyVyHzxXn9URCQNmHmUCsYg\n rabbitmq::default_user: guest\n rabbitmq::delete_guest_user: false\n rabbitmq::erlang_cookie: wMGzfECCXTCuVVgpTMBH\n rabbitmq::file_limit: 65536\n rabbitmq::interface: internal_api\n rabbitmq::nr_ha_queues: -1\n rabbitmq::package_provider: yum\n rabbitmq::package_source: undef\n rabbitmq::port: 5672\n rabbitmq::repos_ensure: false\n rabbitmq::service_manage: false\n rabbitmq::ssl: false\n rabbitmq::ssl_depth: 1\n rabbitmq::ssl_erl_dist: false\n rabbitmq::ssl_interface: internal_api\n rabbitmq::ssl_only: false\n rabbitmq::ssl_port: 5672\n rabbitmq::tcp_keepalive: true\n rabbitmq::wipe_db_on_cookie_change: true\n rabbitmq_config_variables: {cluster_partition_handling: ignore, loopback_users: \'[]\',\n queue_master_locator: <<"min-masters">>}\n rabbitmq_environment: {NODE_IP_ADDRESS: \'\', NODE_PORT: \'\', RABBITMQ_NODENAME: \'rabbit@%{::hostname}\',\n RABBITMQ_SERVER_ERL_ARGS: \'"+K true +P 1048576 -kernel inet_default_connect_options\n [{nodelay,true}]"\', export ERL_EPMD_ADDRESS: \'%{hiera(\'\'rabbitmq::interface\'\')}\'}\n rabbitmq_kernel_variables: {inet_dist_listen_max: \'25672\', inet_dist_listen_min: \'25672\',\n 
net_ticktime: 15}\n redis::bind: internal_api\n redis::managed_by_cluster_manager: true\n redis::masterauth: jv8TQJ7wGC7M7e6ez2GNPfke7\n redis::notify_service: false\n redis::port: 6379\n redis::requirepass: jv8TQJ7wGC7M7e6ez2GNPfke7\n redis::sentinel::master_name: \'%{hiera(\'\'bootstrap_nodeid\'\')}\'\n redis::sentinel::notification_script: /usr/local/bin/redis-notifications.sh\n redis::sentinel::redis_host: \'%{hiera(\'\'bootstrap_nodeid_ip\'\')}\'\n redis::sentinel::sentinel_bind: internal_api\n redis::sentinel_auth_pass: jv8TQJ7wGC7M7e6ez2GNPfke7\n redis::service_manage: false\n redis::ulimit: \'10240\'\n redis_ipv6: false\n snmp::agentaddress: [\'udp:161\', \'udp6:[::1]:161\']\n snmp::snmpd_options: -LS0-5d\n snmpd_network: internal_api_subnet\n swift::keystone::auth::admin_url: http://172.17.3.10:8080\n swift::keystone::auth::admin_url_s3: http://172.17.3.10:8080\n swift::keystone::auth::configure_s3_endpoint: false\n swift::keystone::auth::internal_url: http://172.17.3.10:8080/v1/AUTH_%(tenant_id)s\n swift::keystone::auth::internal_url_s3: http://172.17.3.10:8080\n swift::keystone::auth::operator_roles: [admin, swiftoperator, ResellerAdmin]\n swift::keystone::auth::password: 2Q6kxeNrvczRgVewcjWhEwnaJ\n swift::keystone::auth::public_url: http://10.0.0.106:8080/v1/AUTH_%(tenant_id)s\n swift::keystone::auth::public_url_s3: http://10.0.0.106:8080\n swift::keystone::auth::region: regionOne\n swift::keystone::auth::tenant: service\n swift::proxy::account_autocreate: true\n swift::proxy::authtoken::auth_uri: http://172.17.1.10:5000\n swift::proxy::authtoken::auth_url: http://172.17.1.10:5000\n swift::proxy::authtoken::password: 2Q6kxeNrvczRgVewcjWhEwnaJ\n swift::proxy::authtoken::project_name: service\n swift::proxy::keystone::operator_roles: [admin, swiftoperator, ResellerAdmin]\n swift::proxy::node_timeout: 60\n swift::proxy::pipeline: [catch_errors, healthcheck, proxy-logging, cache, ratelimit,\n bulk, tempurl, formpost, authtoken, keystone, staticweb, copy, 
container_quotas,\n account_quotas, slo, dlo, versioned_writes, proxy-logging, proxy-server]\n swift::proxy::port: \'8080\'\n swift::proxy::proxy_local_net_ip: storage\n swift::proxy::staticweb::url_base: http://10.0.0.106:8080\n swift::proxy::versioned_writes::allow_versioned_writes: true\n swift::proxy::workers: auto\n swift::storage::all::account_pipeline: [healthcheck, account-server]\n swift::storage::all::account_server_workers: auto\n swift::storage::all::container_pipeline: [healthcheck, container-server]\n swift::storage::all::container_server_workers: auto\n swift::storage::all::incoming_chmod: Du=rwx,g=rx,o=rx,Fu=rw,g=r,o=r\n swift::storage::all::mount_check: false\n swift::storage::all::object_pipeline: [healthcheck, recon, object-server]\n swift::storage::all::object_server_workers: auto\n swift::storage::all::outgoing_chmod: Du=rwx,g=rx,o=rx,Fu=rw,g=r,o=r\n swift::storage::all::storage_local_net_ip: storage_mgmt\n swift::storage::disks::args: {}\n swift::swift_hash_path_suffix: fyaC6RwBa3bC93pAgcmRf3CXd\n sysctl_settings:\n fs.inotify.max_user_instances: {value: 1024}\n fs.suid_dumpable: {value: 0}\n kernel.dmesg_restrict: {value: 1}\n kernel.pid_max: {value: 1048576}\n net.core.netdev_max_backlog: {value: 10000}\n net.ipv4.conf.all.arp_accept: {value: 1}\n net.ipv4.conf.all.log_martians: {value: 1}\n net.ipv4.conf.all.secure_redirects: {value: 0}\n net.ipv4.conf.all.send_redirects: {value: 0}\n net.ipv4.conf.default.accept_redirects: {value: 0}\n net.ipv4.conf.default.log_martians: {value: 1}\n net.ipv4.conf.default.secure_redirects: {value: 0}\n net.ipv4.conf.default.send_redirects: {value: 0}\n net.ipv4.ip_forward: {value: 1}\n net.ipv4.neigh.default.gc_thresh1: {value: 1024}\n net.ipv4.neigh.default.gc_thresh2: {value: 2048}\n net.ipv4.neigh.default.gc_thresh3: {value: 4096}\n net.ipv4.tcp_keepalive_intvl: {value: 1}\n net.ipv4.tcp_keepalive_probes: {value: 5}\n net.ipv4.tcp_keepalive_time: {value: 5}\n net.ipv6.conf.all.accept_ra: {value: 0}\n 
net.ipv6.conf.all.accept_redirects: {value: 0}\n net.ipv6.conf.all.autoconf: {value: 0}\n net.ipv6.conf.all.disable_ipv6: {value: 0}\n net.ipv6.conf.default.accept_ra: {value: 0}\n net.ipv6.conf.default.accept_redirects: {value: 0}\n net.ipv6.conf.default.autoconf: {value: 0}\n net.ipv6.conf.default.disable_ipv6: {value: 0}\n net.netfilter.nf_conntrack_max: {value: 500000}\n net.nf_conntrack_max: {value: 500000}\n timezone::timezone: Europe/London\n tripleo.aodh_api.firewall_rules:\n 128 aodh-api:\n dport: [8042, 13042]\n tripleo.cinder_api.firewall_rules:\n 119 cinder:\n dport: [8776, 13776]\n tripleo.cinder_volume.firewall_rules:\n 120 iscsi initiator: {dport: 3260}\n tripleo.glance_api.firewall_rules:\n 112 glance_api:\n dport: [9292, 13292]\n tripleo.gnocchi_api.firewall_rules:\n 129 gnocchi-api:\n dport: [8041, 13041]\n tripleo.gnocchi_statsd.firewall_rules:\n 140 gnocchi-statsd: {dport: 8125, proto: udp}\n tripleo.haproxy.firewall_rules:\n 107 haproxy stats: {dport: 1993}\n tripleo.heat_api.firewall_rules:\n 125 heat_api:\n dport: [8004, 13004]\n tripleo.heat_api_cfn.firewall_rules:\n 125 heat_cfn:\n dport: [8000, 13800]\n tripleo.horizon.firewall_rules:\n 127 horizon:\n dport: [80, 443]\n tripleo.keystone.firewall_rules:\n 111 keystone:\n dport: [5000, 13000, \'35357\']\n tripleo.memcached.firewall_rules:\n 121 memcached: {dport: 11211, proto: tcp, source: \'%{hiera(\'\'memcached_network\'\')}\'}\n tripleo.mysql.firewall_rules:\n 104 mysql galera-bundle:\n dport: [873, 3123, 3306, 4444, 4567, 4568, 9200]\n tripleo.neutron_api.firewall_rules:\n 114 neutron api:\n dport: [9696, 13696]\n tripleo.neutron_dhcp.firewall_rules:\n 115 neutron dhcp input: {dport: 67, proto: udp}\n 116 neutron dhcp output: {chain: OUTPUT, dport: 68, proto: udp}\n tripleo.nova_api.firewall_rules:\n 113 nova_api:\n dport: [8774, 13774, 8775]\n tripleo.nova_placement.firewall_rules:\n 138 nova_placement:\n dport: [8778, 13778]\n tripleo.nova_vnc_proxy.firewall_rules:\n 137 
nova_vnc_proxy:\n dport: [6080, 13080]\n tripleo.ntp.firewall_rules:\n 105 ntp: {dport: 123, proto: udp}\n tripleo.opendaylight_api.firewall_rules:\n 137 opendaylight api:\n dport: [\'8081\', 6640, 6653, 2550, 8185]\n tripleo.opendaylight_ovs.firewall_rules:\n 118 neutron vxlan networks: {dport: 4789, proto: udp}\n 136 neutron gre networks: {proto: gre}\n tripleo.pacemaker.firewall_rules:\n 130 pacemaker tcp:\n dport: [2224, 3121, 21064]\n proto: tcp\n 131 pacemaker udp: {dport: 5405, proto: udp}\n tripleo.panko_api.firewall_rules:\n 140 panko-api:\n dport: [8977, 13977]\n tripleo.rabbitmq.firewall_rules:\n 109 rabbitmq-bundle:\n dport: [3122, 4369, 5672, 25672]\n tripleo.redis.firewall_rules:\n 108 redis-bundle:\n dport: [3124, 6379, 26379]\n tripleo.snmp.firewall_rules:\n 124 snmp: {dport: 161, proto: udp, source: \'%{hiera(\'\'snmpd_network\'\')}\'}\n tripleo.swift_proxy.firewall_rules:\n 122 swift proxy:\n dport: [8080, 13808]\n tripleo.swift_storage.firewall_rules:\n 123 swift storage:\n dport: [873, 6000, 6001, 6002]\n tripleo::fencing::config: {}\n tripleo::firewall::manage_firewall: true\n tripleo::firewall::purge_firewall_rules: false\n tripleo::glance::nfs_mount::edit_fstab: false\n tripleo::glance::nfs_mount::options: _netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0\n tripleo::glance::nfs_mount::share: \'\'\n tripleo::haproxy::ca_bundle: /etc/ipa/ca.crt\n tripleo::haproxy::crl_file: null\n tripleo::haproxy::haproxy_log_address: /dev/log\n tripleo::haproxy::haproxy_service_manage: false\n tripleo::haproxy::haproxy_stats: true\n tripleo::haproxy::haproxy_stats_password: FZRXHrrCRZcvdmsQ9P9sjKWJj\n tripleo::haproxy::haproxy_stats_user: admin\n tripleo::haproxy::mysql_clustercheck: true\n tripleo::haproxy::redis_password: jv8TQJ7wGC7M7e6ez2GNPfke7\n tripleo::packages::enable_install: false\n tripleo::profile::base::cinder::cinder_enable_db_purge: true\n tripleo::profile::base::cinder::volume::cinder_enable_iscsi_backend: true\n 
tripleo::profile::base::cinder::volume::cinder_enable_nfs_backend: false\n tripleo::profile::base::cinder::volume::cinder_enable_rbd_backend: false\n tripleo::profile::base::cinder::volume::iscsi::cinder_iscsi_address: storage\n tripleo::profile::base::cinder::volume::iscsi::cinder_iscsi_helper: lioadm\n tripleo::profile::base::cinder::volume::iscsi::cinder_iscsi_protocol: iscsi\n tripleo::profile::base::cinder::volume::iscsi::cinder_lvm_loop_device_size: 16384\n tripleo::profile::base::cinder::volume::nfs::cinder_nas_secure_file_operations: \'False\'\n tripleo::profile::base::cinder::volume::nfs::cinder_nas_secure_file_permissions: \'False\'\n tripleo::profile::base::cinder::volume::nfs::cinder_nfs_mount_options: \'\'\n tripleo::profile::base::cinder::volume::nfs::cinder_nfs_servers: []\n tripleo::profile::base::cinder::volume::rbd::cinder_rbd_ceph_conf: /etc/ceph/ceph.conf\n tripleo::profile::base::cinder::volume::rbd::cinder_rbd_extra_pools: []\n tripleo::profile::base::cinder::volume::rbd::cinder_rbd_pool_name: volumes\n tripleo::profile::base::cinder::volume::rbd::cinder_rbd_user_name: openstack\n tripleo::profile::base::database::mysql::bind_address: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n tripleo::profile::base::database::mysql::client::enable_ssl: false\n tripleo::profile::base::database::mysql::client::mysql_client_bind_address: internal_api\n tripleo::profile::base::database::mysql::client::ssl_ca: /etc/ipa/ca.crt\n tripleo::profile::base::database::mysql::client_bind_address: internal_api\n tripleo::profile::base::database::mysql::generate_dropin_file_limit: true\n tripleo::profile::base::database::redis::tls_proxy_bind_ip: internal_api\n tripleo::profile::base::database::redis::tls_proxy_fqdn: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n tripleo::profile::base::database::redis::tls_proxy_port: 6379\n tripleo::profile::base::docker::additional_sockets: [/var/lib/openstack/docker.sock]\n tripleo::profile::base::docker::configure_network: true\n 
tripleo::profile::base::docker::debug: true\n tripleo::profile::base::docker::docker_options: --log-driver=journald --signature-verification=false\n --iptables=false --live-restore\n tripleo::profile::base::docker::insecure_registries: [\'192.168.24.1:8787\']\n tripleo::profile::base::docker::network_options: --bip=172.31.0.1/24\n tripleo::profile::base::glance::api::glance_nfs_enabled: false\n tripleo::profile::base::glance::api::tls_proxy_bind_ip: internal_api\n tripleo::profile::base::glance::api::tls_proxy_fqdn: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n tripleo::profile::base::glance::api::tls_proxy_port: \'9292\'\n tripleo::profile::base::gnocchi::api::gnocchi_backend: swift\n tripleo::profile::base::gnocchi::api::incoming_storage_driver: redis\n tripleo::profile::base::haproxy::certificates_specs: {}\n tripleo::profile::base::heat::manage_db_purge: true\n tripleo::profile::base::keystone::ceilometer_notification_topics: [notifications]\n tripleo::profile::base::keystone::extra_notification_topics: []\n tripleo::profile::base::keystone::heat_admin_domain: heat_stack\n tripleo::profile::base::keystone::heat_admin_email: heat_stack_domain_admin@localhost\n tripleo::profile::base::keystone::heat_admin_password: 9wgDeEYVcvATDqUWh2zFgNqfr\n tripleo::profile::base::keystone::heat_admin_user: heat_stack_domain_admin\n tripleo::profile::base::lvm::enable_udev: false\n tripleo::profile::base::neutron::dhcp_agent_wrappers::dnsmasq_image: 192.168.24.1:8787/rhosp13/openstack-neutron-dhcp-agent:2018-07-13.1\n tripleo::profile::base::neutron::dhcp_agent_wrappers::dnsmasq_process_wrapper: /var/lib/neutron/dnsmasq_wrapper\n tripleo::profile::base::neutron::dhcp_agent_wrappers::enable_dnsmasq_wrapper: true\n tripleo::profile::base::neutron::dhcp_agent_wrappers::enable_haproxy_wrapper: true\n tripleo::profile::base::neutron::dhcp_agent_wrappers::haproxy_image: 192.168.24.1:8787/rhosp13/openstack-neutron-dhcp-agent:2018-07-13.1\n 
tripleo::profile::base::neutron::dhcp_agent_wrappers::haproxy_process_wrapper: /var/lib/neutron/dhcp_haproxy_wrapper\n tripleo::profile::base::neutron::plugins::ovs::opendaylight::vhostuser_socket_group: qemu\n tripleo::profile::base::neutron::plugins::ovs::opendaylight::vhostuser_socket_user: qemu\n tripleo::profile::base::neutron::server::l3_ha_override: \'\'\n tripleo::profile::base::neutron::server::tls_proxy_bind_ip: internal_api\n tripleo::profile::base::neutron::server::tls_proxy_fqdn: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n tripleo::profile::base::neutron::server::tls_proxy_port: \'9696\'\n tripleo::profile::base::pacemaker::remote_authkey: y4KQqvu9wPzQBRZhYd4rU87e8sDHzd8RcQuWBDcbN7QAFqAuuXmaEs4wA6CNhbFRnAYbqRMVybQtMY8ghJKxEbn6tyxRbaKGtnsmWn7XyYGtKedWc9298WxAMQ2vRzuTaj4tRNVYvMbmfpKZTZARAVGmsYPR47ahNKBUFWkfqJR7dmXCjK6QdAYXktnkCXyxu8ZTYhHpzDUfTM2UaPxYkpXNZHkMwzDjVuKcQGNfbsMyJBTCsM2GhzaYxahnaNeBk7zxcUr6W7KJPZhZfRdcDyXrATKnjnGvbRVaqd2uRuG4dZaHZEAEJAtB6TqknbsssFnjEm4scUcsBvpNTqRq7kFBZNcKvGwvypFBcZRTvkj7vYRRpfy4uMGU32YYDUgxtpkJA9PJ8R4H2euH6RhgRXDZjXnw8JbFDE7XpDYdB2DVWMeA7XVPJaxQWb4QzNkpGHkKRxURncMc38RDqMBdkfAAkFxB34e2TPMBFJPM89NPkNPxbGPGrwjbJQFHQWFXG6zuh3AQFFXTU6TsbXDVP3hmMpCtjjZdakyb2tJf3dWXH4FJXsgmTxUz6d8DbDH6AmwyWxNYzng4sUPgQHpxhjh6syBURXUCphjjf3DGndbUUTT6paw8vnERsnpWDEUvbafrKXuZXJYEMB6EA3KjqRdra9nhTrYusybqfHjQRNP6tKFEuz3kbMHBaNXmyy28dVCFAbTCJkfuH7p4j2TaAezaFv4VHRYjWNbN3vuHgDAMT6vrNw7xukcvfWmef9e8DgZaxXdeyWgmxPWfZKfEGweXVqZkcFUuhcxYNmN7tdfdnakyu9XHayeEYYPEXWKDYDVynTrnBdrh3tY2TT4YRcwwcNFEKsex2NF8QCPNuY8HwRMDAuFAc7E786XyFcu9VvCAcjAB9nyaP7c2XRuuUDKVHwtysNU4UCcJGrpu2RZVQUgGgPNmFvjtcZKCDncGDpHwshc7kYkXNyPb64yBeVvjnKPtAV2Qj9fYWuQ8hw9UNTqVrYrNF4XjD3mMTqre6W4mWMtmEA3nWw7Aq2WWJwE9vFdaufPEkvgUpwWrveRmJxKDmAZuR83rWEYcCG6Bzj6BqgYPxh7VMsuBvVRg3B8tMtFyypqrtKKN4ewJkWyrZXWRtgd9P7pePhqMEBv8sDgmBXZ67uJ6Am9M2yWfA2UJxCp7Mj6hXpxefrKaU2hcbun9g846UurmTHMgcPWH4QBVkB732uttkpCU7XKkyysCUDB2KyxNz4zX8ek3tFbT3AVsMmNBa9cTXrmymCK7GZVBA8ZDv6T3sdeBh47a7YFGr3JMZJfzGC8vYAagkXNawNRZeU8zwPwwrwjKR
hgDcAhVcT6QsPMCGUWsfwajzXpUgMuRwYfbw7MuMmn4KN78pGxnFEvy3ePdm7jukNxp2FEhVbEAewjB6eUrGbz2zKVntfdB4wXmXFt6Kevk8zG2PxGJdZJJeRdYsYWzgdVYaDDfhTHU4FN3sB7jdHx22YP4dFnka8ce6kdxtEZ6gyYywDwDqCMJqRNUdteXXfBXTMTBrNxPYdc9zz7tJCWM66M3RNBW4PzFNUFEeMEDPVgwpRjYeXRjWwPtcunFx8wDrBEanEFkFYB8ND2M6cP8tVjGMsBr34VeqZQvqUebDmPjKEjfe9UtkCWkxBuRaQyreNXeVvzpGD9j4xC2quqYgpRBW3XrEyz2uuce3vQpcaH92nTVbcfxwG6eUTwbPzzxZxjsjmrHQv3jXqdkmWF3utXvNzWz3FxqaVA2gqpF83radJebCUqcmab9VZb6mQY6WKM3ypPsQmHrgtMRcaYXaByTRF2HWxQxBZZfmhVFMTA9Dw3dZJxYqWtrZd6QzjKVPz3FN7fsV3w8T8NnqdcBUjqEmXHZ8q8umW7MMVgRssg8zN8D6RJvppAhKZEjkUDYfBKXqcn3mCdxueAm8FWQR34KmzxdKj8XDeX39Nv2CnWkWyRwA3A9qyvNWMJzWDG3gDGDPbG4dajWPTutmRWQTAqdfUhYy4XqQjfmWRT2mvtPfnGqjMvceQMhjb7GhHv7HTAfv3gAzrZWEpYkdXP7YHgWr2urNE7JAMRpd2CCh3hbJzWZ2twbWAKdtuM2HTjFaysjBDAspcGJCWugzeVBmPgEfRp9MawmCr4Q8yfb4zxdCFNzvTNTxKxs3Jn9ZP2vKNYWawjyx4UEUQvANhNh8Jsgver3PBWGtAW46EnyQEfTNxu8CFzGg2XsrvEYsxQEqsvMfc7KHGw76XRAmupFxXJDNmQeKfGEwZuPyekPRvb9eE8xqYBfsMwGxqwDhafWsktscPPXcurFFetZbNrhNvDDhxsgft8znzaz6g2jFjQpKtVYgkgvWGjFVcMcGq4KYXVKtuHwG76QkMMnygCEM2AKN8nczknAjcDZncHeX6Vbn9yawtVPC8RbdfkYgBsRYJ8MPVgmxrXQRJgfHZExnPFeesFGFrggDw3aFAWmF9TtKaVCv3DqWp6yAHvZqKryzCgUrYPpdmjhKYYFm8u9weqZhanVKuRHcCKx2nPa6PyBsn4FFrhAjU4BNExMUPDyFyZn3TfZ38FxgQ8nhKYBkfYbEksj8eA9bfGgbuzvkfdYU83FBX9Xc2Kqk9YRvEk2Y4zBf3awHD7dPYHGP44JberWYmyAZNQJkKRdFtZgjdEdqDnhjRhx8eA2YYgNyQe4HJZuzNMtTvgVezZUfg2RWzDHBYKrEpte9QPvEMqf9nQCgMka8ezKCWHFueKHXBvNgX6YaxDbNPTxvRkhDbT9M9JC4FZFTFvRXcfHuNaUwWUDaenzrVM3CuZ2Xm6sBGAeExXJECyBHgb7gF4XhJh3ARvMqxPatBE9EyuFzB8rwD7ADFxxVvEDhB9hEgDTXzrcrACnHmtzXjPZUPCjZW7uQmAcHcPjURaQEYQ8VKCVZqJjNbde2k6gBw7syMeaFEMBVeRKxm3gHpvbVzHBYeszfD23PD83Ujrz4WznxJqb37cGMJysnfDf4Rny8URJrxMtwkyAX6xcqYbtBF4ZvvcyBauZUa8KNqCbNNqpfymHvngQfsAURUUQ7JGXts2773A8FkdbnmXxZHG3hhrG2Vdm4vmFVWMwXEydtrDrhEqbFRBZAWwGp3drPczXnD74DTU2s4Cx7Y2ZdxgCtx34uncANgTj3HDe4e8ZCfUHvs72E8TwBVEV8bC47pWJ2MybEBBPMWXvdJzNvapTThdHEssAp8dcK8qrE4FtAsTFUuAV8RGyfTuTacaXKash24hmaUwKPmxW8ynGaF43ZrxHxFtwdQmjbVJhwKh66XTr8aynJdGAy8gdUp8vPcCF6RTgfzTAzUWPvdttTZxrt
m44wqpqKHUFFynvcu6G3GY4qf33ZCNAu3JuA39KggkPmMThsQFNhGB779HxwFjYdkYGwwFGR4ZQCnTVCD37jUszGewyzqw7Ecd67WFpXtkpDHXdfcGffB6X8KuwZvV7upjpCnZCFe2JzFA2uhpXha7Wg9dHQbuUFVakuatwsDsZQFZRwFKrqHHVhjyNJtgDMwvu8tnHMyF9wJbuqQcy6wFNFPFxTVsnVNDY9YjKuUfvnQt2Hbyz8HPywh7rcGPPZNeTju3tvssFTGmhFxa6gXPE8cWaq6DpWVaDTNkwkspxu3AGUw6jgpwy4ECBErQgt4MbKaHvaEZWFeM7gr8hUGaBGWWacuF396bBV43KyvaQfpvhCqwHYBvt9fmhcw2y67fTUahqv38wgEUYaggVx4YhfDtWHwXr3TsRQEzuTxs3yAG3YcucGYkEaZCCth4HDvgRXwGPJXcPMADdvg8ZCJNrWqPGRwqNx9qeq37BAqGqdsWXzVPQ4aXsTRNGUMnAarmaPTCAsMj63csTf29QCg442UUU48W936AHgWmKAC4NbyTquPge8XpXYRpG2bYqtzcZbsYJNGB6NfH8bKmUC9h9sECjjfjj6zp9tKnRcV6TwPVmX4KGFrN3wRG3tsnDEX22xbDE9fF3X7BsqFnGbJMeQKhxj3vTRtapwgqmRXpCMeF6XX7kuW4yQavvz4qDkt3wwhvYYUxhr9MmtphsNtUuBxncTt3gkrPTKMewpeUzfhDQCy3b\n tripleo::profile::base::rabbitmq::enable_internal_tls: false\n tripleo::profile::base::snmp::snmpd_password: e0e6f3b1f8575fd51ee080d6b2724feef235ed7e\n tripleo::profile::base::snmp::snmpd_user: ro_snmp_user\n tripleo::profile::base::sshd::bannertext: \'\'\n tripleo::profile::base::sshd::motd: \'\'\n tripleo::profile::base::sshd::options:\n AcceptEnv: [LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES,\n LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT, LC_IDENTIFICATION\n LC_ALL LANGUAGE, XMODIFIERS]\n AuthorizedKeysFile: .ssh/authorized_keys\n ChallengeResponseAuthentication: \'no\'\n GSSAPIAuthentication: \'yes\'\n GSSAPICleanupCredentials: \'no\'\n HostKey: [/etc/ssh/ssh_host_rsa_key, /etc/ssh/ssh_host_ecdsa_key, /etc/ssh/ssh_host_ed25519_key]\n PasswordAuthentication: \'no\'\n Subsystem: sftp /usr/libexec/openssh/sftp-server\n SyslogFacility: AUTHPRIV\n UseDNS: \'no\'\n UsePAM: \'yes\'\n UsePrivilegeSeparation: sandbox\n X11Forwarding: \'yes\'\n tripleo::profile::base::swift::proxy::ceilometer_enabled: false\n tripleo::profile::base::swift::proxy::ceilometer_messaging_use_ssl: \'False\'\n tripleo::profile::base::swift::proxy::rabbit_port: 5672\n 
tripleo::profile::base::swift::proxy::tls_proxy_bind_ip: storage\n tripleo::profile::base::swift::proxy::tls_proxy_fqdn: \'%{hiera(\'\'fqdn_storage\'\')}\'\n tripleo::profile::base::swift::proxy::tls_proxy_port: \'8080\'\n tripleo::profile::base::swift::ringbuilder::build_ring: true\n tripleo::profile::base::swift::ringbuilder::min_part_hours: 1\n tripleo::profile::base::swift::ringbuilder::part_power: 10\n tripleo::profile::base::swift::ringbuilder::raw_disk_prefix: r1z1-\n tripleo::profile::base::swift::ringbuilder::raw_disks: [\':%PORT%/d1\']\n tripleo::profile::base::swift::ringbuilder::replicas: 3\n tripleo::profile::base::swift::ringbuilder::swift_ring_get_tempurl: https://192.168.24.2:13808/v1/AUTH_aed387cf82184fb788209f67beef84fe/overcloud-swift-rings/swift-rings.tar.gz?temp_url_sig=4e0dc6e89355a285170099963795538fe44f9487&temp_url_expires=1532600992\n tripleo::profile::base::swift::ringbuilder::swift_ring_put_tempurl: https://192.168.24.2:13808/v1/AUTH_aed387cf82184fb788209f67beef84fe/overcloud-swift-rings/swift-rings.tar.gz?temp_url_sig=e22872dbc58b3effeecaad0f803ae39c074aa8bb&temp_url_expires=1532601021\n tripleo::profile::base::swift::ringbuilder:skip_consistency_check: true\n tripleo::profile::base::swift::storage::enable_swift_storage: true\n tripleo::profile::base::swift::storage::use_local_dir: true\n tripleo::profile::base::tuned::profile: \'\'\n tripleo::profile::pacemaker::cinder::volume_bundle::cinder_volume_docker_image: 192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest\n tripleo::profile::pacemaker::cinder::volume_bundle::docker_environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n tripleo::profile::pacemaker::cinder::volume_bundle::docker_volumes: [\'/etc/hosts:/etc/hosts:ro\',\n \'/etc/localtime:/etc/localtime:ro\', \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\',\n \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\', 
\'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\', \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\',\n \'/etc/puppet:/etc/puppet:ro\', \'/var/lib/kolla/config_files/cinder_volume.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro\',\n \'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro\', \'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro\',\n \'/lib/modules:/lib/modules:ro\', \'/dev/:/dev/\', \'/run/:/run/\', \'/sys:/sys\',\n \'/var/lib/cinder:/var/lib/cinder\', \'/var/log/containers/cinder:/var/log/cinder\']\n tripleo::profile::pacemaker::database::mysql::bind_address: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n tripleo::profile::pacemaker::database::mysql::ca_file: /etc/ipa/ca.crt\n tripleo::profile::pacemaker::database::mysql::gmcast_listen_addr: internal_api\n tripleo::profile::pacemaker::database::mysql_bundle::bind_address: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n tripleo::profile::pacemaker::database::mysql_bundle::control_port: 3123\n tripleo::profile::pacemaker::database::mysql_bundle::mysql_docker_image: 192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest\n tripleo::profile::pacemaker::database::redis_bundle::control_port: 3124\n tripleo::profile::pacemaker::database::redis_bundle::redis_docker_image: 192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest\n tripleo::profile::pacemaker::database::redis_bundle::tls_proxy_bind_ip: internal_api\n tripleo::profile::pacemaker::database::redis_bundle::tls_proxy_fqdn: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n tripleo::profile::pacemaker::database::redis_bundle::tls_proxy_port: 6379\n tripleo::profile::pacemaker::haproxy_bundle::haproxy_docker_image: 192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest\n tripleo::profile::pacemaker::haproxy_bundle::internal_certs_directory: /etc/pki/tls/certs/haproxy\n 
tripleo::profile::pacemaker::haproxy_bundle::internal_keys_directory: /etc/pki/tls/private/haproxy\n tripleo::profile::pacemaker::haproxy_bundle::tls_mapping: [/etc/ipa/ca.crt,\n /etc/pki/tls/private/haproxy, /etc/pki/tls/certs/haproxy, /etc/pki/tls/private/overcloud_endpoint.pem]\n tripleo::profile::pacemaker::rabbitmq_bundle::control_port: 3122\n tripleo::profile::pacemaker::rabbitmq_bundle::rabbitmq_docker_image: 192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest\n tripleo::stunnel::foreground: \'yes\'\n tripleo::stunnel::manage_service: false\n tripleo::trusted_cas::ca_map: {}\n vswitch::dpdk::driver_type: vfio-pci\n vswitch::dpdk::host_core_list: \'\'\n vswitch::dpdk::memory_channels: \'4\'\n vswitch::dpdk::pmd_core_list: \'\'\n vswitch::dpdk::socket_mem: \'\'\n vswitch::ovs::enable_hw_offload: false\n role_data_monitoring_subscriptions: [overcloud-pacemaker]\n role_data_post_update_tasks:\n - block:\n - name: store update level to update_level variable\n set_fact: {odl_update_level: 1}\n - block:\n - {name: Disable Upgrade Flag via Rest, shell: \'curl -k -v --silent --fail -u\n ODL_USERNAME:redhat -X PUT -d \'\'{ "config": { "upgradeInProgress": false\n } }\'\' -H "Content-Type: application/json" http://:8081/restconf/config/genius-mdsalutil:config\',\n when: step|int == 0}\n - copy: {content: "<config xmlns=\\"urn:opendaylight:params:xml:ns:yang:mdsalutil\\"\\\n >\\n <upgradeInProgress>false</upgradeInProgress>\\n</config>\\n", dest: /var/lib/config-data/puppet-generated/opendaylight/opt/opendaylight/etc/opendaylight/datastore/initial/config/genius-mdsalutil-config.xml,\n group: 42462, mode: 420, owner: 42462}\n name: Disable Upgrade in Config File\n when: step|int == 0\n when: odl_update_level == 2\n - block:\n - {command: systemctl is-active --quiet openvswitch, name: Check service openvswitch\n is running, register: openvswitch_running, tags: common}\n - {name: Delete OVS groups and ports, shell: sudo ovs-ofctl -O Openflow13 del-groups\n br-int; 
for tun_port in $(ovs-vsctl list-ports br-int | grep \'tun\'); do\n ovs-vsctl del-port br-int $tun_port; done;, when: (step|int == 0) and\n (openvswitch_running.rc == 0)}\n - {name: Stop openvswitch service, service: name=openvswitch state=stopped,\n when: (step|int == 1) and (openvswitch_running.rc == 0)}\n - iptables: chain=OUTPUT action=insert protocol=tcp destination_port={{ item\n }} jump=DROP state=absent\n name: Unblock OVS port per compute node.\n when: step|int == 2\n with_items: [6640, 6653, 6633]\n - {name: start openvswitch service, service: name=openvswitch state=started,\n when: step|int == 3}\n when: odl_update_level == 2\n role_data_post_upgrade_tasks:\n - getent: {database: passwd, key: neutron}\n ignore_errors: true\n name: Check for neutron user\n - name: Set neutron_user_avail\n set_fact: {neutron_user_avail: \'{{ getent_passwd is defined }}\'}\n - block:\n - {become: true, name: Ensure read/write access for files created after upgrade,\n shell: \'umask 0002\n\n setfacl -d -R -m u:neutron:rwx /var/lib/neutron\n\n setfacl -R -m u:neutron:rw /var/lib/neutron\n\n find /var/lib/neutron -type d -exec setfacl -m u:neutron:rwx \'\'{}\'\' \\;\n\n \'}\n - become: true\n ignore_errors: true\n name: Provide access for domain sockets\n shell: \'umask 0002\n\n setfacl -m u:neutron:rwx "{{ item }}"\n\n \'\n with_items: [/var/lib/neutron/metadata_proxy, /var/lib/neutron]\n when: [step|int == 2, neutron_user_avail|bool]\n - {name: Disable Upgrade Flag via Rest, shell: \'curl -k -v --silent --fail -u\n ODL_USERNAME:redhat -X PUT -d \'\'{ "config": { "upgradeInProgress": false\n } }\'\' -H "Content-Type: application/json" http://:8081/restconf/config/genius-mdsalutil:config\',\n when: step|int == 0}\n - copy: {content: "<config xmlns=\\"urn:opendaylight:params:xml:ns:yang:mdsalutil\\"\\\n >\\n <upgradeInProgress>false</upgradeInProgress>\\n</config>\\n", dest: 
/var/lib/config-data/puppet-generated/opendaylight/opt/opendaylight/etc/opendaylight/datastore/initial/config/genius-mdsalutil-config.xml,\n group: 42462, mode: 420, owner: 42462}\n name: Disable Upgrade in Config File\n when: step|int == 0\n - {command: systemctl is-active --quiet openvswitch, name: Check service openvswitch\n is running, register: openvswitch_running, tags: common}\n - {name: Delete OVS groups and ports, shell: sudo ovs-ofctl -O Openflow13 del-groups\n br-int; for tun_port in $(ovs-vsctl list-ports br-int | grep \'tun\'); do ovs-vsctl\n del-port br-int $tun_port; done;, when: (step|int == 0) and (openvswitch_running.rc\n == 0)}\n - {name: Stop openvswitch service, service: name=openvswitch state=stopped, when: (step|int\n == 1) and (openvswitch_running.rc == 0)}\n - iptables: chain=OUTPUT action=insert protocol=tcp destination_port={{ item }}\n jump=DROP state=absent\n name: Unblock OVS port per compute node.\n when: step|int == 2\n with_items: [6640, 6653, 6633]\n - {name: start openvswitch service, service: name=openvswitch state=started, when: step|int\n == 3}\n role_data_pre_upgrade_rolling_tasks: []\n role_data_puppet_config:\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-07-13.1\',\n config_volume: aodh, puppet_tags: \'aodh_api_paste_ini,aodh_config\', step_config: \'include\n tripleo::profile::base::aodh::api\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-07-13.1\',\n config_volume: aodh, puppet_tags: aodh_config, step_config: \'include tripleo::profile::base::aodh::evaluator\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-07-13.1\',\n config_volume: aodh, puppet_tags: aodh_config, step_config: \'include tripleo::profile::base::aodh::listener\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: 
\'192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-07-13.1\',\n config_volume: aodh, puppet_tags: aodh_config, step_config: \'include tripleo::profile::base::aodh::notifier\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-07-13.1\',\n config_volume: ceilometer, puppet_tags: ceilometer_config, step_config: \'include\n ::tripleo::profile::base::ceilometer::agent::polling\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-07-13.1\',\n config_volume: ceilometer, puppet_tags: ceilometer_config, step_config: \'include\n ::tripleo::profile::base::ceilometer::agent::notification\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-07-13.1\',\n config_volume: cinder, puppet_tags: \'cinder_config,file,concat,file_line\', step_config: \'include\n ::tripleo::profile::base::cinder::api\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-07-13.1\',\n config_volume: cinder, puppet_tags: \'cinder_config,file,concat,file_line\', step_config: \'include\n ::tripleo::profile::base::cinder::scheduler\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-07-13.1\',\n config_volume: cinder, puppet_tags: \'cinder_config,file,concat,file_line\', step_config: \'include\n ::tripleo::profile::base::lvm\n\n include ::tripleo::profile::base::cinder::volume\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\', config_volume: clustercheck,\n puppet_tags: file, step_config: \'include ::tripleo::profile::pacemaker::clustercheck\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-glance-api:2018-07-13.1\',\n config_volume: glance_api, puppet_tags: 
\'glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config\',\n step_config: \'include ::tripleo::profile::base::glance::api\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-07-13.1\',\n config_volume: gnocchi, puppet_tags: \'gnocchi_api_paste_ini,gnocchi_config\',\n step_config: \'include ::tripleo::profile::base::gnocchi::api\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-07-13.1\',\n config_volume: gnocchi, puppet_tags: gnocchi_config, step_config: \'include ::tripleo::profile::base::gnocchi::metricd\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-07-13.1\',\n config_volume: gnocchi, puppet_tags: gnocchi_config, step_config: \'include ::tripleo::profile::base::gnocchi::statsd\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - config_image: 192.168.24.1:8787/rhosp13/openstack-haproxy:2018-07-13.1\n config_volume: haproxy\n puppet_tags: haproxy_config\n step_config: \'exec {\'\'wait-for-settle\'\': command => \'\'/bin/true\'\' }\n\n class tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef,\n $dport = undef, $sport = undef, $proto = undef, $action = undef, $state =\n undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef,\n $extras = undef){}\n\n [\'\'pcmk_bundle\'\', \'\'pcmk_resource\'\', \'\'pcmk_property\'\', \'\'pcmk_constraint\'\',\n \'\'pcmk_resource_default\'\'].each |String $val| { noop_resource($val) }\n\n include ::tripleo::profile::pacemaker::haproxy_bundle\'\n volumes: [\'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro\', \'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro\',\n \'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro\', \'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro\']\n - {config_image: 
\'192.168.24.1:8787/rhosp13/openstack-heat-api:2018-07-13.1\',\n config_volume: heat_api, puppet_tags: \'heat_config,file,concat,file_line\', step_config: \'include\n ::tripleo::profile::base::heat::api\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-heat-api-cfn:2018-07-13.1\',\n config_volume: heat_api_cfn, puppet_tags: \'heat_config,file,concat,file_line\',\n step_config: \'include ::tripleo::profile::base::heat::api_cfn\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-heat-api:2018-07-13.1\',\n config_volume: heat, puppet_tags: \'heat_config,file,concat,file_line\', step_config: \'include\n ::tripleo::profile::base::heat::engine\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-horizon:2018-07-13.1\', config_volume: horizon,\n puppet_tags: horizon_config, step_config: \'include ::tripleo::profile::base::horizon\n\n \'}\n - config_image: 192.168.24.1:8787/rhosp13/openstack-iscsid:2018-07-13.1\n config_volume: iscsid\n puppet_tags: iscsid_config\n step_config: include ::tripleo::profile::base::iscsid\n volumes: [\'/etc/iscsi:/etc/iscsi\']\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-keystone:2018-07-13.1\',\n config_volume: keystone, puppet_tags: \'keystone_config,keystone_domain_config\',\n step_config: \'[\'\'Keystone_user\'\', \'\'Keystone_endpoint\'\', \'\'Keystone_domain\'\',\n \'\'Keystone_tenant\'\', \'\'Keystone_user_role\'\', \'\'Keystone_role\'\', \'\'Keystone_service\'\'].each\n |String $val| { noop_resource($val) }\n\n include ::tripleo::profile::base::keystone\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-memcached:2018-07-13.1\',\n config_volume: memcached, puppet_tags: file, step_config: \'include ::tripleo::profile::base::memcached\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\', config_volume: mysql,\n 
puppet_tags: file, step_config: \'[\'\'Mysql_datadir\'\', \'\'Mysql_user\'\', \'\'Mysql_database\'\',\n \'\'Mysql_grant\'\', \'\'Mysql_plugin\'\'].each |String $val| { noop_resource($val)\n }\n\n exec {\'\'wait-for-settle\'\': command => \'\'/bin/true\'\' }\n\n include ::tripleo::profile::pacemaker::database::mysql_bundle\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-neutron-server-opendaylight:2018-07-13.1\',\n config_volume: neutron, puppet_tags: \'neutron_config,neutron_api_config\', step_config: \'include\n tripleo::profile::base::neutron::server\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-neutron-server-opendaylight:2018-07-13.1\',\n config_volume: neutron, puppet_tags: neutron_plugin_ml2, step_config: \'include\n ::tripleo::profile::base::neutron::plugins::ml2\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-neutron-server-opendaylight:2018-07-13.1\',\n config_volume: neutron, puppet_tags: \'neutron_config,neutron_dhcp_agent_config\',\n step_config: \'include tripleo::profile::base::neutron::dhcp\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-neutron-server-opendaylight:2018-07-13.1\',\n config_volume: neutron, puppet_tags: \'neutron_config,neutron_metadata_agent_config\',\n step_config: \'include tripleo::profile::base::neutron::metadata\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\',\n config_volume: nova, puppet_tags: nova_config, step_config: \'[\'\'Nova_cell_v2\'\'].each\n |String $val| { noop_resource($val) }\n\n include tripleo::profile::base::nova::api\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\',\n config_volume: nova, puppet_tags: nova_config, step_config: \'include tripleo::profile::base::nova::conductor\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - 
{config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\',\n config_volume: nova, puppet_tags: nova_config, step_config: \'include tripleo::profile::base::nova::consoleauth\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\',\n config_volume: nova, puppet_tags: nova_config, step_config: \'\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-placement-api:2018-07-13.1\',\n config_volume: nova_placement, puppet_tags: nova_config, step_config: \'include\n tripleo::profile::base::nova::placement\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\',\n config_volume: nova, puppet_tags: nova_config, step_config: \'include tripleo::profile::base::nova::scheduler\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\',\n config_volume: nova, puppet_tags: nova_config, step_config: \'include tripleo::profile::base::nova::vncproxy\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-cron:2018-07-13.1\', config_volume: crond,\n step_config: \'include ::tripleo::profile::base::logging::logrotate\'}\n - config_image: 192.168.24.1:8787/rhosp13/openstack-opendaylight:2018-07-13.1\n config_volume: opendaylight\n puppet_tags: odl_user,odl_keystore\n step_config: \'include tripleo::profile::base::neutron::opendaylight\n\n \'\n volumes: []\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-panko-api:2018-07-13.1\',\n config_volume: panko, puppet_tags: \'panko_api_paste_ini,panko_config\', step_config: \'include\n tripleo::profile::base::panko::api\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-07-13.1\',\n 
config_volume: rabbitmq, puppet_tags: file, step_config: \'[\'\'Rabbitmq_policy\'\',\n \'\'Rabbitmq_user\'\'].each |String $val| { noop_resource($val) }\n\n include ::tripleo::profile::base::rabbitmq\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-redis:2018-07-13.1\', config_volume: redis,\n puppet_tags: exec, step_config: \'include ::tripleo::profile::pacemaker::database::redis_bundle\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-07-13.1\',\n config_volume: swift, puppet_tags: \'swift_config,swift_proxy_config,swift_keymaster_config\',\n step_config: \'include ::tripleo::profile::base::swift::proxy\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-07-13.1\',\n config_volume: swift_ringbuilder, puppet_tags: \'exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball\',\n step_config: \'include ::tripleo::profile::base::swift::ringbuilder\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-07-13.1\',\n config_volume: swift, puppet_tags: \'swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server\',\n step_config: \'include ::tripleo::profile::base::swift::storage\n\n\n class xinetd() {}\'}\n role_data_service_config_settings: {}\n role_data_service_metadata_settings: null\n role_data_service_names: [aodh_api, aodh_evaluator, aodh_listener, aodh_notifier,\n ca_certs, ceilometer_api_disabled, ceilometer_collector_disabled, ceilometer_expirer_disabled,\n ceilometer_agent_central, ceilometer_agent_notification, cinder_api, cinder_scheduler,\n cinder_volume, clustercheck, docker, glance_api, glance_registry_disabled, gnocchi_api,\n gnocchi_metricd, gnocchi_statsd, haproxy, heat_api, 
heat_api_cloudwatch_disabled,\n heat_api_cfn, heat_engine, horizon, iscsid, kernel, keystone, memcached, mongodb_disabled,\n mysql, mysql_client, neutron_api, neutron_plugin_ml2_odl, neutron_dhcp, neutron_metadata,\n nova_api, nova_conductor, nova_consoleauth, nova_metadata, nova_placement, nova_scheduler,\n nova_vnc_proxy, ntp, logrotate_crond, opendaylight_api, opendaylight_ovs, pacemaker,\n panko_api, rabbitmq, redis, snmp, sshd, swift_proxy, swift_ringbuilder, swift_storage,\n timezone, tripleo_firewall, tripleo_packages, tuned]\n role_data_step_config: "# Copyright 2014 Red Hat, Inc.\\n# All Rights Reserved.\\n\\\n #\\n# Licensed under the Apache License, Version 2.0 (the \\"License\\"); you may\\n\\\n # not use this file except in compliance with the License. You may obtain\\n\\\n # a copy of the License at\\n#\\n# http://www.apache.org/licenses/LICENSE-2.0\\n\\\n #\\n# Unless required by applicable law or agreed to in writing, software\\n#\\\n \\ distributed under the License is distributed on an \\"AS IS\\" BASIS, WITHOUT\\n\\\n # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the\\n\\\n # License for the specific language governing permissions and limitations\\n\\\n # under the License.\\n\\n# Common config, from tripleo-heat-templates/puppet/manifests/overcloud_common.pp\\n\\\n # The content of this file will be used to generate\\n# the puppet manifests\\\n \\ for all roles, the placeholder\\n# Controller will be replaced by \'controller\',\\\n \\ \'blockstorage\',\\n# \'cephstorage\' and all the deployed roles.\\n\\nif hiera(\'step\')\\\n \\ >= 4 {\\n hiera_include(\'Controller_classes\', [])\\n}\\n\\n$package_manifest_name\\\n \\ = join([\'/var/lib/tripleo/installed-packages/overcloud_Controller\', hiera(\'step\')])\\n\\\n package_manifest{$package_manifest_name: ensure => present}\\n\\n# End of overcloud_common.pp\\n\\\n \\ninclude ::tripleo::trusted_cas\\ninclude ::tripleo::profile::base::docker\\n\\\n \\ninclude ::tripleo::profile::base::kernel\\ninclude ::tripleo::profile::base::database::mysql::client\\n\\\n include ::tripleo::profile::base::time::ntp\\ninclude tripleo::profile::base::neutron::plugins::ovs::opendaylight\\n\\\n \\ninclude ::tripleo::profile::base::pacemaker\\n\\ninclude ::tripleo::profile::base::snmp\\n\\\n \\ninclude ::tripleo::profile::base::sshd\\n\\ninclude ::timezone\\ninclude ::tripleo::firewall\\n\\\n \\ninclude ::tripleo::packages\\n\\ninclude ::tripleo::profile::base::tuned"\n role_data_update_tasks:\n - block:\n - name: Get docker Cinder-Volume image\n set_fact: {docker_image: \'192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-07-13.1\',\n docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest\'}\n - {name: Get previous Cinder-Volume image id, register: cinder_volume_image_id,\n shell: \'docker images | awk \'\'/cinder-volume.* pcmklatest/{print $3}\'\' | uniq\'}\n - block:\n - {name: Get a list of container using Cinder-Volume image, register: cinder_volume_containers_to_destroy,\n shell: \'docker ps -a -q -f \'\'ancestor={{cinder_volume_image_id.stdout}}\'\'\'}\n 
- {name: Remove any container using the same Cinder-Volume image, shell: \'docker\n rm -fv {{item}}\', with_items: \'{{ cinder_volume_containers_to_destroy.stdout_lines\n }}\'}\n - {name: Remove previous Cinder-Volume images, shell: \'docker rmi -f {{cinder_volume_image_id.stdout}}\'}\n when: [cinder_volume_image_id.stdout != \'\']\n - {command: \'docker pull {{docker_image}}\', name: Pull latest Cinder-Volume\n images}\n - {name: Retag pcmklatest to latest Cinder-Volume image, shell: \'docker tag\n {{docker_image}} {{docker_image_latest}}\'}\n name: Cinder-Volume fetch and retag container image for pacemaker\n when: step|int == 2\n - block:\n - {failed_when: false, name: Detect if puppet on the docker profile would restart\n the service, register: puppet_docker_noop_output, shell: "puppet apply --noop\\\n \\ --summarize --detailed-exitcodes --verbose \\\\\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules\\\n \\ \\\\\\n --color=false -e \\"class { \'tripleo::profile::base::docker\': step\\\n \\ => 1, }\\" 2>&1 | \\\\\\nawk -F \\":\\" \'/Out of sync:/ { print $2}\'\\n"}\n - {changed_when: docker_check_update.rc == 100, failed_when: \'docker_check_update.rc\n not in [0, 100]\', name: Is docker going to be updated, register: docker_check_update,\n shell: yum check-update docker}\n - {name: Set docker_rpm_needs_update fact, set_fact: \'docker_rpm_needs_update={{\n docker_check_update.rc == 100 }}\'}\n - {name: Set puppet_docker_is_outofsync fact, set_fact: \'puppet_docker_is_outofsync={{\n puppet_docker_noop_output.stdout|trim|int >= 1 }}\'}\n - {name: Stop all containers, shell: docker ps -q | xargs --no-run-if-empty\n -n1 docker stop, when: puppet_docker_is_outofsync or docker_rpm_needs_update}\n - name: Stop docker\n service: {name: docker, state: stopped}\n when: puppet_docker_is_outofsync or docker_rpm_needs_update\n - {name: Update the docker package, when: docker_rpm_needs_update, yum: name=docker\n state=latest 
update_cache=yes}\n - {changed_when: puppet_docker_apply.rc == 2, failed_when: \'puppet_docker_apply.rc\n not in [0, 2]\', name: Apply puppet which will start the service again, register: puppet_docker_apply,\n shell: "puppet apply --detailed-exitcodes --verbose \\\\\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules\\\n \\ \\\\\\n -e \\"class { \'tripleo::profile::base::docker\': step => 1, }\\"\\n"}\n when: step|int == 2\n - block:\n - name: Check for haproxy Kolla configuration\n register: haproxy_kolla_config\n stat: {path: /var/lib/config-data/puppet-generated/haproxy}\n - name: Check if haproxy is already containerized\n set_fact: {haproxy_containerized: \'{{haproxy_kolla_config.stat.isdir | default(false)}}\'}\n - {command: hiera -c /etc/puppet/hiera.yaml bootstrap_nodeid, name: get bootstrap\n nodeid, register: bootstrap_node, tags: common}\n - {name: set is_bootstrap_node fact, set_fact: \'is_bootstrap_node={{bootstrap_node.stdout|lower\n == ansible_hostname|lower}}\', tags: common}\n name: Set HAProxy upgrade facts\n - block:\n - {command: \'cibadmin --query --xpath "//storage-mapping[@id=\'\'haproxy-cert\'\']"\',\n ignore_errors: true, name: Check haproxy public certificate configuration\n in pacemaker, register: haproxy_cert_mounted}\n - name: Disable the haproxy cluster resource\n pacemaker_resource: {resource: haproxy-bundle, state: disable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n when: haproxy_cert_mounted.rc == 6\n - name: Set HAProxy public cert volume mount fact\n set_fact: {haproxy_public_cert_path: /etc/pki/tls/private/overcloud_endpoint.pem,\n haproxy_public_tls_enabled: false}\n - {command: \'pcs resource bundle update haproxy-bundle storage-map add id=haproxy-cert\n source-dir={{ haproxy_public_cert_path }} target-dir=/var/lib/kolla/config_files/src-tls/{{\n haproxy_public_cert_path }} options=ro\', name: Add a bind mount for public\n certificate in the 
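[Editor's note] The "Is docker going to be updated" task above keys off the exit status of `yum check-update docker`. A minimal sketch of that rc-to-fact mapping, assuming yum's documented convention (100 = updates pending, 0 = nothing to do); the function name is illustrative, not part of the workflow:

```shell
# Maps the exit status of `yum check-update docker` onto the
# docker_rpm_needs_update fact used by the update tasks above.
# rc=100: updates available; rc=0: already current; anything else: error.
check_update_rc_to_fact() {
  case "$1" in
    100) echo "true" ;;
    0)   echo "false" ;;
    *)   echo "error"; return 1 ;;
  esac
}
# Real invocation on a host (not run here):
#   yum check-update docker; check_update_rc_to_fact $?
```

This is why the task sets `changed_when: docker_check_update.rc == 100` and fails only when rc is neither 0 nor 100.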
haproxy bundle, when: haproxy_cert_mounted.rc == 6 and\n haproxy_public_tls_enabled|bool}\n - name: Enable the haproxy cluster resource\n pacemaker_resource: {resource: haproxy-bundle, state: enable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n when: haproxy_cert_mounted.rc == 6\n name: Mount TLS cert if needed\n when: [step|int == 1, haproxy_containerized|bool, is_bootstrap_node]\n - block:\n - name: Get docker Haproxy image\n set_fact: {docker_image: \'192.168.24.1:8787/rhosp13/openstack-haproxy:2018-07-13.1\',\n docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest\'}\n - {name: Get previous Haproxy image id, register: haproxy_image_id, shell: \'docker\n images | awk \'\'/haproxy.* pcmklatest/{print $3}\'\' | uniq\'}\n - block:\n - {name: Get a list of container using Haproxy image, register: haproxy_containers_to_destroy,\n shell: \'docker ps -a -q -f \'\'ancestor={{haproxy_image_id.stdout}}\'\'\'}\n - {name: Remove any container using the same Haproxy image, shell: \'docker\n rm -fv {{item}}\', with_items: \'{{ haproxy_containers_to_destroy.stdout_lines\n }}\'}\n - {name: Remove previous Haproxy images, shell: \'docker rmi -f {{haproxy_image_id.stdout}}\'}\n when: [haproxy_image_id.stdout != \'\']\n - {command: \'docker pull {{docker_image}}\', name: Pull latest Haproxy images}\n - {name: Retag pcmklatest to latest Haproxy image, shell: \'docker tag {{docker_image}}\n {{docker_image_latest}}\'}\n name: Haproxy fetch and retag container image for pacemaker\n when: step|int == 2\n - block:\n - name: Get docker Mariadb image\n set_fact: {docker_image: \'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\',\n docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest\'}\n - {name: Get previous Mariadb image id, register: mariadb_image_id, shell: \'docker\n images | awk \'\'/mariadb.* pcmklatest/{print $3}\'\' | uniq\'}\n - block:\n - {name: Get a list of container using Mariadb 
image, register: mariadb_containers_to_destroy,\n shell: \'docker ps -a -q -f \'\'ancestor={{mariadb_image_id.stdout}}\'\'\'}\n - {name: Remove any container using the same Mariadb image, shell: \'docker\n rm -fv {{item}}\', with_items: \'{{ mariadb_containers_to_destroy.stdout_lines\n }}\'}\n - {name: Remove previous Mariadb images, shell: \'docker rmi -f {{mariadb_image_id.stdout}}\'}\n when: [mariadb_image_id.stdout != \'\']\n - {command: \'docker pull {{docker_image}}\', name: Pull latest Mariadb images}\n - {name: Retag pcmklatest to latest Mariadb image, shell: \'docker tag {{docker_image}}\n {{docker_image_latest}}\'}\n name: Mariadb fetch and retag container image for pacemaker\n when: step|int == 2\n - block:\n - name: store update level to update_level variable\n set_fact: {odl_update_level: 1}\n name: Get ODL update level\n - block:\n - {failed_when: false, name: Check if ODL container is present, register: opendaylight_api_container_present,\n shell: \'docker ps -a --format \'\'{{ \'\'{{\'\' }}.Names{{ \'\'}}\'\' }}\'\' | grep \'\'^opendaylight_api$\'\'\'}\n - {name: Update ODL container restart policy to unless-stopped, shell: docker\n update --restart=unless-stopped opendaylight_api, when: opendaylight_api_container_present.rc\n == 0}\n - docker_container: {name: opendaylight_api, state: stopped}\n name: Stop previous ODL container\n - file: {path: /var/lib/opendaylight/data/cache, state: absent}\n name: Delete cache folder\n name: Stop ODL container and remove cache\n when: [step|int == 0, odl_update_level == 1]\n - block:\n - {failed_when: false, name: Check if ODL container is present, register: opendaylight_api_container_present,\n shell: \'docker ps -a --format \'\'{{ \'\'{{\'\' }}.Names{{ \'\'}}\'\' }}\'\' | grep \'\'^opendaylight_api$\'\'\'}\n - {name: Update ODL container restart policy to unless-stopped, shell: docker\n update --restart=unless-stopped opendaylight_api, when: opendaylight_api_container_present.rc\n == 0}\n - docker_container: 
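[Editor's note] The Cinder-Volume, Haproxy, and Mariadb blocks above (and the Rabbitmq/Redis ones below) all repeat one "fetch and retag container image for pacemaker" recipe. A hedged sketch that only prints the docker commands involved; registry, service, and tag are parameters, and the helper name is made up:

```shell
# Prints the docker commands the "fetch and retag" blocks run for one image:
# pull the versioned image, then move the pcmklatest tag onto it so the
# pacemaker bundle keeps pointing at a stable image name across updates.
emit_retag_cmds() {
  registry="$1"; service="$2"; tag="$3"
  image="${registry}/${service}:${tag}"
  latest="${registry}/${service}:pcmklatest"
  echo "docker pull ${image}"
  echo "docker tag ${image} ${latest}"
}
```

For example, `emit_retag_cmds 192.168.24.1:8787/rhosp13 openstack-mariadb 2018-07-13.1` reproduces the Mariadb case from the log.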
{name: opendaylight_api, state: stopped}\n name: stop previous ODL container\n when: step|int == 0\n - file: {path: \'/var/lib/opendaylight/{{item}}\', state: absent}\n name: remove data, journal and snapshots\n when: step|int == 0\n with_items: [snapshots, journal, data]\n - copy: {content: "<config xmlns=\\"urn:opendaylight:params:xml:ns:yang:mdsalutil\\"\\\n >\\n <upgradeInProgress>true</upgradeInProgress>\\n</config>\\n", dest: /var/lib/config-data/puppet-generated/opendaylight/opt/opendaylight/etc/opendaylight/datastore/initial/config/genius-mdsalutil-config.xml,\n group: 42462, mode: 420, owner: 42462}\n name: Set ODL upgrade flag to True\n when: step|int == 1\n name: Run L2 update tasks that are similar to upgrade_tasks when update level\n is 2\n when: odl_update_level == 2\n - block:\n - iptables: chain=OUTPUT action=insert protocol=tcp destination_port={{ item\n }} jump=DROP\n name: Block connections to ODL.\n when: step|int == 0\n with_items: [6640, 6653, 6633]\n name: Run L2 update tasks that are similar to upgrade_tasks when update level\n is 2\n when: odl_update_level == 2\n - {async: 30, name: Check pacemaker cluster running before the minor update, pacemaker_cluster: state=online\n check_and_fail=true, poll: 4, when: step|int == 0}\n - {name: Stop pacemaker cluster, pacemaker_cluster: state=offline, when: step|int\n == 1}\n - {name: Start pacemaker cluster, pacemaker_cluster: state=online, when: step|int\n == 4}\n - block:\n - name: Get docker Rabbitmq image\n set_fact: {docker_image: \'192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-07-13.1\',\n docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest\'}\n - {name: Get previous Rabbitmq image id, register: rabbitmq_image_id, shell: \'docker\n images | awk \'\'/rabbitmq.* pcmklatest/{print $3}\'\' | uniq\'}\n - block:\n - {name: Get a list of container using Rabbitmq image, register: rabbitmq_containers_to_destroy,\n shell: \'docker ps -a -q -f 
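[Editor's note] In the L2 update path above, step 0 drops outbound TCP traffic to ODL's OVSDB and OpenFlow ports (6640, 6653, 6633) while the data store is wiped. A sketch of the equivalent iptables invocations, printed rather than applied; the helper name is hypothetical:

```shell
# Prints (does not apply) the iptables rules the L2 update inserts at
# step 0 to cut controller traffic to OpenDaylight, one DROP rule per
# port listed in the with_items of the task above.
emit_odl_block_rules() {
  for port in 6640 6653 6633; do
    echo "iptables -I OUTPUT -p tcp --dport ${port} -j DROP"
  done
}
```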
\'\'ancestor={{rabbitmq_image_id.stdout}}\'\'\'}\n - {name: Remove any container using the same Rabbitmq image, shell: \'docker\n rm -fv {{item}}\', with_items: \'{{ rabbitmq_containers_to_destroy.stdout_lines\n }}\'}\n - {name: Remove previous Rabbitmq images, shell: \'docker rmi -f {{rabbitmq_image_id.stdout}}\'}\n when: [rabbitmq_image_id.stdout != \'\']\n - {command: \'docker pull {{docker_image}}\', name: Pull latest Rabbitmq images}\n - {name: Retag pcmklatest to latest Rabbitmq image, shell: \'docker tag {{docker_image}}\n {{docker_image_latest}}\'}\n name: Rabbit fetch and retag container image for pacemaker\n when: step|int == 2\n - block:\n - name: Get docker Redis image\n set_fact: {docker_image: \'192.168.24.1:8787/rhosp13/openstack-redis:2018-07-13.1\',\n docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest\'}\n - {name: Get previous Redis image id, register: redis_image_id, shell: \'docker\n images | awk \'\'/redis.* pcmklatest/{print $3}\'\' | uniq\'}\n - block:\n - {name: Get a list of container using Redis image, register: redis_containers_to_destroy,\n shell: \'docker ps -a -q -f \'\'ancestor={{redis_image_id.stdout}}\'\'\'}\n - {name: Remove any container using the same Redis image, shell: \'docker rm\n -fv {{item}}\', with_items: \'{{ redis_containers_to_destroy.stdout_lines\n }}\'}\n - {name: Remove previous Redis images, shell: \'docker rmi -f {{redis_image_id.stdout}}\'}\n when: [redis_image_id.stdout != \'\']\n - {command: \'docker pull {{docker_image}}\', name: Pull latest Redis images}\n - {name: Retag pcmklatest to latest Redis image, shell: \'docker tag {{docker_image}}\n {{docker_image_latest}}\'}\n name: Redis fetch and retag container image for pacemaker\n when: step|int == 2\n - file: {path: /var/run/rsyncd.pid, state: absent}\n name: Ensure rsyncd pid file is absent\n - {name: Check for existing yum.pid, register: yum_pid_file, stat: path=/var/run/yum.pid,\n when: step|int == 0 or step|int == 3}\n - {fail: 
msg="ERROR existing yum.pid detected - can\'t continue! Please ensure\n there is no other package update process for the duration of the minor update\n workflow. Exiting.", name: Exit if existing yum process, when: (step|int ==\n 0 or step|int == 3) and yum_pid_file.stat.exists}\n - {name: Update all packages, when: step == "3", yum: name=* state=latest update_cache=yes}\n role_data_upgrade_batch_tasks: []\n role_data_upgrade_tasks:\n - {ignore_errors: true, name: Check for aodh api service running under apache,\n register: httpd_enabled, shell: httpd -t -D DUMP_VHOSTS | grep -q aodh, tags: common}\n - {command: systemctl is-active --quiet httpd, ignore_errors: true, name: Check\n if httpd is running, register: httpd_running, tags: common}\n - name: \'PreUpgrade step0,validation: Check if aodh api is running\'\n shell: systemctl status \'httpd\' | grep -q aodh\n tags: validation\n when: [step|int == 0, httpd_enabled.rc == 0, httpd_running.rc == 0]\n - name: Stop and disable aodh service (running under httpd)\n service: name=httpd state=stopped enabled=no\n when: [step|int == 2, httpd_enabled.rc == 0, httpd_running.rc == 0]\n - name: Set fact for removal of openstack-aodh-api package\n set_fact: {remove_aodh_api_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-aodh-api package if operator requests it\n when: [step|int == 2, remove_aodh_api_package|bool]\n yum: name=openstack-aodh-api state=removed\n - {command: systemctl is-enabled --quiet openstack-aodh-evaluator, ignore_errors: true,\n name: Check if aodh_evaluator is deployed, register: aodh_evaluator_enabled,\n tags: common}\n - command: systemctl is-active --quiet openstack-aodh-evaluator\n name: \'PreUpgrade step0,validation: Check service openstack-aodh-evaluator is\n running\'\n tags: validation\n when: [step|int == 0, aodh_evaluator_enabled.rc == 0]\n - name: Stop and disable openstack-aodh-evaluator service\n service: name=openstack-aodh-evaluator.service state=stopped
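[Editor's note] The yum.pid check performed at steps 0 and 3 above can be restated as a standalone guard; the pidfile argument is parameterized here purely so the sketch can be exercised outside a real host:

```shell
# Refuses to proceed when another package-management process holds the
# yum pid file, mirroring the "Exit if existing yum process" task above.
yum_pid_guard() {
  pidfile="${1:-/var/run/yum.pid}"
  if [ -e "$pidfile" ]; then
    echo "ERROR existing yum.pid detected - cannot continue" >&2
    return 1
  fi
}
```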
enabled=no\n when: [step|int == 2, aodh_evaluator_enabled.rc == 0]\n - name: Set fact for removal of openstack-aodh-evaluator package\n set_fact: {remove_aodh_evaluator_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-aodh-evaluator package if operator requests it\n when: [step|int == 2, remove_aodh_evaluator_package|bool]\n yum: name=openstack-aodh-evaluator state=removed\n - {command: systemctl is-enabled --quiet openstack-aodh-listener, ignore_errors: true,\n name: Check if aodh_listener is deployed, register: aodh_listener_enabled, tags: common}\n - command: systemctl is-active --quiet openstack-aodh-listener\n name: \'PreUpgrade step0,validation: Check service openstack-aodh-listener is\n running\'\n tags: validation\n when: [step|int == 0, aodh_listener_enabled.rc == 0]\n - name: Stop and disable openstack-aodh-listener service\n service: name=openstack-aodh-listener.service state=stopped enabled=no\n when: [step|int == 2, aodh_listener_enabled.rc == 0]\n - name: Set fact for removal of openstack-aodh-listener package\n set_fact: {remove_aodh_listener_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-aodh-listener package if operator requests it\n when: [step|int == 2, remove_aodh_listener_package|bool]\n yum: name=openstack-aodh-listener state=removed\n - {command: systemctl is-enabled --quiet openstack-aodh-notifier, ignore_errors: true,\n name: Check if aodh_notifier is deployed, register: aodh_notifier_enabled, tags: common}\n - command: systemctl is-active --quiet openstack-aodh-notifier\n name: \'PreUpgrade step0,validation: Check service openstack-aodh-notifier is\n running\'\n tags: validation\n when: [step|int == 0, aodh_notifier_enabled.rc == 0]\n - name: Stop and disable openstack-aodh-notifier service\n service: name=openstack-aodh-notifier.service state=stopped enabled=no\n when: [step|int == 2, aodh_notifier_enabled.rc == 0]\n - name: Set fact for removal of 
openstack-aodh-notifier package\n set_fact: {remove_aodh_notifier_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-aodh-notifier package if operator requests it\n when: [step|int == 2, remove_aodh_notifier_package|bool]\n yum: name=openstack-aodh-notifier state=removed\n - {command: systemctl is-enabled --quiet openstack-ceilometer-central, ignore_errors: true,\n name: Check if ceilometer_agent_central is deployed, register: ceilometer_agent_central_enabled,\n tags: common}\n - command: systemctl is-active --quiet openstack-ceilometer-central\n name: \'PreUpgrade step0,validation: Check service openstack-ceilometer-central\n is running\'\n tags: validation\n when: [step|int == 0, ceilometer_agent_central_enabled.rc == 0]\n - name: Stop and disable ceilometer agent central service\n service: name=openstack-ceilometer-central state=stopped enabled=no\n when: [step|int == 2, ceilometer_agent_central_enabled.rc == 0]\n - name: Set fact for removal of openstack-ceilometer-central package\n set_fact: {remove_ceilometer_central_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-ceilometer-central package if operator requests it\n when: [step|int == 2, remove_ceilometer_central_package|bool]\n yum: name=openstack-ceilometer-central state=removed\n - {command: systemctl is-enabled --quiet openstack-ceilometer-notification, ignore_errors: true,\n name: Check if ceilometer_agent_notification is deployed, register: ceilometer_agent_notification_enabled,\n tags: common}\n - command: systemctl is-active --quiet openstack-ceilometer-notification\n name: \'PreUpgrade step0,validation: Check service openstack-ceilometer-notification\n is running\'\n tags: validation\n when: [step|int == 0, ceilometer_agent_notification_enabled.rc == 0]\n - name: Stop and disable ceilometer agent notification service\n service: name=openstack-ceilometer-notification state=stopped enabled=no\n when: [step|int == 2, 
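[Editor's note] Every aodh/ceilometer (and, below, gnocchi) service in these upgrade tasks follows the same gate: probe with `systemctl is-enabled` at step 0, then stop and disable at step 2 only if the probe returned 0. The predicate reduces to a two-input check; the function name is illustrative:

```shell
# True (exit 0) exactly when a service found enabled at step 0 should be
# stopped and disabled: we are at step 2 and the is-enabled probe rc was 0.
should_stop_service() {
  step="$1"; enabled_rc="$2"
  [ "$step" -eq 2 ] && [ "$enabled_rc" -eq 0 ]
}
```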
ceilometer_agent_notification_enabled.rc == 0]\n - name: Set fact for removal of openstack-ceilometer-notification package\n set_fact: {remove_ceilometer_notification_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-ceilometer-notification package if operator requests\n it\n when: [step|int == 2, remove_ceilometer_notification_package|bool]\n yum: name=openstack-ceilometer-notification state=removed\n - {command: systemctl is-enabled openstack-cinder-api, ignore_errors: true, name: Check\n if cinder_api is deployed, register: cinder_api_enabled, tags: common}\n - name: \'PreUpgrade step0,validation: Check service openstack-cinder-api is running\'\n shell: systemctl is-active --quiet openstack-cinder-api\n tags: validation\n when: [step|int == 0, cinder_api_enabled.rc == 0]\n - name: Stop and disable cinder_api service (pre-upgrade not under httpd)\n service: name=openstack-cinder-api state=stopped enabled=no\n when: [step|int == 2, cinder_api_enabled.rc == 0]\n - {ignore_errors: true, name: check for cinder_api running under apache (post\n upgrade), register: cinder_api_apache, shell: httpd -t -D DUMP_VHOSTS | grep\n -q cinder, when: step|int == 2}\n - name: Stop and disable cinder_api service\n service: name=httpd state=stopped enabled=no\n when: [step|int == 2, cinder_api_apache.rc == 0]\n - file: {path: /var/spool/cron/cinder, state: absent}\n name: remove old cinder cron jobs\n when: step|int == 2\n - name: Set fact for removal of httpd package\n set_fact: {remove_httpd_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove httpd package if operator requests it\n when: [step|int == 2, remove_httpd_package|bool]\n yum: name=httpd state=removed\n - {command: systemctl is-enabled openstack-cinder-scheduler, ignore_errors: true,\n name: Check if cinder_scheduler is deployed, register: cinder_scheduler_enabled,\n tags: common}\n - name: \'PreUpgrade step0,validation: Check service
openstack-cinder-scheduler\n is running\'\n shell: systemctl is-active --quiet openstack-cinder-scheduler\n tags: validation\n when: [step|int == 0, cinder_scheduler_enabled.rc == 0]\n - name: Stop and disable cinder_scheduler service\n service: name=openstack-cinder-scheduler state=stopped enabled=no\n when: [step|int == 2, cinder_scheduler_enabled.rc == 0]\n - name: Set fact for removal of openstack-cinder package\n set_fact: {remove_cinder_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-cinder package if operator requests it\n when: [step|int == 2, remove_cinder_package|bool]\n yum: name=openstack-cinder state=removed\n - name: Get docker Cinder-Volume image\n set_fact: {cinder_volume_docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest\'}\n - {changed_when: false, command: \'grep \'\'^volume_driver[ \\t]*=\'\' /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf\',\n ignore_errors: true, name: Check for Cinder-Volume Kolla configuration, register: cinder_volume_kolla_config}\n - name: Check if Cinder-Volume is already containerized\n set_fact: {cinder_volume_containerized: \'{{cinder_volume_kolla_config|succeeded}}\'}\n - block:\n - {command: hiera -c /etc/puppet/hiera.yaml bootstrap_nodeid, name: get bootstrap\n nodeid, register: bootstrap_node, tags: common}\n - {name: set is_bootstrap_node fact, set_fact: \'is_bootstrap_node={{bootstrap_node.stdout|lower\n == ansible_hostname|lower}}\', tags: common}\n - ignore_errors: true\n name: Check cluster resource status\n pacemaker_resource: {check_mode: false, resource: openstack-cinder-volume,\n state: show}\n register: cinder_volume_res\n - block:\n - name: Disable the openstack-cinder-volume cluster resource\n pacemaker_resource: {resource: openstack-cinder-volume, state: disable,\n wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n - name: Delete the stopped openstack-cinder-volume cluster 
resource.\n pacemaker_resource: {resource: openstack-cinder-volume, state: delete, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n when: (is_bootstrap_node) and (cinder_volume_res|succeeded)\n - {name: Disable cinder_volume service from boot, service: name=openstack-cinder-volume\n enabled=no}\n name: Cinder-Volume baremetal to container upgrade tasks\n when: [step|int == 1, not cinder_volume_containerized|bool]\n - block:\n - {name: Get cinder_volume image id currently used by pacemaker, register: cinder_volume_current_pcmklatest_id,\n shell: \'docker images | awk \'\'/cinder-volume.* pcmklatest/{print $3}\'\' | uniq\'}\n - {name: Temporarily tag the current cinder_volume image id with the upgraded\n image name, shell: \'docker tag {{cinder_volume_current_pcmklatest_id.stdout}}\n {{cinder_volume_docker_image_latest}}\'}\n name: Prepare the switch to new cinder_volume container image name in pacemaker\n when: [step|int == 0, cinder_volume_containerized|bool]\n - ignore_errors: true\n name: Check openstack-cinder-volume cluster resource status\n pacemaker_resource: {check_mode: false, resource: openstack-cinder-volume, state: show}\n register: cinder_volume_pcs_res\n - block:\n - name: Disable the cinder_volume cluster resource before container upgrade\n pacemaker_resource: {resource: openstack-cinder-volume, state: disable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n - {command: \'pcs resource bundle update openstack-cinder-volume container image={{cinder_volume_docker_image_latest}}\',\n name: pcs resource bundle update cinder_volume for new container image name}\n - name: Enable the cinder_volume cluster resource\n pacemaker_resource: {resource: openstack-cinder-volume, state: enable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n when: null\n name: Update cinder_volume pcs resource bundle for new container image\n when: [step|int == 1, 
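[Editor's note] The cinder_volume bundle update above (disable the resource, point the bundle at the new image name, re-enable) is driven by the `pacemaker_resource` module plus one `pcs` call. An approximate pcs-only rendering, printed rather than executed; the exact disable/enable flags are an assumption:

```shell
# Prints an approximate pcs command sequence for swapping the container
# image behind a pacemaker bundle resource, as the tasks above do for
# openstack-cinder-volume. Flag choices and ordering are illustrative.
emit_bundle_image_update() {
  resource="$1"; image="$2"
  echo "pcs resource disable ${resource} --wait"
  echo "pcs resource bundle update ${resource} container image=${image}"
  echo "pcs resource enable ${resource} --wait"
}
```

For example, `emit_bundle_image_update openstack-cinder-volume 192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest` matches the bundle and image name in the log.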
cinder_volume_containerized|bool, is_bootstrap_node, cinder_volume_pcs_res|succeeded]\n - block:\n - name: Get docker Cinder-Volume image\n set_fact: {docker_image: \'192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-07-13.1\',\n docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest\'}\n - {name: Get previous Cinder-Volume image id, register: cinder_volume_image_id,\n shell: \'docker images | awk \'\'/cinder-volume.* pcmklatest/{print $3}\'\' | uniq\'}\n - block:\n - {name: Get a list of container using Cinder-Volume image, register: cinder_volume_containers_to_destroy,\n shell: \'docker ps -a -q -f \'\'ancestor={{cinder_volume_image_id.stdout}}\'\'\'}\n - {name: Remove any container using the same Cinder-Volume image, shell: \'docker\n rm -fv {{item}}\', with_items: \'{{ cinder_volume_containers_to_destroy.stdout_lines\n }}\'}\n - {name: Remove previous Cinder-Volume images, shell: \'docker rmi -f {{cinder_volume_image_id.stdout}}\'}\n when: [cinder_volume_image_id.stdout != \'\']\n - {command: \'docker pull {{docker_image}}\', name: Pull latest Cinder-Volume\n images}\n - {name: Retag pcmklatest to latest Cinder-Volume image, shell: \'docker tag\n {{docker_image}} {{docker_image_latest}}\'}\n name: Retag the pacemaker image if containerized\n when: [step|int == 3, cinder_volume_containerized|bool]\n - {name: Install docker packages on upgrade if missing, when: step|int == 3, yum: name=docker\n state=latest}\n - {command: systemctl is-enabled --quiet openstack-glance-api, ignore_errors: true,\n name: Check if glance_api is deployed, register: glance_api_enabled, tags: common}\n - command: systemctl is-active --quiet openstack-glance-api\n name: \'PreUpgrade step0,validation: Check service openstack-glance-api is running\'\n tags: validation\n when: [step|int == 0, glance_api_enabled.rc == 0]\n - name: Stop and disable glance_api service\n service: name=openstack-glance-api state=stopped enabled=no\n when: [step|int == 2, 
glance_api_enabled.rc == 0]\n - name: Set fact for removal of openstack-glance package\n set_fact: {remove_glance_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-glance package if operator requests it\n when: [step|int == 2, remove_glance_package|bool]\n yum: name=openstack-glance state=removed\n - {name: Stop and disable glance_registry service on upgrade, service: name=openstack-glance-registry\n state=stopped enabled=no, when: step|int == 1}\n - {command: systemctl is-enabled --quiet openstack-gnocchi-api, ignore_errors: true,\n name: Check if gnocchi_api is deployed, register: gnocchi_api_enabled, tags: common}\n - {ignore_errors: true, name: Check for gnocchi_api running under apache, register: httpd_enabled,\n shell: httpd -t -D DUMP_VHOSTS | grep -q gnocchi, tags: common}\n - command: systemctl is-active --quiet openstack-gnocchi-api\n name: \'PreUpgrade step0,validation: Check service openstack-gnocchi-api is running\'\n tags: validation\n when: [step|int == 0, gnocchi_api_enabled.rc == 0, httpd_enabled.rc != 0]\n - name: Stop and disable gnocchi_api service\n service: name=openstack-gnocchi-api state=stopped enabled=no\n when: [step|int == 2, gnocchi_api_enabled.rc == 0, httpd_enabled.rc != 0]\n - {command: systemctl is-active --quiet httpd, ignore_errors: true, name: Check\n if httpd service is running, register: httpd_running, tags: common}\n - name: \'PreUpgrade step0,validation: Check if gnocchi_api_wsgi is running\'\n shell: systemctl status \'httpd\' | grep -q gnocchi\n tags: validation\n when: [step|int == 0, httpd_enabled.rc == 0, httpd_running.rc == 0]\n - name: Stop and disable httpd service\n service: name=httpd state=stopped enabled=no\n when: [step|int == 2, httpd_enabled.rc == 0, httpd_running.rc == 0]\n - {command: systemctl is-enabled --quiet openstack-gnocchi-metricd, ignore_errors: true,\n name: Check if gnocchi_metricd is deployed, register: gnocchi_metricd_enabled,\n tags: common}\n - command: 
systemctl is-active --quiet openstack-gnocchi-metricd\n name: \'PreUpgrade step0,validation: Check service openstack-gnocchi-metricd\n is running\'\n tags: validation\n when: [step|int == 0, gnocchi_metricd_enabled.rc == 0]\n - name: Stop and disable openstack-gnocchi-metricd service\n service: name=openstack-gnocchi-metricd.service state=stopped enabled=no\n when: [step|int == 2, gnocchi_metricd_enabled.rc == 0]\n - {command: systemctl is-enabled --quiet openstack-gnocchi-statsd, ignore_errors: true,\n name: Check if gnocchi_statsd is deployed, register: gnocchi_statsd_enabled,\n tags: common}\n - command: systemctl is-active --quiet openstack-gnocchi-statsd\n name: \'PreUpgrade step0,validation: Check service openstack-gnocchi-statsd is\n running\'\n tags: validation\n when: [step|int == 0, gnocchi_statsd_enabled.rc == 0]\n - name: Stop and disable openstack-gnocchi-statsd service\n service: name=openstack-gnocchi-statsd.service state=stopped enabled=no\n when: [step|int == 2, gnocchi_statsd_enabled.rc == 0]\n - name: Get docker haproxy image\n set_fact: {haproxy_docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest\'}\n - block:\n - name: Check for haproxy Kolla configuration\n register: haproxy_kolla_config\n stat: {path: /var/lib/config-data/puppet-generated/haproxy}\n - name: Check if haproxy is already containerized\n set_fact: {haproxy_containerized: \'{{haproxy_kolla_config.stat.isdir | default(false)}}\'}\n - {command: hiera -c /etc/puppet/hiera.yaml bootstrap_nodeid, name: get bootstrap\n nodeid, register: bootstrap_node, tags: common}\n - {name: set is_bootstrap_node fact, set_fact: \'is_bootstrap_node={{bootstrap_node.stdout|lower\n == ansible_hostname|lower}}\', tags: common}\n name: Set HAProxy upgrade facts\n - block:\n - ignore_errors: true\n name: Check cluster resource status\n pacemaker_resource: {check_mode: true, resource: haproxy, state: started}\n register: haproxy_res\n - block:\n - name: Disable the haproxy cluster 
resource.\n pacemaker_resource: {resource: haproxy, state: disable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n - name: Delete the stopped haproxy cluster resource.\n pacemaker_resource: {resource: haproxy, state: delete, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n when: (is_bootstrap_node) and (haproxy_res|succeeded)\n name: haproxy baremetal to container upgrade tasks\n when: [step|int == 1, not haproxy_containerized|bool]\n - block:\n - {name: Get haproxy image id currently used by pacemaker, register: haproxy_current_pcmklatest_id,\n shell: \'docker images | awk \'\'/haproxy.* pcmklatest/{print $3}\'\' | uniq\'}\n - {name: Temporarily tag the current haproxy image id with the upgraded image\n name, shell: \'docker tag {{haproxy_current_pcmklatest_id.stdout}} {{haproxy_docker_image_latest}}\'}\n name: Prepare the switch to new haproxy container image name in pacemaker\n when: [step|int == 0, haproxy_containerized|bool]\n - ignore_errors: true\n name: Check haproxy-bundle cluster resource status\n pacemaker_resource: {check_mode: false, resource: haproxy-bundle, state: show}\n register: haproxy_pcs_res\n - block:\n - name: Disable the haproxy cluster resource before container upgrade\n pacemaker_resource: {resource: haproxy-bundle, state: disable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n - block:\n - {command: \'cibadmin --query --xpath "//storage-mapping[@id=\'\'haproxy-var-lib\'\']"\',\n ignore_errors: true, name: Check haproxy stats socket configuration in pacemaker,\n register: haproxy_stats_exposed}\n - {command: \'cibadmin --query --xpath "//storage-mapping[@id=\'\'haproxy-cert\'\']"\',\n ignore_errors: true, name: Check haproxy public certificate configuration\n in pacemaker, register: haproxy_cert_mounted}\n - {command: pcs resource bundle update haproxy-bundle storage-map add id=haproxy-var-lib\n source-dir=/var/lib/haproxy 
target-dir=/var/lib/haproxy options=rw, name: Add\n a bind mount for stats socket in the haproxy bundle, when: haproxy_stats_exposed.rc\n == 6}\n - name: Set HAProxy public cert volume mount fact\n set_fact: {haproxy_public_cert_path: /etc/pki/tls/private/overcloud_endpoint.pem,\n haproxy_public_tls_enabled: false}\n - command: pcs resource bundle update haproxy-bundle storage-map add id=haproxy-cert\n source-dir={{ haproxy_public_cert_path }} target-dir=/var/lib/kolla/config_files/src-tls/{{\n haproxy_public_cert_path }} options=ro\n name: Add a bind mount for public certificate in the haproxy bundle\n when: [haproxy_cert_mounted.rc == 6, haproxy_public_tls_enabled|bool]\n name: Expose HAProxy stats socket on the host and mount TLS cert if needed\n - {command: \'pcs resource bundle update haproxy-bundle container image={{haproxy_docker_image_latest}}\',\n name: Update the haproxy bundle to use the new container image name}\n - name: Enable the haproxy cluster resource\n pacemaker_resource: {resource: haproxy-bundle, state: enable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n name: Update haproxy pcs resource bundle for new container image\n when: [step|int == 1, haproxy_containerized|bool, is_bootstrap_node, haproxy_pcs_res|succeeded]\n - block:\n - name: Get docker Haproxy image\n set_fact: {docker_image: \'192.168.24.1:8787/rhosp13/openstack-haproxy:2018-07-13.1\',\n docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest\'}\n - {name: Get previous Haproxy image id, register: haproxy_image_id, shell: \'docker\n images | awk \'\'/haproxy.* pcmklatest/{print $3}\'\' | uniq\'}\n - block:\n - {name: Get a list of container using Haproxy image, register: haproxy_containers_to_destroy,\n shell: \'docker ps -a -q -f \'\'ancestor={{haproxy_image_id.stdout}}\'\'\'}\n - {name: Remove any container using the same Haproxy image, shell: \'docker\n rm -fv {{item}}\', with_items: \'{{ 
haproxy_containers_to_destroy.stdout_lines\n }}\'}\n - {name: Remove previous Haproxy images, shell: \'docker rmi -f {{haproxy_image_id.stdout}}\'}\n when: [haproxy_image_id.stdout != \'\']\n - {command: \'docker pull {{docker_image}}\', name: Pull latest Haproxy images}\n - {name: Retag pcmklatest to latest Haproxy image, shell: \'docker tag {{docker_image}}\n {{docker_image_latest}}\'}\n name: Retag the pacemaker image if containerized\n when: [step|int == 3, haproxy_containerized|bool]\n - {command: systemctl is-enabled --quiet openstack-heat-api, ignore_errors: true,\n name: Check if heat_api is deployed, register: heat_api_enabled, tags: common}\n - {ignore_errors: true, name: Check for heat_api running under apache, register: httpd_enabled,\n shell: httpd -t -D DUMP_VHOSTS | grep -q heat_api_wsgi, tags: common}\n - command: systemctl is-active --quiet openstack-heat-api\n name: \'PreUpgrade step0,validation: Check service openstack-heat-api is running\'\n tags: validation\n when: [step|int == 0, heat_api_enabled.rc == 0, httpd_enabled.rc != 0]\n - name: Stop and disable heat_api service (pre-upgrade not under httpd)\n service: name=openstack-heat-api state=stopped enabled=no\n when: [step|int == 2, heat_api_enabled.rc == 0, httpd_enabled.rc != 0]\n - name: \'PreUpgrade step0,validation: Check if heat_api_wsgi is running\'\n shell: systemctl status \'httpd\' | grep -q heat_api_wsgi\n tags: validation\n when: [step|int == 0, httpd_enabled.rc == 0, httpd_running.rc == 0]\n - name: Stop heat_api service (running under httpd)\n service: name=httpd state=stopped\n when: [step|int == 2, httpd_enabled.rc == 0, httpd_running.rc == 0]\n - file: {path: /var/spool/cron/heat, state: absent}\n name: remove old heat cron jobs\n when: step|int == 2\n - {command: systemctl is-enabled openstack-heat-api-cloudwatch, ignore_errors: true,\n name: Check if heat_api_cloudwatch is deployed, register: heat_api_cloudwatch_enabled,\n when: step|int == 1}\n - name: Stop and disable 
heat_api_cloudwatch service (pre-upgrade not under httpd)\n service: name=openstack-heat-api-cloudwatch state=stopped enabled=no\n when: [step|int == 1, heat_api_cloudwatch_enabled.rc == 0]\n - {command: systemctl is-enabled --quiet openstack-heat-api-cfn, ignore_errors: true,\n name: Check if heat_api_cfn is deployed, register: heat_api_cfn_enabled, tags: common}\n - {ignore_errors: true, name: Check for heat_api_cfn running under apache, register: httpd_enabled,\n shell: httpd -t -D DUMP_VHOSTS | grep -q heat_api_cfn_wsgi, tags: common}\n - command: systemctl is-active --quiet openstack-heat-api-cfn\n name: \'PreUpgrade step0,validation: Check service openstack-heat-api-cfn is\n running\'\n tags: validation\n when: [step|int == 0, heat_api_cfn_enabled.rc == 0, httpd_enabled.rc != 0]\n - name: Stop and disable heat_api_cfn service (pre-upgrade not under httpd)\n service: name=openstack-heat-api-cfn state=stopped enabled=no\n when: [step|int == 2, heat_api_cfn_enabled.rc == 0, httpd_enabled.rc != 0]\n - name: \'PreUpgrade step0,validation: Check if heat_api_cfn_wsgi is running\'\n shell: systemctl status \'httpd\' | grep -q heat_api_cfn_wsgi\n tags: validation\n when: [step|int == 0, httpd_enabled.rc == 0, httpd_running.rc == 0]\n - name: Stop heat_api_cfn service (running under httpd)\n service: name=httpd state=stopped\n when: [step|int == 2, httpd_enabled.rc == 0, httpd_running.rc == 0]\n - {command: systemctl is-enabled --quiet openstack-heat-engine, ignore_errors: true,\n name: Check if heat_engine is deployed, register: heat_engine_enabled, tags: common}\n - command: systemctl is-active --quiet openstack-heat-engine\n name: \'PreUpgrade step0,validation: Check service openstack-heat-engine is running\'\n tags: validation\n when: [step|int == 0, heat_engine_enabled.rc == 0]\n - name: Stop and disable heat_engine service\n service: name=openstack-heat-engine state=stopped enabled=no\n when: [step|int == 2, heat_engine_enabled.rc == 0]\n - {ignore_errors: true, 
name: Check for horizon running under apache, register: httpd_enabled,\n shell: httpd -t -D DUMP_VHOSTS | grep -q horizon_vhost, tags: common}\n - name: \'PreUpgrade step0,validation: Check if horizon is running\'\n shell: systemctl is-active --quiet httpd\n tags: validation\n when: [step|int == 0, httpd_enabled.rc == 0]\n - name: Stop and disable horizon service (running under httpd)\n service: name=httpd state=stopped enabled=no\n when: [step|int == 2, httpd_enabled.rc == 0]\n - {command: systemctl is-enabled --quiet iscsid, ignore_errors: true, name: Check\n if iscsid service is deployed, register: iscsid_enabled, tags: common}\n - command: systemctl is-active --quiet iscsid\n name: \'PreUpgrade step0,validation: Check if iscsid is running\'\n tags: validation\n when: [step|int == 0, iscsid_enabled.rc == 0]\n - name: Stop and disable iscsid service\n service: name=iscsid state=stopped enabled=no\n when: [step|int == 2, iscsid_enabled.rc == 0]\n - {command: systemctl is-enabled --quiet iscsid.socket, ignore_errors: true, name: Check\n if iscsid.socket service is deployed, register: iscsid_socket_enabled, tags: common}\n - command: systemctl is-active --quiet iscsid.socket\n name: \'PreUpgrade step0,validation: Check if iscsid.socket is running\'\n tags: validation\n when: [step|int == 0, iscsid_socket_enabled.rc == 0]\n - name: Stop and disable iscsid.socket service\n service: name=iscsid.socket state=stopped enabled=no\n when: [step|int == 2, iscsid_socket_enabled.rc == 0]\n - {ignore_errors: true, name: Check for keystone running under apache, register: httpd_enabled,\n shell: httpd -t -D DUMP_VHOSTS | grep -q keystone_wsgi, tags: common}\n - name: \'PreUpgrade step0,validation: Check if keystone_wsgi is running under\n httpd\'\n shell: systemctl status \'httpd\' | grep -q keystone\n tags: validation\n when: [step|int == 0, httpd_enabled.rc == 0, httpd_running.rc == 0]\n - name: Stop and disable keystone service (running under httpd)\n service: name=httpd 
state=stopped enabled=no\n when: [step|int == 2, httpd_enabled.rc == 0, httpd_running.rc == 0]\n - file: {path: /var/spool/cron/keystone, state: absent}\n name: remove old keystone cron jobs\n when: step|int == 2\n - {command: systemctl is-enabled --quiet memcached, ignore_errors: true, name: Check\n if memcached is deployed, register: memcached_enabled, tags: common}\n - command: systemctl is-active --quiet memcached\n name: \'PreUpgrade step0,validation: Check service memcached is running\'\n tags: validation\n when: [step|int == 0, memcached_enabled.rc == 0]\n - name: Stop and disable memcached service\n service: name=memcached state=stopped enabled=no\n when: [step|int == 2, memcached_enabled.rc == 0]\n - {name: Check for mongodb service, register: mongod_service, stat: path=/usr/lib/systemd/system/mongod.service,\n tags: common}\n - name: Stop and disable mongodb service on upgrade\n service: name=mongod state=stopped enabled=no\n when: [step|int == 1, mongod_service.stat.exists]\n - name: Get docker Mysql image\n set_fact: {mysql_docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest\'}\n - name: Check for Mysql Kolla configuration\n register: mysql_kolla_config\n stat: {path: /var/lib/config-data/puppet-generated/mysql}\n - name: Check if Mysql is already containerized\n set_fact: {mysql_containerized: \'{{mysql_kolla_config.stat.isdir | default(false)}}\'}\n - {command: hiera -c /etc/puppet/hiera.yaml bootstrap_nodeid, name: get bootstrap\n nodeid, register: bootstrap_node, tags: common}\n - {name: set is_bootstrap_node fact, set_fact: \'is_bootstrap_node={{bootstrap_node.stdout|lower\n == ansible_hostname|lower}}\', tags: common}\n - block:\n - ignore_errors: true\n name: Check cluster resource status\n pacemaker_resource: {check_mode: true, resource: galera, state: master}\n register: galera_res\n - block:\n - name: Disable the galera cluster resource\n pacemaker_resource: {resource: galera, state: disable, wait_for_resource: 
true}\n register: output\n retries: 5\n until: output.rc == 0\n - name: Delete the stopped galera cluster resource.\n pacemaker_resource: {resource: galera, state: delete, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n when: (is_bootstrap_node) and (galera_res|succeeded)\n - {name: Disable mysql service, service: name=mariadb enabled=no}\n - {file: state=absent path=/etc/xinetd.d/galera-monitor, name: Remove clustercheck\n service from xinetd}\n - {name: Restart xinetd service after clustercheck removal, service: name=xinetd\n state=restarted}\n name: Mysql baremetal to container upgrade tasks\n when: [step|int == 1, not mysql_containerized|bool]\n - block:\n - {name: Get galera image id currently used by pacemaker, register: galera_current_pcmklatest_id,\n shell: \'docker images | awk \'\'/mariadb.* pcmklatest/{print $3}\'\' | uniq\'}\n - {name: Temporarily tag the current galera image id with the upgraded image\n name, shell: \'docker tag {{galera_current_pcmklatest_id.stdout}} {{mysql_docker_image_latest}}\'}\n name: Prepare the switch to new galera container image name in pacemaker\n when: [step|int == 0, mysql_containerized|bool]\n - ignore_errors: true\n name: Check galera cluster resource status\n pacemaker_resource: {check_mode: false, resource: galera, state: show}\n register: galera_pcs_res\n - block:\n - name: Disable the galera cluster resource before container upgrade\n pacemaker_resource: {resource: galera, state: disable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n - block:\n - {command: \'cibadmin --query --xpath "//storage-mapping[@id=\'\'mysql-log\'\']"\',\n ignore_errors: true, name: Check Mysql logging configuration in pacemaker,\n register: mysql_logs_moved}\n - block:\n - {command: pcs resource bundle update galera-bundle storage-map add id=mysql-log\n source-dir=/var/log/containers/mysql target-dir=/var/log/mysql options=rw,\n name: Add a bind mount for logging in the 
galera bundle}\n - {command: pcs resource update galera log=/var/log/mysql/mysqld.log, name: Reconfigure\n Mysql log file in the galera resource agent}\n name: Change Mysql logging configuration in pacemaker\n when: mysql_logs_moved.rc == 6\n name: Move Mysql logging to /var/log/containers\n - {command: \'pcs resource bundle update galera-bundle container image={{mysql_docker_image_latest}}\',\n name: Update the galera bundle to use the new container image name}\n - name: Enable the galera cluster resource\n pacemaker_resource: {resource: galera, state: enable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n name: Update galera pcs resource bundle for new container image\n when: [step|int == 1, mysql_containerized|bool, is_bootstrap_node, galera_pcs_res|succeeded]\n - block:\n - name: Get docker Mariadb image\n set_fact: {docker_image: \'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\',\n docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest\'}\n - {name: Get previous Mariadb image id, register: mariadb_image_id, shell: \'docker\n images | awk \'\'/mariadb.* pcmklatest/{print $3}\'\' | uniq\'}\n - block:\n - {name: Get a list of container using Mariadb image, register: mariadb_containers_to_destroy,\n shell: \'docker ps -a -q -f \'\'ancestor={{mariadb_image_id.stdout}}\'\'\'}\n - {name: Remove any container using the same Mariadb image, shell: \'docker\n rm -fv {{item}}\', with_items: \'{{ mariadb_containers_to_destroy.stdout_lines\n }}\'}\n - {name: Remove previous Mariadb images, shell: \'docker rmi -f {{mariadb_image_id.stdout}}\'}\n when: [mariadb_image_id.stdout != \'\']\n - {command: \'docker pull {{docker_image}}\', name: Pull latest Mariadb images}\n - {name: Retag pcmklatest to latest Mariadb image, shell: \'docker tag {{docker_image}}\n {{docker_image_latest}}\'}\n name: Retag the pacemaker image if containerized\n when: [step|int == 3, mysql_containerized|bool]\n - block:\n - {name: 
Update host mariadb packages, when: step|int == 3, yum: name=mariadb-server-galera\n state=latest}\n - name: Mysql upgrade script\n set_fact: {mysql_upgrade_script: \'{% if mysql_containerized %}kolla_set_configs;\n {% endif %} chown -R mysql:mysql /var/lib/mysql; mysqld_safe --user=mysql\n --wsrep-provider=none --skip-networking --wsrep-on=off & timeout 60 sh\n -c \'\'while ! mysqladmin ping --silent; do sleep 1; done\'\'; mysql_upgrade;\n mysqladmin shutdown\'}\n - name: Bind mounts for temporary container\n set_fact:\n mysql_upgrade_db_bind_mounts: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json\',\n \'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro\',\n \'/var/lib/mysql:/var/lib/mysql\']\n - {name: Upgrade Mysql database from a temporary container, shell: \'/usr/bin/docker\n run --rm --log-driver=syslog -u root --net=host -e "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"\n -v {{ mysql_upgrade_db_bind_mounts | union([\'\'/tmp/mariadb-upgrade:/var/log/mariadb:rw\'\'])\n | join(\'\' -v \'\')}} "192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest"\n /bin/bash -ecx "{{mysql_upgrade_script}}"\', when: mysql_containerized|bool}\n - {name: Upgrade Mysql database from the host, shell: \'/bin/bash -ecx "{{mysql_upgrade_script}}"\',\n when: not mysql_containerized|bool}\n name: Check and upgrade Mysql database after major version upgrade\n when: step|int == 3\n - {command: systemctl is-enabled --quiet neutron-server, ignore_errors: true,\n name: Check if 
neutron_server is deployed, register: neutron_server_enabled,\n tags: common}\n - command: systemctl is-active --quiet neutron-server\n name: \'PreUpgrade step0,validation: Check service neutron-server is running\'\n tags: validation\n when: [step|int == 0, neutron_server_enabled.rc == 0]\n - name: Stop and disable neutron_api service\n service: name=neutron-server state=stopped enabled=no\n when: [step|int == 2, neutron_server_enabled.rc == 0]\n - name: Set fact for removal of openstack-neutron package\n set_fact: {remove_neutron_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-neutron package if operator requests it\n when: [step|int == 2, remove_neutron_package|bool]\n yum: name=openstack-neutron state=removed\n - {command: systemctl is-enabled --quiet neutron-dhcp-agent, ignore_errors: true,\n name: Check if neutron_dhcp_agent is deployed, register: neutron_dhcp_agent_enabled,\n tags: common}\n - command: systemctl is-active --quiet neutron-dhcp-agent\n name: \'PreUpgrade step0,validation: Check service neutron-dhcp-agent is running\'\n tags: validation\n when: [step|int == 0, neutron_dhcp_agent_enabled.rc == 0]\n - name: Stop and disable neutron_dhcp service\n service: name=neutron-dhcp-agent state=stopped enabled=no\n when: [step|int == 2, neutron_dhcp_agent_enabled.rc == 0]\n - {command: systemctl is-enabled --quiet neutron-metadata-agent, ignore_errors: true,\n name: Check if neutron_metadata_agent is deployed, register: neutron_metadata_agent_enabled,\n tags: common}\n - command: systemctl is-active --quiet neutron-metadata-agent\n name: \'PreUpgrade step0,validation: Check service neutron-metadata-agent is\n running\'\n tags: validation\n when: [step|int == 0, neutron_metadata_agent_enabled.rc == 0]\n - name: Stop and disable neutron_metadata service\n service: name=neutron-metadata-agent state=stopped enabled=no\n when: [step|int == 2, neutron_metadata_agent_enabled.rc == 0]\n - {command: systemctl is-enabled --quiet 
openstack-nova-api, ignore_errors: true,\n name: Check if nova_api is deployed, register: nova_api_enabled, tags: common}\n - {ignore_errors: true, name: Check for nova-api running under apache, register: httpd_enabled,\n shell: httpd -t -D DUMP_VHOSTS | grep -q \'nova\', tags: common}\n - command: systemctl is-active --quiet openstack-nova-api\n name: \'PreUpgrade step0,validation: Check service openstack-nova-api is running\'\n tags: validation\n when: [step|int == 0, nova_api_enabled.rc == 0, httpd_enabled.rc != 0]\n - name: Stop and disable nova_api service\n service: name=openstack-nova-api state=stopped enabled=no\n when: [step|int == 2, nova_api_enabled.rc == 0, httpd_enabled.rc != 0]\n - name: \'PreUpgrade step0,validation: Check if nova_wsgi is running\'\n shell: systemctl status \'httpd\' | grep -q \'nova\'\n tags: validation\n when: [step|int == 0, httpd_enabled.rc == 0, httpd_running.rc == 0]\n - name: Stop nova_api service (running under httpd)\n service: name=httpd state=stopped\n when: [step|int == 2, httpd_enabled.rc == 0, httpd_running.rc == 0]\n - name: Set fact for removal of openstack-nova-api package\n set_fact: {remove_nova_api_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-nova-api package if operator requests it\n when: [step|int == 2, remove_nova_api_package|bool]\n yum: name=openstack-nova-api state=removed\n - file: {path: /var/spool/cron/nova, state: absent}\n name: remove old nova cron jobs\n when: step|int == 2\n - {command: systemctl is-enabled --quiet openstack-nova-conductor, ignore_errors: true,\n name: Check if nova_conductor is deployed, register: nova_conductor_enabled,\n tags: common}\n - {ini_file: dest=/etc/nova/nova.conf section=upgrade_levels option=compute value=,\n name: Set compute upgrade level to auto, when: step|int == 1}\n - command: systemctl is-active --quiet openstack-nova-conductor\n name: \'PreUpgrade step0,validation: Check service openstack-nova-conductor is\n 
running\'\n tags: validation\n when: [step|int == 0, nova_conductor_enabled.rc == 0]\n - name: Stop and disable nova_conductor service\n service: name=openstack-nova-conductor state=stopped enabled=no\n when: [step|int == 2, nova_conductor_enabled.rc == 0]\n - name: Set fact for removal of openstack-nova-conductor package\n set_fact: {remove_nova_conductor_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-nova-conductor package if operator requests it\n when: [step|int == 2, remove_nova_conductor_package|bool]\n yum: name=openstack-nova-conductor state=removed\n - {command: systemctl is-enabled --quiet openstack-nova-consoleauth, ignore_errors: true,\n name: Check if nova_consoleauth is deployed, register: nova_consoleauth_enabled,\n tags: common}\n - command: systemctl is-active --quiet openstack-nova-consoleauth\n name: \'PreUpgrade step0,validation: Check service openstack-nova-consoleauth\n is running\'\n tags: validation\n when: [step|int == 0, nova_consoleauth_enabled.rc == 0]\n - name: Stop and disable nova_consoleauth service\n service: name=openstack-nova-consoleauth state=stopped enabled=no\n when: [step|int == 2, nova_consoleauth_enabled.rc == 0]\n - name: Set fact for removal of openstack-nova-console package\n set_fact: {remove_nova_console_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-nova-console package if operator requests it\n when: [step|int == 2, remove_nova_console_package|bool]\n yum: name=openstack-nova-console state=removed\n - {command: systemctl is-enabled --quiet openstack-nova-api, ignore_errors: true,\n name: Check if nova_api_metadata is deployed, register: nova_metadata_enabled,\n tags: common}\n - command: systemctl is-active --quiet openstack-nova-api\n name: \'PreUpgrade step0,validation: Check service openstack-nova-api is running\'\n tags: validation\n when: [step|int == 0, nova_metadata_enabled.rc == 0]\n - name: Stop and disable nova_api service\n 
service: name=openstack-nova-api state=stopped enabled=no\n when: [step|int == 2, nova_metadata_enabled.rc == 0]\n - {ignore_errors: true, name: Check for nova placement running under apache, register: httpd_enabled,\n shell: httpd -t -D DUMP_VHOSTS | grep -q placement_wsgi, tags: common}\n - name: \'PreUpgrade step0,validation: Check if placement_wsgi is running\'\n shell: systemctl status \'httpd\' | grep -q placement_wsgi\n tags: validation\n when: [step|int == 0, httpd_enabled.rc == 0, httpd_running.rc == 0]\n - name: Stop and disable nova_placement service (running under httpd)\n service: name=httpd state=stopped enabled=no\n when: [step|int == 2, httpd_enabled.rc == 0, httpd_running.rc == 0]\n - {command: systemctl is-enabled --quiet openstack-nova-scheduler, ignore_errors: true,\n name: Check if nova_scheduler is deployed, register: nova_scheduler_enabled,\n tags: common}\n - command: systemctl is-active --quiet openstack-nova-scheduler\n name: \'PreUpgrade step0,validation: Check service openstack-nova-scheduler is\n running\'\n tags: validation\n when: [step|int == 0, nova_scheduler_enabled.rc == 0]\n - name: Stop and disable nova_scheduler service\n service: name=openstack-nova-scheduler state=stopped enabled=no\n when: [step|int == 2, nova_scheduler_enabled.rc == 0]\n - name: Set fact for removal of openstack-nova-scheduler package\n set_fact: {remove_nova_scheduler_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-nova-scheduler package if operator requests it\n when: [step|int == 2, remove_nova_scheduler_package|bool]\n yum: name=openstack-nova-scheduler state=removed\n - {command: systemctl is-enabled --quiet openstack-nova-novncproxy, ignore_errors: true,\n name: Check if nova vncproxy is deployed, register: nova_vncproxy_enabled, tags: common}\n - command: systemctl is-active --quiet openstack-nova-novncproxy\n name: \'PreUpgrade step0,validation: Check service openstack-nova-novncproxy\n is running\'\n tags: 
validation\n when: [step|int == 0, nova_vncproxy_enabled.rc == 0]\n - name: Stop and disable nova_vnc_proxy service\n service: name=openstack-nova-novncproxy state=stopped enabled=no\n when: [step|int == 2, nova_vncproxy_enabled.rc == 0]\n - name: Set fact for removal of openstack-nova-novncproxy package\n set_fact: {remove_nova_novncproxy_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-nova-novncproxy package if operator requests it\n when: [step|int == 2, remove_nova_novncproxy_package|bool]\n yum: name=openstack-nova-novncproxy state=removed\n - {command: systemctl is-enabled --quiet opendaylight, ignore_errors: true, name: Check\n if opendaylight is deployed, register: opendaylight_enabled, tags: common}\n - command: systemctl is-active --quiet opendaylight\n name: \'PreUpgrade step0,validation: Check service opendaylight is running\'\n tags: validation\n when: [step|int == 0, opendaylight_enabled.rc == 0]\n - name: Stop and disable opendaylight_api service\n service: name=opendaylight state=stopped enabled=no\n when: [step|int == 2, opendaylight_enabled.rc == 0]\n - block:\n - {failed_when: false, name: Check if ODL container is present, register: opendaylight_api_container_present,\n shell: \'docker ps -a --format \'\'{{ \'\'{{\'\' }}.Names{{ \'\'}}\'\' }}\'\' | grep \'\'^opendaylight_api$\'\'\'}\n - {name: Update ODL container restart policy to unless-stopped, shell: docker\n update --restart=unless-stopped opendaylight_api, when: opendaylight_api_container_present.rc\n == 0}\n - docker_container: {name: opendaylight_api, state: stopped}\n name: stop previous ODL container\n when: step|int == 0\n - file: {path: \'/var/lib/opendaylight/{{item}}\', state: absent}\n name: remove data, journal and snapshots\n when: step|int == 0\n with_items: [snapshots, journal, data]\n - copy: {content: "<config xmlns=\\"urn:opendaylight:params:xml:ns:yang:mdsalutil\\"\\\n >\\n <upgradeInProgress>true</upgradeInProgress>\\n</config>\\n", 
dest: /var/lib/config-data/puppet-generated/opendaylight/opt/opendaylight/etc/opendaylight/datastore/initial/config/genius-mdsalutil-config.xml,\n group: 42462, mode: 420, owner: 42462}\n name: Set ODL upgrade flag to True\n when: step|int == 1\n name: ODL container L2 update and upgrade tasks\n - {ignore_errors: true, name: Check openvswitch version., register: ovs_version,\n shell: \'rpm -qa | awk -F- \'\'/^openvswitch-2/{print $2 "-" $3}\'\'\', when: step|int\n == 2}\n - {ignore_errors: true, name: Check openvswitch packaging., register: ovs_packaging_issue,\n shell: \'rpm -q --scripts openvswitch | awk \'\'/postuninstall/,/*/\'\' | grep -q\n "systemctl.*try-restart"\', when: step|int == 2}\n - block:\n - file: {path: /root/OVS_UPGRADE, state: absent}\n name: \'Ensure empty directory: emptying.\'\n - file: {group: root, mode: 488, owner: root, path: /root/OVS_UPGRADE, state: directory}\n name: \'Ensure empty directory: creating.\'\n - {command: yum makecache, name: Make yum cache.}\n - {command: yumdownloader --destdir /root/OVS_UPGRADE --resolve openvswitch,\n name: Download OVS packages.}\n - {name: Get rpm list for manual upgrade of OVS., register: ovs_list_of_rpms,\n shell: ls -1 /root/OVS_UPGRADE/*.rpm}\n - args: {chdir: /root/OVS_UPGRADE}\n name: Manual upgrade of OVS\n shell: \'rpm -U --test {{item}} 2>&1 | grep "already installed" || \\\n\n rpm -U --replacepkgs --notriggerun --nopostun {{item}};\n\n \'\n with_items: [\'{{ovs_list_of_rpms.stdout_lines}}\']\n when: [step|int == 2, \'\'\'2.5.0-14\'\' in ovs_version.stdout|default(\'\'\'\') or ovs_packaging_issue|default(false)|succeeded\']\n - {command: systemctl is-enabled openvswitch, ignore_errors: true, name: Check\n if openvswitch is deployed, register: openvswitch_enabled, tags: common}\n - command: systemctl is-active --quiet openvswitch\n name: \'PreUpgrade step0,validation: Check service openvswitch is running\'\n tags: validation\n when: [step|int == 0, openvswitch_enabled.rc == 0]\n - name: Stop 
openvswitch service\n service: name=openvswitch state=stopped\n when: [step|int == 1, openvswitch_enabled.rc == 0]\n - block:\n - iptables: chain=OUTPUT action=insert protocol=tcp destination_port={{ item\n }} jump=DROP\n name: Block connections to ODL.\n when: step|int == 0\n with_items: [6640, 6653, 6633]\n name: ODL container L2 update and upgrade tasks\n - {async: 30, name: Check pacemaker cluster running before upgrade, pacemaker_cluster: state=online\n check_and_fail=true, poll: 4, tags: validation, when: step|int == 0}\n - {name: Stop pacemaker cluster, pacemaker_cluster: state=offline, when: step|int\n == 2}\n - {name: Start pacemaker cluster, pacemaker_cluster: state=online, when: step|int\n == 4}\n - name: Get docker Rabbitmq image\n set_fact: {rabbitmq_docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest\'}\n - name: Check for Rabbitmq Kolla configuration\n register: rabbit_kolla_config\n stat: {path: /var/lib/config-data/puppet-generated/rabbitmq}\n - name: Check if Rabbitmq is already containerized\n set_fact: {rabbit_containerized: \'{{rabbit_kolla_config.stat.isdir | default(false)}}\'}\n - {command: hiera -c /etc/puppet/hiera.yaml bootstrap_nodeid, name: get bootstrap\n nodeid, register: bootstrap_node}\n - {name: set is_bootstrap_node fact, set_fact: \'is_bootstrap_node={{bootstrap_node.stdout|lower\n == ansible_hostname|lower}}\'}\n - block:\n - ignore_errors: true\n name: Check cluster resource status of rabbitmq\n pacemaker_resource: {check_mode: false, resource: rabbitmq, state: show}\n register: rabbitmq_res\n - block:\n - name: Disable the rabbitmq cluster resource.\n pacemaker_resource: {resource: rabbitmq, state: disable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n - name: Delete the stopped rabbitmq cluster resource.\n pacemaker_resource: {resource: rabbitmq, state: delete, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n when: 
(is_bootstrap_node) and (rabbitmq_res|succeeded)\n - {name: Disable rabbitmq service, service: name=rabbitmq-server enabled=no}\n name: Rabbitmq baremetal to container upgrade tasks\n when: [step|int == 1, not rabbit_containerized|bool]\n - block:\n - {name: Get rabbitmq image id currently used by pacemaker, register: rabbitmq_current_pcmklatest_id,\n shell: \'docker images | awk \'\'/rabbitmq.* pcmklatest/{print $3}\'\' | uniq\'}\n - {name: Temporarily tag the current rabbitmq image id with the upgraded image\n name, shell: \'docker tag {{rabbitmq_current_pcmklatest_id.stdout}} {{rabbitmq_docker_image_latest}}\'}\n name: Prepare the switch to new rabbitmq container image name in pacemaker\n when: [step|int == 0, rabbit_containerized|bool]\n - ignore_errors: true\n name: Check rabbitmq-bundle cluster resource status\n pacemaker_resource: {check_mode: false, resource: rabbitmq-bundle, state: show}\n register: rabbit_pcs_res\n - block:\n - name: Disable the rabbitmq cluster resource before container upgrade\n pacemaker_resource: {resource: rabbitmq-bundle, state: disable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n - block:\n - {command: \'cibadmin --query --xpath "//storage-mapping[@id=\'\'rabbitmq-log\'\']"\',\n ignore_errors: true, name: Check rabbitmq logging configuration in pacemaker,\n register: rabbitmq_logs_moved}\n - {command: pcs resource bundle update rabbitmq-bundle storage-map add id=rabbitmq-log\n source-dir=/var/log/containers/rabbitmq target-dir=/var/log/rabbitmq options=rw,\n name: Add a bind mount for logging in the rabbitmq bundle, when: rabbitmq_logs_moved.rc\n == 6}\n name: Move rabbitmq logging to /var/log/containers\n - {command: \'pcs resource bundle update rabbitmq-bundle container image={{rabbitmq_docker_image_latest}}\',\n name: Update the rabbitmq bundle to use the new container image name}\n - name: Enable the rabbitmq cluster resource\n pacemaker_resource: {resource: rabbitmq-bundle, state: enable, 
wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n name: Update rabbitmq-bundle pcs resource bundle for new container image\n when: [step|int == 1, rabbit_containerized|bool, is_bootstrap_node, rabbit_pcs_res|succeeded]\n - block:\n - name: Get docker Rabbitmq image\n set_fact: {docker_image: \'192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-07-13.1\',\n docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest\'}\n - {name: Get previous Rabbitmq image id, register: rabbitmq_image_id, shell: \'docker\n images | awk \'\'/rabbitmq.* pcmklatest/{print $3}\'\' | uniq\'}\n - block:\n - {name: Get a list of container using Rabbitmq image, register: rabbitmq_containers_to_destroy,\n shell: \'docker ps -a -q -f \'\'ancestor={{rabbitmq_image_id.stdout}}\'\'\'}\n - {name: Remove any container using the same Rabbitmq image, shell: \'docker\n rm -fv {{item}}\', with_items: \'{{ rabbitmq_containers_to_destroy.stdout_lines\n }}\'}\n - {name: Remove previous Rabbitmq images, shell: \'docker rmi -f {{rabbitmq_image_id.stdout}}\'}\n when: [rabbitmq_image_id.stdout != \'\']\n - {command: \'docker pull {{docker_image}}\', name: Pull latest Rabbitmq images}\n - {name: Retag pcmklatest to latest Rabbitmq image, shell: \'docker tag {{docker_image}}\n {{docker_image_latest}}\'}\n name: Retag the pacemaker image if containerized\n when: [step|int == 3, rabbit_containerized|bool]\n - name: Get docker redis image\n set_fact: {redis_docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest\'}\n - name: Check for redis Kolla configuration\n register: redis_kolla_config\n stat: {path: /var/lib/config-data/puppet-generated/redis}\n - name: Check if redis is already containerized\n set_fact: {redis_containerized: \'{{redis_kolla_config.stat.isdir | default(false)}}\'}\n - block:\n - ignore_errors: true\n name: Check cluster resource status of redis\n pacemaker_resource: {check_mode: false, resource: redis, state: show}\n 
register: redis_res\n - block:\n - name: Disable the redis cluster resource\n pacemaker_resource: {resource: redis, state: disable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n - name: Delete the stopped redis cluster resource.\n pacemaker_resource: {resource: redis, state: delete, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n when: (is_bootstrap_node) and (redis_res|succeeded)\n - {name: Disable redis service, service: name=redis enabled=no}\n name: redis baremetal to container upgrade tasks\n when: [step|int == 1, not redis_containerized|bool]\n - block:\n - {name: Get redis image id currently used by pacemaker, register: redis_current_pcmklatest_id,\n shell: \'docker images | awk \'\'/redis.* pcmklatest/{print $3}\'\' | uniq\'}\n - {name: Temporarily tag the current redis image id with the upgraded image\n name, shell: \'docker tag {{redis_current_pcmklatest_id.stdout}} {{redis_docker_image_latest}}\'}\n name: Prepare the switch to new redis container image name in pacemaker\n when: [step|int == 0, redis_containerized|bool]\n - ignore_errors: true\n name: Check redis-bundle cluster resource status\n pacemaker_resource: {check_mode: false, resource: redis-bundle, state: show}\n register: redis_pcs_res\n - block:\n - name: Disable the redis cluster resource before container upgrade\n pacemaker_resource: {resource: redis-bundle, state: disable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n - block:\n - {command: \'cibadmin --query --xpath "//storage-mapping[@id=\'\'redis-log\'\'\n and @source-dir=\'\'/var/log/containers/redis\'\']"\', ignore_errors: true,\n name: Check redis logging configuration in pacemaker, register: redis_logs_moved}\n - block:\n - {command: pcs resource bundle update redis-bundle storage-map remove redis-log,\n name: Remove old bind mount for logging in the redis bundle}\n - {command: pcs resource bundle update redis-bundle 
storage-map add id=redis-log\n source-dir=/var/log/containers/redis target-dir=/var/log/redis options=rw,\n name: Add a bind mount for logging in the redis bundle}\n name: Change redis logging configuration in pacemaker\n when: redis_logs_moved.rc == 6\n name: Move redis logging to /var/log/containers\n - {command: \'pcs resource bundle update redis-bundle container image={{redis_docker_image_latest}}\',\n name: Update the redis bundle to use the new container image name}\n - name: Enable the redis cluster resource\n pacemaker_resource: {resource: redis-bundle, state: enable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n name: Update redis-bundle pcs resource bundle for new container image\n when: [step|int == 1, redis_containerized|bool, is_bootstrap_node, redis_pcs_res|succeeded]\n - block:\n - name: Get docker Redis image\n set_fact: {docker_image: \'192.168.24.1:8787/rhosp13/openstack-redis:2018-07-13.1\',\n docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest\'}\n - {name: Get previous Redis image id, register: redis_image_id, shell: \'docker\n images | awk \'\'/redis.* pcmklatest/{print $3}\'\' | uniq\'}\n - block:\n - {name: Get a list of container using Redis image, register: redis_containers_to_destroy,\n shell: \'docker ps -a -q -f \'\'ancestor={{redis_image_id.stdout}}\'\'\'}\n - {name: Remove any container using the same Redis image, shell: \'docker rm\n -fv {{item}}\', with_items: \'{{ redis_containers_to_destroy.stdout_lines\n }}\'}\n - {name: Remove previous Redis images, shell: \'docker rmi -f {{redis_image_id.stdout}}\'}\n when: [redis_image_id.stdout != \'\']\n - {command: \'docker pull {{docker_image}}\', name: Pull latest Redis images}\n - {name: Retag pcmklatest to latest Redis image, shell: \'docker tag {{docker_image}}\n {{docker_image_latest}}\'}\n name: Retag the pacemaker image if containerized\n when: [step|int == 3, redis_containerized|bool]\n - {name: Stop snmp service, 
service: name=snmpd state=stopped, when: step|int\n == 1}\n - command: systemctl is-enabled --quiet "{{ item }}"\n ignore_errors: true\n name: Check if swift-proxy or swift-object-expirer are deployed\n register: swift_proxy_services_enabled\n tags: common\n with_items: [openstack-swift-proxy, openstack-swift-object-expirer]\n - command: systemctl is-active --quiet "{{ item.item }}"\n name: \'PreUpgrade step0,validation: Check service openstack-swift-proxy and\n openstack-swift-object-expirer are running\'\n tags: validation\n when: [step|int == 0, item.rc == 0]\n with_items: \'{{ swift_proxy_services_enabled.results }}\'\n - name: Stop and disable swift-proxy and swift-object-expirer services\n service: name={{ item.item }} state=stopped enabled=no\n when: [step|int == 2, item.rc == 0]\n with_items: \'{{ swift_proxy_services_enabled.results }}\'\n - name: Set fact for removal of openstack-swift-proxy package\n set_fact: {remove_swift_proxy_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-swift-proxy package if operator requests it\n when: [step|int == 2, remove_swift_proxy_package|bool]\n yum: name=openstack-swift-proxy state=removed\n - command: systemctl is-enabled --quiet "{{ item }}"\n ignore_errors: true\n name: Check if swift storage services are deployed\n register: swift_services_enabled\n tags: common\n with_items: [openstack-swift-account-auditor, openstack-swift-account-reaper,\n openstack-swift-account-replicator, openstack-swift-account, openstack-swift-container-auditor,\n openstack-swift-container-replicator, openstack-swift-container-updater, openstack-swift-container,\n openstack-swift-object-auditor, openstack-swift-object-replicator, openstack-swift-object-updater,\n openstack-swift-object]\n - command: systemctl is-active --quiet "{{ item.item }}"\n name: \'PreUpgrade step0,validation: Check swift storage services are running\'\n tags: validation\n when: [step|int == 0, item.rc == 0]\n with_items: \'{{ 
swift_services_enabled.results }}\'\n - name: Stop and disable swift storage services\n service: name={{ item.item }} state=stopped enabled=no\n when: [step|int == 2, item.rc == 0]\n with_items: \'{{ swift_services_enabled.results }}\'\n - name: Set fact for removal of openstack-swift-container,object,account package\n set_fact: {remove_swift_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-swift-container,object,account packages if operator requests\n it\n when: [step|int == 2, remove_swift_package|bool]\n with_items: [openstack-swift-container, openstack-swift-object, openstack-swift-account]\n yum: name={{ item }} state=removed\n - {file: state=absent path=/etc/xinetd.d/rsync, name: Remove rsync service from\n xinetd, register: rsync_service_removed, when: step|int == 2}\n - name: Restart xinetd service after rsync removal\n service: name=xinetd state=restarted\n when: [step|int == 2, rsync_service_removed|changed]\n - args: {creates: /etc/sysconfig/ip6tables.n-o-upgrade}\n name: blank ipv6 rule before activating ipv6 firewall.\n shell: cat /etc/sysconfig/ip6tables > /etc/sysconfig/ip6tables.n-o-upgrade;\n cat</dev/null>/etc/sysconfig/ip6tables\n when: step|int == 3\n - {name: Check yum for rpm-python present, register: rpm_python_check, when: step|int\n == 0, yum: name=rpm-python state=present}\n - fail: msg="rpm-python package was not present before this run! 
Check environment\n before re-running"\n name: Fail when rpm-python wasn\'t present\n when: [step|int == 0, rpm_python_check.changed != false]\n - {name: Check for os-net-config upgrade, register: os_net_config_need_upgrade,\n shell: \'yum check-upgrade | awk \'\'/os-net-config/{print}\'\'\', when: step|int\n == 3}\n - {ignore_errors: true, name: Check that os-net-config has configuration, register: os_net_config_has_config,\n shell: test -s /etc/os-net-config/config.json, when: step|int == 3}\n - block:\n - {name: Upgrade os-net-config, yum: name=os-net-config state=latest}\n - {changed_when: os_net_config_upgrade.rc == 2, command: os-net-config --no-activate\n -c /etc/os-net-config/config.json -v --detailed-exit-codes, failed_when: \'os_net_config_upgrade.rc\n not in [0,2]\', name: take new os-net-config parameters into account now,\n register: os_net_config_upgrade}\n when: [step|int == 3, os_net_config_need_upgrade.stdout, os_net_config_has_config.rc\n == 0]\n - {name: Update all packages, when: step|int == 3, yum: name=* state=latest}\n role_data_workflow_tasks: {}\n role_name: Controller\ncompute-0:\n hosts:\n 192.168.24.17: {}\n vars:\n ctlplane_ip: 192.168.24.17\n deploy_server_id: ec01cdea-81e5-4680-8df8-788f4f3d3d28\n enabled_networks: [management, storage, ctlplane, external, internal_api, storage_mgmt,\n tenant]\n external_ip: 192.168.24.17\n internal_api_ip: 172.17.1.22\n management_ip: 192.168.24.17\n storage_ip: 172.17.3.16\n storage_mgmt_ip: 192.168.24.17\n tenant_ip: 172.17.2.11\ncompute-1:\n hosts:\n 192.168.24.12: {}\n vars:\n ctlplane_ip: 192.168.24.12\n deploy_server_id: 61d7f438-c2d0-495e-bf7a-56900b927446\n enabled_networks: [management, storage, ctlplane, external, internal_api, storage_mgmt,\n tenant]\n external_ip: 192.168.24.12\n internal_api_ip: 172.17.1.21\n management_ip: 192.168.24.12\n storage_ip: 172.17.3.14\n storage_mgmt_ip: 192.168.24.12\n tenant_ip: 172.17.2.17\nCompute:\n children:\n compute-0: {}\n compute-1: {}\n vars:\n 
ansible_ssh_user: heat-admin\n bootstrap_server_id: 0d25b3fa-5154-47be-9ced-05bdd8d3ca43\n role_data_cellv2_discovery: true\n role_data_config_settings: {}\n role_data_deploy_steps_tasks: []\n role_data_docker_config:\n step_3:\n iscsid:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-iscsid:2018-07-13.1\n net: host\n privileged: true\n restart: always\n start_order: 2\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/dev/:/dev/\', \'/run/:/run/\', \'/sys:/sys\', \'/lib/modules:/lib/modules:ro\',\n \'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro\']\n nova_libvirt:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-nova-libvirt:2018-07-13.1\n net: host\n pid: host\n privileged: true\n restart: always\n start_order: 1\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro\',\n 
\'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro\',\n \'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro\', \'/lib/modules:/lib/modules:ro\',\n \'/dev:/dev\', \'/run:/run\', \'/sys/fs/cgroup:/sys/fs/cgroup\', \'/var/lib/nova:/var/lib/nova:shared\',\n \'/etc/libvirt:/etc/libvirt\', \'/var/run/libvirt:/var/run/libvirt\', \'/var/lib/libvirt:/var/lib/libvirt\',\n \'/var/log/containers/libvirt:/var/log/libvirt\', \'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro\',\n \'/var/lib/vhost_sockets:/var/lib/vhost_sockets\', \'/sys/fs/selinux:/sys/fs/selinux\']\n nova_virtlogd:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-nova-libvirt:2018-07-13.1\n net: host\n pid: host\n privileged: true\n restart: always\n start_order: 0\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro\',\n \'/lib/modules:/lib/modules:ro\', \'/dev:/dev\', \'/run:/run\', \'/sys/fs/cgroup:/sys/fs/cgroup\',\n \'/var/lib/nova:/var/lib/nova:shared\', \'/var/run/libvirt:/var/run/libvirt\',\n \'/var/lib/libvirt:/var/lib/libvirt\', \'/etc/libvirt/qemu:/etc/libvirt/qemu:ro\',\n \'/var/log/libvirt/qemu:/var/log/libvirt/qemu\']\n step_4:\n ceilometer_agent_compute:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-ceilometer-compute:2018-07-13.1\n net: host\n privileged: false\n 
restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro\',\n \'/var/run/libvirt:/var/run/libvirt:ro\', \'/var/log/containers/ceilometer:/var/log/ceilometer\']\n logrotate_crond:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-cron:2018-07-13.1\n net: none\n pid: host\n privileged: true\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers:/var/log/containers\']\n nova_compute:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-nova-compute:2018-07-13.1\n ipc: host\n net: host\n privileged: true\n restart: always\n ulimit: [nofile=1024]\n user: nova\n 
volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro\',\n \'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro\', \'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro\',\n \'/dev:/dev\', \'/lib/modules:/lib/modules:ro\', \'/run:/run\', \'/var/lib/nova:/var/lib/nova:shared\',\n \'/var/lib/libvirt:/var/lib/libvirt\', \'/sys/class/net:/sys/class/net\',\n \'/sys/bus/pci:/sys/bus/pci\']\n nova_migration_target:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-nova-compute:2018-07-13.1\n net: host\n privileged: true\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro\',\n \'/etc/ssh/:/host-ssh/:ro\', \'/run:/run\', 
\'/var/lib/nova:/var/lib/nova:shared\']\n role_data_docker_config_scripts: {}\n role_data_docker_puppet_tasks: {}\n role_data_external_deploy_tasks: []\n role_data_external_post_deploy_tasks: []\n role_data_fast_forward_post_upgrade_tasks:\n - name: Register repo type and args\n set_fact:\n fast_forward_repo_args:\n tripleo_repos: {ocata: -b ocata current, pike: -b pike current, queens: -b\n queens current}\n fast_forward_repo_type: custom-script\n - debug: {msg: \'fast_forward_repo_type: {{ fast_forward_repo_type }} fast_forward_repo_args:\n {{ fast_forward_repo_args }}\'}\n - block:\n - git: {dest: /home/stack/tripleo-repos/, repo: \'https://github.com/openstack/tripleo-repos.git\'}\n name: clone tripleo-repos\n - args: {chdir: /home/stack/tripleo-repos/}\n command: python setup.py install\n name: install tripleo-repos\n - {command: \'tripleo-repos {{ fast_forward_repo_args.tripleo_repos[release]\n }}\', name: Enable tripleo-repos}\n when: [ffu_packages_apply|bool, fast_forward_repo_type == \'tripleo-repos\']\n - block:\n - copy: {content: "#!/bin/bash\\nset -e\\necho \\"If you use FastForwardRepoType\\\n \\ \'custom-script\' you have to provide the upgrade repo script content.\\"\\\n \\necho \\"It will be installed as /root/ffu_upgrade_repo.sh on the node\\"\\\n \\necho \\"and passed the upstream name (ocata, pike, queens) of the release\\\n \\ as first argument\\"\\ncase $1 in\\n ocata)\\n subscription-manager\\\n \\ repos --disable=rhel-7-server-openstack-10-rpms\\n subscription-manager\\\n \\ repos --enable=rhel-7-server-openstack-11-rpms\\n ;;\\n pike)\\n \\\n \\ subscription-manager repos --disable=rhel-7-server-openstack-11-rpms\\n\\\n \\ subscription-manager repos --enable=rhel-7-server-openstack-12-rpms\\n\\\n \\ ;;\\n queens)\\n subscription-manager repos --disable=rhel-7-server-openstack-12-rpms\\n\\\n \\ subscription-manager repos --enable=rhel-7-server-openstack-13-rpms\\n\\\n \\ ;;\\n *)\\n echo \\"unknown release $1\\" >&2\\n exit 1\\nesac\\n",\n 
dest: /root/ffu_update_repo.sh, mode: 448}\n name: Create custom Script for upgrading repo.\n - {name: Execute custom script for upgrading repo., shell: \'/root/ffu_update_repo.sh\n {{release}}\'}\n when: [ffu_packages_apply|bool, fast_forward_repo_type == \'custom-script\']\n role_data_fast_forward_upgrade_tasks:\n - command: systemctl is-enabled openstack-ceilometer-compute\n ignore_errors: true\n name: FFU check if openstack-ceilometer-compute is deployed\n register: ceilometer_agent_compute_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact ceilometer_agent_compute_enabled\n set_fact: {ceilometer_agent_compute_enabled: \'{{ ceilometer_agent_compute_enabled_result.rc\n == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-ceilometer-compute service\n service: name=openstack-ceilometer-compute state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', ceilometer_agent_compute_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-nova-compute\n ignore_errors: true\n name: Check if nova-compute is deployed\n register: nova_compute_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact nova_compute_enabled\n set_fact: {nova_compute_enabled: \'{{ nova_compute_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop and disable nova-compute service\n service: name=openstack-nova-compute state=stopped\n when: [step|int == 1, nova_compute_enabled|bool, release == \'ocata\']\n - name: Register repo type and args\n set_fact:\n fast_forward_repo_args:\n tripleo_repos: {ocata: -b ocata current, pike: -b pike current, queens: -b\n queens current}\n fast_forward_repo_type: custom-script\n when: step|int == 3\n - debug: {msg: \'fast_forward_repo_type: {{ fast_forward_repo_type }} fast_forward_repo_args:\n {{ fast_forward_repo_args }}\'}\n when: step|int == 3\n - block:\n - git: {dest: /home/stack/tripleo-repos/, repo: 
\'https://github.com/openstack/tripleo-repos.git\'}\n name: clone tripleo-repos\n - args: {chdir: /home/stack/tripleo-repos/}\n command: python setup.py install\n name: install tripleo-repos\n - {command: \'tripleo-repos {{ fast_forward_repo_args.tripleo_repos[release]\n }}\', name: Enable tripleo-repos}\n when: [step|int == 3, ffu_packages_apply|bool, fast_forward_repo_type == \'tripleo-repos\']\n - block:\n - copy: {content: "#!/bin/bash\\nset -e\\necho \\"If you use FastForwardRepoType\\\n \\ \'custom-script\' you have to provide the upgrade repo script content.\\"\\\n \\necho \\"It will be installed as /root/ffu_upgrade_repo.sh on the node\\"\\\n \\necho \\"and passed the upstream name (ocata, pike, queens) of the release\\\n \\ as first argument\\"\\ncase $1 in\\n ocata)\\n subscription-manager\\\n \\ repos --disable=rhel-7-server-openstack-10-rpms\\n subscription-manager\\\n \\ repos --enable=rhel-7-server-openstack-11-rpms\\n ;;\\n pike)\\n \\\n \\ subscription-manager repos --disable=rhel-7-server-openstack-11-rpms\\n\\\n \\ subscription-manager repos --enable=rhel-7-server-openstack-12-rpms\\n\\\n \\ ;;\\n queens)\\n subscription-manager repos --disable=rhel-7-server-openstack-12-rpms\\n\\\n \\ subscription-manager repos --enable=rhel-7-server-openstack-13-rpms\\n\\\n \\ ;;\\n *)\\n echo \\"unknown release $1\\" >&2\\n exit 1\\nesac\\n",\n dest: /root/ffu_update_repo.sh, mode: 448}\n name: Create custom Script for upgrading repo.\n - {name: Execute custom script for upgrading repo., shell: \'/root/ffu_update_repo.sh\n {{release}}\'}\n when: [step|int == 3, ffu_packages_apply|bool, fast_forward_repo_type == \'custom-script\']\n role_data_global_config_settings: {}\n role_data_host_prep_tasks:\n - file: {path: /var/log/containers/ceilometer, state: directory}\n name: create persistent logs directory\n - copy: {content: \'Log files from ceilometer containers can be found under\n\n /var/log/containers/ceilometer.\n\n \', dest: /var/log/ceilometer/readme.txt}\n 
ignore_errors: true\n name: ceilometer logs readme\n - {name: stat /lib/systemd/system/iscsid.socket, register: stat_iscsid_socket,\n stat: path=/lib/systemd/system/iscsid.socket}\n - {name: Stop and disable iscsid.socket service, service: name=iscsid.socket state=stopped\n enabled=no, when: stat_iscsid_socket.stat.exists}\n - file: {path: /var/log/containers/nova, state: directory}\n name: create persistent logs directory\n - copy: {content: \'Log files from nova containers can be found under\n\n /var/log/containers/nova and /var/log/containers/httpd/nova-*.\n\n \', dest: /var/log/nova/readme.txt}\n ignore_errors: true\n name: nova logs readme\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent directories\n with_items: [/var/lib/nova, /var/lib/libvirt]\n - file: {path: /etc/ceph, state: directory}\n name: ensure ceph configurations exist\n - name: is Instance HA enabled\n set_fact: {instance_ha_enabled: false}\n - block:\n - file: {path: /var/lib/nova/instanceha, state: directory}\n name: prepare Instance HA script directory\n - copy: {content: "#!/bin/python -utt\\n\\nimport os\\nimport sys\\nimport time\\n\\\n import inspect\\nimport logging\\nimport argparse\\nimport oslo_config.cfg\\n\\\n import requests.exceptions\\n\\ndef is_forced_down(connection, hostname):\\n\\\n \\ services = connection.services.list(host=hostname, binary=\\"nova-compute\\"\\\n )\\n for service in services:\\n if service.forced_down:\\n \\\n \\ return True\\n return False\\n\\ndef evacuations_done(connection,\\\n \\ hostname):\\n # Get a list of migrations.\\n # :param host: (optional)\\\n \\ filter migrations by host name.\\n # :param status: (optional) filter\\\n \\ migrations by status.\\n # :param cell_name: (optional) filter migrations\\\n \\ for a cell.\\n #\\n migrations = connection.migrations.list(host=hostname)\\n\\\n \\n print(\\"Checking %d migrations\\" % len(migrations))\\n for migration\\\n \\ in migrations:\\n # print migration.to_dict()\\n #\\n 
\\\n \\ # {\\n # u\'status\': u\'error\',\\n # u\'dest_host\': None,\\n\\\n \\ # u\'new_instance_type_id\': 2,\\n # u\'old_instance_type_id\':\\\n \\ 2,\\n # u\'updated_at\': u\'2018-04-22T20:55:29.000000\',\\n \\\n \\ # u\'dest_compute\':\\n # u\'overcloud-novacompute-2.localdomain\',\\n\\\n \\ # u\'migration_type\': u\'live-migration\',\\n # u\'source_node\':\\n\\\n \\ # u\'overcloud-novacompute-0.localdomain\',\\n # u\'id\':\\\n \\ 8,\\n # u\'created_at\': u\'2018-04-22T20:52:58.000000\',\\n \\\n \\ # u\'instance_uuid\':\\n # u\'d1c82ce8-3dc5-48db-b59f-854b3b984ef1\',\\n\\\n \\ # u\'dest_node\':\\n # u\'overcloud-novacompute-2.localdomain\',\\n\\\n \\ # u\'source_compute\':\\n # u\'overcloud-novacompute-0.localdomain\'\\n\\\n \\ # }\\n # Acceptable: done, completed, failed\\n if\\\n \\ migration.status in [\\"running\\", \\"accepted\\", \\"pre-migrating\\"]:\\n\\\n \\ return False\\n return True\\n\\ndef safe_to_start(connection,\\\n \\ hostname):\\n if is_forced_down(connection, hostname):\\n print(\\"\\\n Waiting for fence-down flag to be cleared\\")\\n return False\\n \\\n \\ if not evacuations_done(connection, hostname):\\n print(\\"Waiting\\\n \\ for evacuations to complete or fail\\")\\n return False\\n return\\\n \\ True\\n\\ndef create_nova_connection(options):\\n try:\\n from\\\n \\ novaclient import client\\n from novaclient.exceptions import\\\n \\ NotAcceptable\\n except ImportError:\\n print(\\"Nova not found\\\n \\ or not accessible\\")\\n sys.exit(1)\\n\\n from keystoneauth1\\\n \\ import loading\\n from keystoneauth1 import session\\n from keystoneclient\\\n \\ import discover\\n\\n # Prefer the oldest and strip the leading \'v\'\\n\\\n \\ keystone_versions = discover.available_versions(options[\\"auth_url\\"\\\n ][0])\\n keystone_version = keystone_versions[0][\'id\'][1:]\\n kwargs\\\n \\ = dict(\\n auth_url=options[\\"auth_url\\"][0],\\n username=options[\\"\\\n username\\"][0],\\n password=options[\\"password\\"][0]\\n )\\n\\\n \\n if 
discover.version_match(\\"2\\", keystone_version):\\n kwargs[\\"\\\n tenant_name\\"] = options[\\"tenant_name\\"][0]\\n\\n elif discover.version_match(\\"\\\n 3\\", keystone_version):\\n kwargs[\\"project_name\\"] = options[\\"\\\n project_name\\"][0]\\n kwargs[\\"user_domain_name\\"] = options[\\"\\\n user_domain_name\\"][0]\\n kwargs[\\"project_domain_name\\"] = options[\\"\\\n project_domain_name\\"][0]\\n\\n loader = loading.get_plugin_loader(\'password\')\\n\\\n \\ keystone_auth = loader.load_from_options(**kwargs)\\n keystone_session\\\n \\ = session.Session(auth=keystone_auth, verify=(not options[\\"insecure\\"\\\n ]))\\n\\n nova_versions = [ \\"2.23\\", \\"2\\" ]\\n for version in nova_versions:\\n\\\n \\ clientargs = inspect.getargspec(client.Client).varargs\\n \\\n \\ # Some versions of Openstack prior to Ocata only\\n # supported\\\n \\ positional arguments for username,\\n # password, and tenant.\\n\\\n \\ #\\n # Versions since Ocata only support named arguments.\\n\\\n \\ #\\n # So we need to use introspection to figure out how\\\n \\ to\\n # create a Nova client.\\n #\\n # Happy days\\n\\\n \\ #\\n if clientargs:\\n # OSP < Ocata\\n \\\n \\ # ArgSpec(args=[\'version\', \'username\', \'password\', \'project_id\',\\\n \\ \'auth_url\'],\\n # varargs=None,\\n # \\\n \\ keywords=\'kwargs\', defaults=(None, None, None, None))\\n \\\n \\ nova = client.Client(version,\\n \\\n \\ None, # User\\n None, # Password\\n \\\n \\ None, # Tenant\\n \\\n \\ None, # Auth URL\\n insecure=options[\\"\\\n insecure\\"],\\n region_name=options[\\"\\\n os_region_name\\"][0],\\n session=keystone_session,\\\n \\ auth=keystone_auth,\\n http_log_debug=options.has_key(\\"\\\n verbose\\"))\\n else:\\n # OSP >= Ocata\\n #\\\n \\ ArgSpec(args=[\'version\'], varargs=\'args\', keywords=\'kwargs\', defaults=None)\\n\\\n \\ nova = client.Client(version,\\n \\\n \\ region_name=options[\\"os_region_name\\"][0],\\n \\\n \\ session=keystone_session, auth=keystone_auth,\\n \\\n \\ 
http_log_debug=options.has_key(\\"verbose\\"\\\n ))\\n\\n try:\\n nova.hypervisors.list()\\n return\\\n \\ nova\\n\\n except NotAcceptable as e:\\n logging.warning(e)\\n\\\n \\n except Exception as e:\\n logging.warning(\\"Nova connection\\\n \\ failed. %s: %s\\" % (e.__class__.__name__, e))\\n\\n print(\\"Couldn\'t\\\n \\ obtain a supported connection to nova, tried: %s\\\\n\\" % repr(nova_versions))\\n\\\n \\ return None\\n\\n\\nparser = argparse.ArgumentParser(description=\'Process\\\n \\ some integers.\')\\nparser.add_argument(\'--config-file\', dest=\'nova_config\',\\\n \\ action=\'store\',\\n default=\\"/etc/nova/nova.conf\\"\\\n ,\\n help=\'path to nova configuration (default: /etc/nova/nova.conf)\')\\n\\\n parser.add_argument(\'--nova-binary\', dest=\'nova_binary\', action=\'store\',\\n\\\n \\ default=\\"/usr/bin/nova-compute\\",\\n \\\n \\ help=\'path to nova compute binary (default: /usr/bin/nova-compute)\')\\n\\\n parser.add_argument(\'--enable-file\', dest=\'enable_file\', action=\'store\',\\n\\\n \\ default=\\"/var/lib/nova/instanceha/enabled\\",\\n \\\n \\ help=\'file exists if instance HA is enabled on this\\\n \\ host \'\\\\\\n \'(default: /var/lib/nova/instanceha/enabled)\')\\n\\\n \\n\\nsections = {}\\n(args, remaining) = parser.parse_known_args(sys.argv)\\n\\\n \\nconfig = oslo_config.cfg.ConfigParser(args.nova_config, sections)\\n\\\n config.parse()\\nconfig.sections[\\"placement\\"][\\"insecure\\"] = 0\\nconfig.sections[\\"\\\n placement\\"][\\"verbose\\"] = 1\\n\\nif os.path.isfile(args.enable_file):\\n\\\n \\ connection = None\\n while not connection:\\n # Loop in case\\\n \\ the control plane is recovering when we run\\n connection = create_nova_connection(config.sections[\\"\\\n placement\\"])\\n if not connection:\\n time.sleep(10)\\n\\\n \\n while not safe_to_start(connection, config.sections[\\"DEFAULT\\"\\\n ][\\"host\\"][0]):\\n time.sleep(10)\\n\\nreal_args = [args.nova_binary,\\\n \\ \'--config-file\', 
args.nova_config]\\nreal_args.extend(remaining[1:])\\n\\\n os.execv(args.nova_binary, real_args)\\n", dest: /var/lib/nova/instanceha/check-run-nova-compute,\n mode: 493}\n name: install Instance HA script that runs nova-compute\n - {command: hiera -c /etc/puppet/hiera.yaml compute_instanceha_short_node_names,\n name: Get list of instance HA compute nodes, register: iha_nodes}\n - {file: path=/var/lib/nova/instanceha/enabled state=touch, name: If instance\n HA is enabled on the node activate the evacuation completed check, when: iha_nodes.stdout|lower\n | search(\'"\'+ansible_hostname|lower+\'"\')}\n name: install Instance HA recovery script\n when: instance_ha_enabled|bool\n - file: {path: \'{{ item }}\', state: directory}\n name: create libvirt persistent data directories\n with_items: [/etc/libvirt, /etc/libvirt/secrets, /etc/libvirt/qemu, /var/lib/libvirt,\n /var/log/containers/libvirt]\n - group: {gid: 107, name: qemu, state: present}\n name: ensure qemu group is present on the host\n - name: ensure qemu user is present on the host\n user: {comment: qemu user, group: qemu, name: qemu, shell: /sbin/nologin, state: present,\n uid: 107}\n - file: {group: qemu, owner: qemu, path: /var/lib/vhost_sockets, setype: virt_cache_t,\n seuser: system_u, state: directory}\n name: create directory for vhost-user sockets with qemu ownership\n - {command: /usr/bin/rpm -q libvirt-daemon, failed_when: false, name: check if\n libvirt is installed, register: libvirt_installed}\n - name: make sure libvirt services are disabled\n service: {enabled: false, name: \'{{ item }}\', state: stopped}\n when: libvirt_installed.rc == 0\n with_items: [libvirtd.service, virtlogd.socket]\n role_data_kolla_config:\n /var/lib/kolla/config_files/ceilometer_agent_compute.json:\n command: /usr/bin/ceilometer-polling --polling-namespaces compute --logfile\n /var/log/ceilometer/compute.log\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n 
/var/lib/kolla/config_files/iscsid.json:\n command: /usr/sbin/iscsid -f\n config_files:\n - {dest: /etc/iscsi/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-iscsid/*}\n /var/lib/kolla/config_files/logrotate-crond.json:\n command: /usr/sbin/crond -s -n\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/nova-migration-target.json:\n command: /usr/sbin/sshd -D -p 2022\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n - {dest: /etc/ssh/, owner: root, perm: \'0600\', source: /host-ssh/ssh_host_*_key}\n /var/lib/kolla/config_files/nova_compute.json:\n command: \'/usr/bin/nova-compute \'\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n - {dest: /etc/iscsi/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-iscsid/*}\n - {dest: /etc/ceph/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-ceph/}\n permissions:\n - {owner: \'nova:nova\', path: /var/log/nova, recurse: true}\n - {owner: \'nova:nova\', path: /var/lib/nova, recurse: true}\n - {owner: \'nova:nova\', path: /etc/ceph/ceph.client.openstack.keyring, perm: \'0600\'}\n /var/lib/kolla/config_files/nova_libvirt.json:\n command: /usr/sbin/libvirtd\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n - {dest: /etc/ceph/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-ceph/}\n permissions:\n - {owner: \'nova:nova\', path: /etc/ceph/ceph.client.openstack.keyring, perm: \'0600\'}\n /var/lib/kolla/config_files/nova_virtlogd.json:\n command: /usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n 
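The config_files entries above drive kolla's copy step at container start: each glob `source` is copied into `dest`, with `preserve_properties: true` keeping ownership and mode. A minimal sketch of that copy semantics, under simplified assumptions (the real kolla_set_configs also handles `owner`/`perm` overrides and SELinux labels; the paths below are hypothetical):

```python
import glob
import os
import shutil
import tempfile

def apply_config_files(entries):
    """Copy each glob 'source' into its 'dest' directory, roughly the way
    kolla consumes the config_files lists shown above."""
    for entry in entries:
        os.makedirs(entry["dest"], exist_ok=True)
        for path in glob.glob(entry["source"]):
            # copy2 keeps mode/mtime, mimicking preserve_properties: true
            shutil.copy2(path, entry["dest"])

# Hypothetical source/destination directories for illustration only
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
with open(os.path.join(src, "nova.conf"), "w") as f:
    f.write("[DEFAULT]\n")
apply_config_files([{"source": src + "/*", "dest": dst,
                     "merge": True, "preserve_properties": True}])
print(sorted(os.listdir(dst)))
```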
role_data_logging_groups: [root]\n role_data_logging_sources: []\n role_data_merged_config_settings:\n ceilometer::agent::auth::auth_endpoint_type: internalURL\n ceilometer::agent::auth::auth_password: ZUMGXYGsUAsWVRjeZaJfeAv9y\n ceilometer::agent::auth::auth_project_domain_name: Default\n ceilometer::agent::auth::auth_region: regionOne\n ceilometer::agent::auth::auth_tenant_name: service\n ceilometer::agent::auth::auth_url: http://172.17.1.10:5000\n ceilometer::agent::auth::auth_user_domain_name: Default\n ceilometer::agent::compute::instance_discovery_method: libvirt_metadata\n ceilometer::agent::notification::event_pipeline_publishers: [\'gnocchi://\', \'panko://\']\n ceilometer::agent::notification::manage_event_pipeline: true\n ceilometer::agent::notification::manage_pipeline: false\n ceilometer::agent::notification::pipeline_publishers: [\'gnocchi://\']\n ceilometer::agent::polling::manage_polling: false\n ceilometer::debug: true\n ceilometer::dispatcher::gnocchi::archive_policy: low\n ceilometer::dispatcher::gnocchi::filter_project: service\n ceilometer::dispatcher::gnocchi::resources_definition_file: gnocchi_resources.yaml\n ceilometer::dispatcher::gnocchi::url: http://172.17.1.10:8041\n ceilometer::host: \'%{::fqdn}\'\n ceilometer::keystone::authtoken::auth_uri: http://172.17.1.10:5000\n ceilometer::keystone::authtoken::auth_url: http://172.17.1.10:5000\n ceilometer::keystone::authtoken::password: ZUMGXYGsUAsWVRjeZaJfeAv9y\n ceilometer::keystone::authtoken::project_domain_name: Default\n ceilometer::keystone::authtoken::project_name: service\n ceilometer::keystone::authtoken::user_domain_name: Default\n ceilometer::notification_driver: messagingv2\n ceilometer::rabbit_heartbeat_timeout_threshold: 60\n ceilometer::rabbit_password: weVyVyHzxXn9URCQNmHmUCsYg\n ceilometer::rabbit_port: 5672\n ceilometer::rabbit_use_ssl: \'False\'\n ceilometer::rabbit_userid: guest\n ceilometer::snmpd_readonly_user_password: e0e6f3b1f8575fd51ee080d6b2724feef235ed7e\n 
ceilometer::snmpd_readonly_username: ro_snmp_user\n ceilometer::telemetry_secret: ey9QkWYUbQMUv7hUXn2xzTrvM\n ceilometer_redis_password: jv8TQJ7wGC7M7e6ez2GNPfke7\n cold_migration_ssh_inbound_addr: internal_api\n compute_namespace: true\n kernel_modules:\n nf_conntrack: {}\n nf_conntrack_proto_sctp: {}\n live_migration_ssh_inbound_addr: internal_api\n neutron::agents::ml2::ovs::local_ip: tenant\n neutron::plugins::ovs::opendaylight::allowed_network_types: [local, flat, vlan,\n vxlan, gre]\n neutron::plugins::ovs::opendaylight::enable_dpdk: false\n neutron::plugins::ovs::opendaylight::enable_hw_offload: false\n neutron::plugins::ovs::opendaylight::odl_password: redhat\n neutron::plugins::ovs::opendaylight::odl_username: odladmin\n neutron::plugins::ovs::opendaylight::provider_mappings: [\'datacentre:br-ex\']\n neutron::plugins::ovs::opendaylight::vhostuser_mode: server\n neutron::plugins::ovs::opendaylight::vhostuser_socket_dir: /var/lib/vhost_sockets\n nova::api_database_connection: mysql+pymysql://nova_api:6BkAm2KjhQcBQjbmFfcxWwJUq@172.17.1.10/nova_api?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n nova::cell0_database_connection: mysql+pymysql://nova:6BkAm2KjhQcBQjbmFfcxWwJUq@172.17.1.10/nova_cell0?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n nova::cinder_catalog_info: volumev3:cinderv3:internalURL\n nova::compute::consecutive_build_service_disable_threshold: \'0\'\n nova::compute::instance_usage_audit: true\n nova::compute::instance_usage_audit_period: hour\n nova::compute::libvirt::libvirt_enabled_perf_events: []\n nova::compute::libvirt::libvirt_virt_type: kvm\n nova::compute::libvirt::manage_libvirt_services: false\n nova::compute::libvirt::migration_support: false\n nova::compute::libvirt::qemu::configure_qemu: true\n nova::compute::libvirt::qemu::group: qemu\n nova::compute::libvirt::qemu::max_files: 32768\n nova::compute::libvirt::qemu::max_processes: 131072\n 
nova::compute::libvirt::services::libvirt_virt_type: kvm\n nova::compute::libvirt::vncserver_listen: internal_api\n nova::compute::neutron::libvirt_vif_driver: \'\'\n nova::compute::pci::passthrough: \'\'\n nova::compute::rbd::ephemeral_storage: false\n nova::compute::rbd::libvirt_images_rbd_ceph_conf: /etc/ceph/ceph.conf\n nova::compute::rbd::libvirt_images_rbd_pool: vms\n nova::compute::rbd::libvirt_rbd_secret_key: AQAvSFhbAAAAABAAp+EMtuy9P+WQwvxTR4GS1A==\n nova::compute::rbd::libvirt_rbd_secret_uuid: 563e8cce-8ff0-11e8-adc7-525400eecd02\n nova::compute::rbd::libvirt_rbd_user: openstack\n nova::compute::rbd::rbd_keyring: client.openstack\n nova::compute::reserved_host_memory: 4096\n nova::compute::vcpu_pin_set: []\n nova::compute::verify_glance_signatures: false\n nova::compute::vncproxy_host: 10.0.0.106\n nova::compute::vncserver_proxyclient_address: internal_api\n nova::cron::archive_deleted_rows::destination: /var/log/nova/nova-rowsflush.log\n nova::cron::archive_deleted_rows::hour: \'0\'\n nova::cron::archive_deleted_rows::max_rows: \'100\'\n nova::cron::archive_deleted_rows::minute: \'1\'\n nova::cron::archive_deleted_rows::month: \'*\'\n nova::cron::archive_deleted_rows::monthday: \'*\'\n nova::cron::archive_deleted_rows::until_complete: false\n nova::cron::archive_deleted_rows::user: nova\n nova::cron::archive_deleted_rows::weekday: \'*\'\n nova::database_connection: mysql+pymysql://nova:6BkAm2KjhQcBQjbmFfcxWwJUq@172.17.1.10/nova?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n nova::db::database_db_max_retries: -1\n nova::db::database_max_retries: -1\n nova::db::sync::db_sync_timeout: 300\n nova::db::sync_api::db_sync_timeout: 300\n nova::debug: true\n nova::glance_api_servers: http://172.17.1.10:9292\n nova::host: \'%{::fqdn}\'\n nova::migration::live_migration_tunnelled: false\n nova::my_ip: internal_api\n nova::network::neutron::dhcp_domain: \'\'\n nova::network::neutron::neutron_auth_type: v3password\n 
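The nova::cron::archive_deleted_rows values above (minute '1', hour '0', wildcards elsewhere, max_rows '100', log destination) end up as a cron entry for the nova user. A hedged sketch of roughly the line that gets rendered — the exact nova-manage invocation and flags are assumptions for illustration:

```python
# Render the archive_deleted_rows hieradata above as a crontab line.
cron = {"minute": "1", "hour": "0", "monthday": "*", "month": "*",
        "weekday": "*", "max_rows": "100",
        "destination": "/var/log/nova/nova-rowsflush.log"}

# nova-manage command shape is an assumption, not taken from the log.
line = ("{minute} {hour} {monthday} {month} {weekday} "
        "nova-manage db archive_deleted_rows --max_rows {max_rows} "
        ">> {destination} 2>&1").format(**cron)
print(line)
```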
nova::network::neutron::neutron_auth_url: http://192.168.24.10:35357/v3\n nova::network::neutron::neutron_ovs_bridge: br-int\n nova::network::neutron::neutron_password: anbEgsRDNBffKrcVkyZd2wPYr\n nova::network::neutron::neutron_project_name: service\n nova::network::neutron::neutron_region_name: regionOne\n nova::network::neutron::neutron_url: http://172.17.1.10:9696\n nova::network::neutron::neutron_username: neutron\n nova::notification_driver: messagingv2\n nova::notification_format: unversioned\n nova::notify_on_state_change: vm_and_task_state\n nova::placement::auth_url: http://172.17.1.10:5000\n nova::placement::os_interface: internal\n nova::placement::os_region_name: regionOne\n nova::placement::password: 6BkAm2KjhQcBQjbmFfcxWwJUq\n nova::placement::project_name: service\n nova::placement_database_connection: mysql+pymysql://nova_placement:6BkAm2KjhQcBQjbmFfcxWwJUq@172.17.1.10/nova_placement?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n nova::purge_config: false\n nova::rabbit_heartbeat_timeout_threshold: 60\n nova::rabbit_password: weVyVyHzxXn9URCQNmHmUCsYg\n nova::rabbit_port: 5672\n nova::rabbit_use_ssl: \'False\'\n nova::rabbit_userid: guest\n nova::use_ipv6: false\n nova::vncproxy::common::vncproxy_host: 10.0.0.106\n nova::vncproxy::common::vncproxy_port: \'6080\'\n nova::vncproxy::common::vncproxy_protocol: http\n ntp::iburst_enable: true\n \'ntp::maxpoll:\': 10\n \'ntp::minpoll:\': 6\n ntp::servers: [clock.redhat.com]\n opendaylight::log_levels: {org.opendaylight.genius: DEBUG, org.opendaylight.netvirt: DEBUG}\n opendaylight::log_max_rollover: 50\n opendaylight::odl_rest_port: \'8081\'\n opendaylight::password: redhat\n opendaylight::username: odladmin\n opendaylight_check_url: restconf/operational/network-topology:network-topology/topology/netvirt:1\n rbd_persistent_storage: false\n snmp::agentaddress: [\'udp:161\', \'udp6:[::1]:161\']\n snmp::snmpd_options: -LS0-5d\n snmpd_network: internal_api_subnet\n 
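opendaylight_check_url and opendaylight::odl_rest_port above are what the update workflow polls to confirm the netvirt topology is back after an ODL restart. A sketch of the resulting restconf URL — using internal_api_vip (172.17.1.10) as the controller address is an assumption here:

```python
# Build the ODL health-check URL from the hieradata shown above.
odl_host = "172.17.1.10"  # internal_api_vip; assumption for illustration
odl_rest_port = "8081"    # opendaylight::odl_rest_port
check_path = ("restconf/operational/network-topology:network-topology"
              "/topology/netvirt:1")

url = f"http://{odl_host}:{odl_rest_port}/{check_path}"
print(url)
```

With the odladmin credentials from the same hieradata, a 200 response from this URL indicates the netvirt:1 topology is present again.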
sysctl_settings:\n fs.inotify.max_user_instances: {value: 1024}\n fs.suid_dumpable: {value: 0}\n kernel.dmesg_restrict: {value: 1}\n kernel.pid_max: {value: 1048576}\n net.core.netdev_max_backlog: {value: 10000}\n net.ipv4.conf.all.arp_accept: {value: 1}\n net.ipv4.conf.all.log_martians: {value: 1}\n net.ipv4.conf.all.secure_redirects: {value: 0}\n net.ipv4.conf.all.send_redirects: {value: 0}\n net.ipv4.conf.default.accept_redirects: {value: 0}\n net.ipv4.conf.default.log_martians: {value: 1}\n net.ipv4.conf.default.secure_redirects: {value: 0}\n net.ipv4.conf.default.send_redirects: {value: 0}\n net.ipv4.ip_forward: {value: 1}\n net.ipv4.neigh.default.gc_thresh1: {value: 1024}\n net.ipv4.neigh.default.gc_thresh2: {value: 2048}\n net.ipv4.neigh.default.gc_thresh3: {value: 4096}\n net.ipv4.tcp_keepalive_intvl: {value: 1}\n net.ipv4.tcp_keepalive_probes: {value: 5}\n net.ipv4.tcp_keepalive_time: {value: 5}\n net.ipv6.conf.all.accept_ra: {value: 0}\n net.ipv6.conf.all.accept_redirects: {value: 0}\n net.ipv6.conf.all.autoconf: {value: 0}\n net.ipv6.conf.all.disable_ipv6: {value: 0}\n net.ipv6.conf.default.accept_ra: {value: 0}\n net.ipv6.conf.default.accept_redirects: {value: 0}\n net.ipv6.conf.default.autoconf: {value: 0}\n net.ipv6.conf.default.disable_ipv6: {value: 0}\n net.netfilter.nf_conntrack_max: {value: 500000}\n net.nf_conntrack_max: {value: 500000}\n timezone::timezone: Europe/London\n tripleo.nova_libvirt.firewall_rules:\n 200 nova_libvirt:\n dport: [16514, 49152-49215, 5900-6923]\n tripleo.nova_migration_target.firewall_rules:\n 113 nova_migration_target:\n dport: [2022]\n tripleo.ntp.firewall_rules:\n 105 ntp: {dport: 123, proto: udp}\n tripleo.opendaylight_ovs.firewall_rules:\n 118 neutron vxlan networks: {dport: 4789, proto: udp}\n 136 neutron gre networks: {proto: gre}\n tripleo.snmp.firewall_rules:\n 124 snmp: {dport: 161, proto: udp, source: \'%{hiera(\'\'snmpd_network\'\')}\'}\n tripleo::firewall::manage_firewall: true\n 
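Each key in the sysctl_settings hash above carries a {value: ...} dict; rendering that shape into sysctl.conf form is straightforward. A sketch over a few of the entries (not the actual puppet output):

```python
# Render a few of the sysctl_settings entries above into sysctl.conf lines.
sysctl_settings = {
    "net.ipv4.ip_forward": {"value": 1},
    "net.netfilter.nf_conntrack_max": {"value": 500000},
    "net.ipv4.tcp_keepalive_time": {"value": 5},
}

def render(settings):
    """Emit 'key = value' lines, sorted for stable output."""
    return "\n".join(f"{key} = {meta['value']}"
                     for key, meta in sorted(settings.items()))

print(render(sysctl_settings))
```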
tripleo::firewall::purge_firewall_rules: false\n tripleo::packages::enable_install: false\n tripleo::profile::base::certmonger_user::libvirt_postsave_cmd: \'true\'\n tripleo::profile::base::database::mysql::client::enable_ssl: false\n tripleo::profile::base::database::mysql::client::mysql_client_bind_address: internal_api\n tripleo::profile::base::database::mysql::client::ssl_ca: /etc/ipa/ca.crt\n tripleo::profile::base::docker::additional_sockets: [/var/lib/openstack/docker.sock]\n tripleo::profile::base::docker::configure_network: true\n tripleo::profile::base::docker::debug: true\n tripleo::profile::base::docker::docker_options: --log-driver=journald --signature-verification=false\n --iptables=false --live-restore\n tripleo::profile::base::docker::insecure_registries: [\'192.168.24.1:8787\']\n tripleo::profile::base::docker::network_options: --bip=172.31.0.1/24\n tripleo::profile::base::neutron::plugins::ovs::opendaylight::vhostuser_socket_group: qemu\n tripleo::profile::base::neutron::plugins::ovs::opendaylight::vhostuser_socket_user: qemu\n tripleo::profile::base::nova::compute::cinder_nfs_backend: false\n tripleo::profile::base::nova::migration::client::libvirt_enabled: true\n tripleo::profile::base::nova::migration::client::nova_compute_enabled: true\n tripleo::profile::base::nova::migration::client::ssh_port: 2022\n tripleo::profile::base::nova::migration::client::ssh_private_key: \'-----BEGIN\n RSA PRIVATE KEY-----\n\n MIIEpQIBAAKCAQEAy/LJL0ClWufF7gcL+RybBImHOdLn64kKSp8cs6xrIyZDtNod\n\n QkRRrDsAY8PnqQENTVWXbWBehLQQL2lb3frpPrAR07KsyUoO1DWOriPyUyIGpO4M\n\n Q9FREwmxPhPDJg/LDG9VgCjKrkL+yFVuIxfSF5/EkRLbfb00DHN7zs5jOtSLf7B0\n\n Amn80S1GzYkgAaubMBWZpSeAo69SmKVd1ziDuVgb4r8rZ646Jgi1ZSX3fJRbaPSk\n\n 3E4kVJpPWY2ykB9r2zyydGI3XKcsHikLNZx9bNEMdny92xxLDGnsklyErUuZ9R/3\n\n xwMnqypI8mtmz77eS/MFhMFS3fJ8okVQks8fUwIDAQABAoIBAQCbX7N1lEJlJv3b\n\n gPLWLbzLkBq9KrgU8Kouf1lWaJyWgqhCN4ji2zl9hNWfK7hpQKvpprNeWHSplKRf\n\n 
+lxKmMTpRSnPpeeM0ibJ9KNmd2w9eUamj9Q4NlcVseSd7mBVtuJx7r+si2cdq1x/\n\n MtZdVeBwrv8JptwgxuvIMJK50vI19iitRFQWd6Y+0HBzIqR0QZdr/kRakJTMpka2\n\n atiSUz2Bq8ybEkYQ8zna0E+fNM9I/ibB0JV7fbU3a2X7ZeLLdncaIJIt6KatFass\n\n sWZtVEHnnBa6fZscGAA0DOqJWwFYLUj2A2SBgsc6QnVqnBzQUdozgyYSjCW71Rwt\n\n x6FggtyBAoGBAP8M4DPuog/CD8MFgXBQHVJvGpQjBCVaL4zMKb+qn7vTUV4suQFZ\n\n YO2lJRz3Nst4HHVOFl9kwWCy0M+5pvaz+gebbrxND3KNm97m8U0aXOM2pbcZpBrs\n\n cVNNadNKECly38pz4xW+UinQl5ftldYjWswhiSYX9GKgtn23U+UnUjDpAoGBAMy1\n\n Mptpkt07yrN33OfQNQICIXRg7ap31bs7mHWe8Dv/kxN82WgZ51ohwTV+cTXKkATz\n\n k0rHUMpqM/9SuX/ClbBLqSU11F2TTFDFscyiOnqbaiRkqJ2M0khzasFtreLHqCTO\n\n vdV/fHvBqnF0UPQUtZAblAhAM5ETN1xNqApzOAjbAoGBAOQAw7FJRDFoH6UNF/Cq\n\n ffwCfLUvNHabz+RDY5MHWjKTr6rLuju9hgwMVUg2rBJq9q3bN97heIoUcN0yL1Ne\n\n A0enqO/Gx+d1NoGm3NI7ngw0/yHXV0AGXSzGCLOtAxO6sNsQjFIUyOi+o7Za21cK\n\n VhIkbLHUOlGtMFbke6hgZXZ5AoGBAKyB1g/ZvAXrqTnsPKCteL4khYTJWf9Z1Sdf\n\n ZW9ZbSFiktLNV3i+u5Pc9jDaSRUHiq5hhTJzHMY3EXKMh/3+QJ68Y+ITps7knl9C\n\n +j50R8uixKO+n8mFLoAXo1M11l9R2YSLJLaSJJk17yiE2OOXwBmc4/bAA7Sx+Ok0\n\n F/QWfJYZAoGAKbKbyW8pztncDaOTD2/kJzYiXHlCnctMgNP0brurD/W3iBhTXKS5\n\n R3eWDPS5LKuxswg8fF1LOj8DhwBC9k1Ssu4kbQ4O4OeCr+Hci8FeQP13s98tvzXv\n\n XtIN4KCdIvMe0XBt/ReAbdkd+lhCFzwkIG96Fv7FEsCKCsfDO4ukDp0=\n\n -----END RSA PRIVATE KEY-----\n\n \'\n tripleo::profile::base::nova::migration::target::ssh_authorized_keys: [ssh-rsa\n AAAAB3NzaC1yc2EAAAADAQABAAABAQDL8skvQKVa58XuBwv5HJsEiYc50ufriQpKnxyzrGsjJkO02h1CRFGsOwBjw+epAQ1NVZdtYF6EtBAvaVvd+uk+sBHTsqzJSg7UNY6uI/JTIgak7gxD0VETCbE+E8MmD8sMb1WAKMquQv7IVW4jF9IXn8SREtt9vTQMc3vOzmM61It/sHQCafzRLUbNiSABq5swFZmlJ4Cjr1KYpV3XOIO5WBvivytnrjomCLVlJfd8lFto9KTcTiRUmk9ZjbKQH2vbPLJ0YjdcpyweKQs1nH1s0Qx2fL3bHEsMaeySXIStS5n1H/fHAyerKkjya2bPvt5L8wWEwVLd8nyiRVCSzx9T\n Generated by TripleO]\n tripleo::profile::base::nova::migration::target::ssh_localaddrs: [\'%{hiera(\'\'cold_migration_ssh_inbound_addr\'\')}\',\n \'%{hiera(\'\'live_migration_ssh_inbound_addr\'\')}\']\n tripleo::profile::base::snmp::snmpd_password: 
e0e6f3b1f8575fd51ee080d6b2724feef235ed7e\n tripleo::profile::base::snmp::snmpd_user: ro_snmp_user\n tripleo::profile::base::sshd::bannertext: \'\'\n tripleo::profile::base::sshd::motd: \'\'\n tripleo::profile::base::sshd::options:\n AcceptEnv: [LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES,\n LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT, LC_IDENTIFICATION\n LC_ALL LANGUAGE, XMODIFIERS]\n AuthorizedKeysFile: .ssh/authorized_keys\n ChallengeResponseAuthentication: \'no\'\n GSSAPIAuthentication: \'yes\'\n GSSAPICleanupCredentials: \'no\'\n HostKey: [/etc/ssh/ssh_host_rsa_key, /etc/ssh/ssh_host_ecdsa_key, /etc/ssh/ssh_host_ed25519_key]\n PasswordAuthentication: \'no\'\n Subsystem: sftp /usr/libexec/openssh/sftp-server\n SyslogFacility: AUTHPRIV\n UseDNS: \'no\'\n UsePAM: \'yes\'\n UsePrivilegeSeparation: sandbox\n X11Forwarding: \'yes\'\n tripleo::profile::base::sshd::port: 22\n tripleo::profile::base::tuned::profile: \'\'\n tripleo::trusted_cas::ca_map: {}\n vswitch::dpdk::driver_type: vfio-pci\n vswitch::dpdk::host_core_list: \'\'\n vswitch::dpdk::memory_channels: \'4\'\n vswitch::dpdk::pmd_core_list: \'\'\n vswitch::dpdk::socket_mem: \'\'\n vswitch::ovs::enable_hw_offload: false\n role_data_monitoring_subscriptions: []\n role_data_post_update_tasks:\n - block:\n - name: store update level to update_level variable\n set_fact: {odl_update_level: 1}\n - block:\n - {command: systemctl is-active --quiet openvswitch, name: Check service openvswitch\n is running, register: openvswitch_running, tags: common}\n - {name: Delete OVS groups and ports, shell: sudo ovs-ofctl -O Openflow13 del-groups\n br-int; for tun_port in $(ovs-vsctl list-ports br-int | grep \'tun\'); do\n ovs-vsctl del-port br-int $tun_port; done;, when: (step|int == 0) and\n (openvswitch_running.rc == 0)}\n - {name: Stop openvswitch service, service: name=openvswitch state=stopped,\n when: (step|int == 1) and (openvswitch_running.rc == 0)}\n - iptables: chain=OUTPUT 
action=insert protocol=tcp destination_port={{ item\n }} jump=DROP state=absent\n name: Unblock OVS port per compute node.\n when: step|int == 2\n with_items: [6640, 6653, 6633]\n - {name: start openvswitch service, service: name=openvswitch state=started,\n when: step|int == 3}\n when: odl_update_level == 2\n role_data_post_upgrade_tasks:\n - {command: systemctl is-active --quiet openvswitch, name: Check service openvswitch\n is running, register: openvswitch_running, tags: common}\n - {name: Delete OVS groups and ports, shell: sudo ovs-ofctl -O Openflow13 del-groups\n br-int; for tun_port in $(ovs-vsctl list-ports br-int | grep \'tun\'); do ovs-vsctl\n del-port br-int $tun_port; done;, when: (step|int == 0) and (openvswitch_running.rc\n == 0)}\n - {name: Stop openvswitch service, service: name=openvswitch state=stopped, when: (step|int\n == 1) and (openvswitch_running.rc == 0)}\n - iptables: chain=OUTPUT action=insert protocol=tcp destination_port={{ item }}\n jump=DROP state=absent\n name: Unblock OVS port per compute node.\n when: step|int == 2\n with_items: [6640, 6653, 6633]\n - {name: start openvswitch service, service: name=openvswitch state=started, when: step|int\n == 3}\n role_data_pre_upgrade_rolling_tasks: []\n role_data_puppet_config:\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-07-13.1\',\n config_volume: ceilometer, puppet_tags: ceilometer_config, step_config: \'include\n ::tripleo::profile::base::ceilometer::agent::polling\n\n \'}\n - config_image: 192.168.24.1:8787/rhosp13/openstack-iscsid:2018-07-13.1\n config_volume: iscsid\n puppet_tags: iscsid_config\n step_config: include ::tripleo::profile::base::iscsid\n volumes: [\'/etc/iscsi:/etc/iscsi\']\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-compute:2018-07-13.1\',\n config_volume: nova_libvirt, puppet_tags: \'nova_config,nova_paste_api_ini\',\n step_config: \'# TODO(emilien): figure how to deal with libvirt profile.\n\n # We\'\'ll probably 
treat it like we do with Neutron plugins.\n\n # Until then, just include it in the default nova-compute role.\n\n include tripleo::profile::base::nova::compute::libvirt\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-compute:2018-07-13.1\',\n config_volume: nova_libvirt, puppet_tags: \'libvirtd_config,nova_config,file,libvirt_tls_password\',\n step_config: \'include tripleo::profile::base::nova::libvirt\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-compute:2018-07-13.1\',\n config_volume: nova_libvirt, step_config: \'include ::tripleo::profile::base::sshd\n\n include tripleo::profile::base::nova::migration::target\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-cron:2018-07-13.1\', config_volume: crond,\n step_config: \'include ::tripleo::profile::base::logging::logrotate\'}\n role_data_service_config_settings: {}\n role_data_service_metadata_settings: null\n role_data_service_names: [ca_certs, ceilometer_agent_compute, docker, iscsid,\n kernel, mysql_client, nova_compute, nova_libvirt, nova_migration_target, ntp,\n logrotate_crond, opendaylight_ovs, snmp, sshd, timezone, tripleo_firewall, tripleo_packages,\n tuned]\n role_data_step_config: "# Copyright 2014 Red Hat, Inc.\\n# All Rights Reserved.\\n\\\n #\\n# Licensed under the Apache License, Version 2.0 (the \\"License\\"); you may\\n\\\n # not use this file except in compliance with the License. You may obtain\\n\\\n # a copy of the License at\\n#\\n# http://www.apache.org/licenses/LICENSE-2.0\\n\\\n #\\n# Unless required by applicable law or agreed to in writing, software\\n#\\\n \\ distributed under the License is distributed on an \\"AS IS\\" BASIS, WITHOUT\\n\\\n # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the\\n\\\n # License for the specific language governing permissions and limitations\\n\\\n # under the License.\\n\\n# Common config, from tripleo-heat-templates/puppet/manifests/overcloud_common.pp\\n\\\n # The content of this file will be used to generate\\n# the puppet manifests\\\n \\ for all roles, the placeholder\\n# Compute will be replaced by \'controller\',\\\n \\ \'blockstorage\',\\n# \'cephstorage\' and all the deployed roles.\\n\\nif hiera(\'step\')\\\n \\ >= 4 {\\n hiera_include(\'Compute_classes\', [])\\n}\\n\\n$package_manifest_name\\\n \\ = join([\'/var/lib/tripleo/installed-packages/overcloud_Compute\', hiera(\'step\')])\\n\\\n package_manifest{$package_manifest_name: ensure => present}\\n\\n# End of overcloud_common.pp\\n\\\n \\ninclude ::tripleo::trusted_cas\\ninclude ::tripleo::profile::base::docker\\n\\\n \\ninclude ::tripleo::profile::base::kernel\\ninclude ::tripleo::profile::base::database::mysql::client\\n\\\n include ::tripleo::profile::base::time::ntp\\ninclude tripleo::profile::base::neutron::plugins::ovs::opendaylight\\n\\\n \\ninclude ::tripleo::profile::base::snmp\\n\\ninclude ::tripleo::profile::base::sshd\\n\\\n \\ninclude ::timezone\\ninclude ::tripleo::firewall\\n\\ninclude ::tripleo::packages\\n\\\n \\ninclude ::tripleo::profile::base::tuned"\n role_data_update_tasks:\n - block:\n - {failed_when: false, name: Detect if puppet on the docker profile would restart\n the service, register: puppet_docker_noop_output, shell: "puppet apply --noop\\\n \\ --summarize --detailed-exitcodes --verbose \\\\\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules\\\n \\ \\\\\\n --color=false -e \\"class { \'tripleo::profile::base::docker\': step\\\n \\ => 1, }\\" 2>&1 | \\\\\\nawk -F \\":\\" \'/Out of sync:/ { print $2}\'\\n"}\n - {changed_when: docker_check_update.rc == 100, failed_when: \'docker_check_update.rc\n not in [0, 100]\', name: Is docker going to be updated, register: 
docker_check_update,\n shell: yum check-update docker}\n - {name: Set docker_rpm_needs_update fact, set_fact: \'docker_rpm_needs_update={{\n docker_check_update.rc == 100 }}\'}\n - {name: Set puppet_docker_is_outofsync fact, set_fact: \'puppet_docker_is_outofsync={{\n puppet_docker_noop_output.stdout|trim|int >= 1 }}\'}\n - {name: Stop all containers, shell: docker ps -q | xargs --no-run-if-empty\n -n1 docker stop, when: puppet_docker_is_outofsync or docker_rpm_needs_update}\n - name: Stop docker\n service: {name: docker, state: stopped}\n when: puppet_docker_is_outofsync or docker_rpm_needs_update\n - {name: Update the docker package, when: docker_rpm_needs_update, yum: name=docker\n state=latest update_cache=yes}\n - {changed_when: puppet_docker_apply.rc == 2, failed_when: \'puppet_docker_apply.rc\n not in [0, 2]\', name: Apply puppet which will start the service again, register: puppet_docker_apply,\n shell: "puppet apply --detailed-exitcodes --verbose \\\\\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules\\\n \\ \\\\\\n -e \\"class { \'tripleo::profile::base::docker\': step => 1, }\\"\\n"}\n when: step|int == 2\n - block:\n - name: store update level to update_level variable\n set_fact: {odl_update_level: 1}\n name: Get ODL update level\n - block:\n - iptables: chain=OUTPUT action=insert protocol=tcp destination_port={{ item\n }} jump=DROP\n name: Block connections to ODL.\n when: step|int == 0\n with_items: [6640, 6653, 6633]\n name: Run L2 update tasks that are similar to upgrade_tasks when update level\n is 2\n when: odl_update_level == 2\n - {name: Check for existing yum.pid, register: yum_pid_file, stat: path=/var/run/yum.pid,\n when: step|int == 0 or step|int == 3}\n - {fail: msg="ERROR existing yum.pid detected - can\'t continue! Please ensure\n there is no other package update process for the duration of the minor update\n workflow. 
Exiting.", name: Exit if existing yum process, when: (step|int ==\n 0 or step|int == 3) and yum_pid_file.stat.exists}\n - {name: Update all packages, when: step == "3", yum: name=* state=latest update_cache=yes}\n role_data_upgrade_batch_tasks: []\n role_data_upgrade_tasks:\n - {command: systemctl is-enabled --quiet openstack-ceilometer-compute, ignore_errors: true,\n name: Check if openstack-ceilometer-compute is deployed, register: openstack_ceilometer_compute_enabled,\n tags: common}\n - {command: systemctl is-enabled --quiet openstack-ceilometer-polling, ignore_errors: true,\n name: Check if openstack-ceilometer-polling is deployed, register: openstack_ceilometer_polling_enabled,\n tags: common}\n - command: systemctl is-active --quiet openstack-ceilometer-compute\n name: \'PreUpgrade step0,validation: Check service openstack-ceilometer-compute\n is running\'\n tags: validation\n when: [step|int == 0, openstack_ceilometer_compute_enabled.rc == 0]\n - command: systemctl is-active --quiet openstack-ceilometer-polling\n name: \'PreUpgrade step0,validation: Check service openstack-ceilometer-polling\n is running\'\n tags: validation\n when: [step|int == 0, openstack_ceilometer_polling_enabled.rc == 0]\n - name: Stop and disable ceilometer compute agent\n service: name=openstack-ceilometer-compute state=stopped enabled=no\n when: [step|int == 2, openstack_ceilometer_compute_enabled.rc|default(\'\') ==\n 0]\n - name: Stop and disable ceilometer polling agent\n service: name=openstack-ceilometer-polling state=stopped enabled=no\n when: [step|int == 2, openstack_ceilometer_polling_enabled.rc|default(\'\') ==\n 0]\n - name: Set fact for removal of openstack-ceilometer-compute and polling package\n set_fact: {remove_ceilometer_compute_polling_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-ceilometer-compute package if operator requests it\n when: [step|int == 2, remove_ceilometer_compute_polling_package|bool]\n yum: 
name=openstack-ceilometer-compute state=removed\n - ignore_errors: true\n name: Remove openstack-ceilometer-polling package if operator requests it\n when: [step|int == 2, remove_ceilometer_compute_polling_package|bool]\n yum: name=openstack-ceilometer-polling state=removed\n - {name: Install docker packages on upgrade if missing, when: step|int == 3, yum: name=docker\n state=latest}\n - {command: systemctl is-enabled --quiet iscsid, ignore_errors: true, name: Check\n if iscsid service is deployed, register: iscsid_enabled, tags: common}\n - command: systemctl is-active --quiet iscsid\n name: \'PreUpgrade step0,validation: Check if iscsid is running\'\n tags: validation\n when: [step|int == 0, iscsid_enabled.rc == 0]\n - name: Stop and disable iscsid service\n service: name=iscsid state=stopped enabled=no\n when: [step|int == 2, iscsid_enabled.rc == 0]\n - {command: systemctl is-enabled --quiet iscsid.socket, ignore_errors: true, name: Check\n if iscsid.socket service is deployed, register: iscsid_socket_enabled, tags: common}\n - command: systemctl is-active --quiet iscsid.socket\n name: \'PreUpgrade step0,validation: Check if iscsid.socket is running\'\n tags: validation\n when: [step|int == 0, iscsid_socket_enabled.rc == 0]\n - name: Stop and disable iscsid.socket service\n service: name=iscsid.socket state=stopped enabled=no\n when: [step|int == 2, iscsid_socket_enabled.rc == 0]\n - {command: systemctl is-enabled --quiet openstack-nova-compute, ignore_errors: true,\n name: Check if nova_compute is deployed, register: nova_compute_enabled, tags: common}\n - {ini_file: dest=/etc/nova/nova.conf section=upgrade_levels option=compute value=,\n name: Set compute upgrade level to auto, when: step|int == 1}\n - command: systemctl is-active --quiet openstack-nova-compute\n name: \'PreUpgrade step0,validation: Check service openstack-nova-compute is\n running\'\n tags: validation\n when: [step|int == 0, nova_compute_enabled.rc == 0]\n - name: Stop and disable 
nova-compute service\n service: name=openstack-nova-compute state=stopped enabled=no\n when: [step|int == 2, nova_compute_enabled.rc == 0]\n - name: Set fact for removal of openstack-nova-compute package\n set_fact: {remove_nova_compute_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-nova-compute package if operator requests it\n when: [step|int == 2, remove_nova_compute_package|bool]\n yum: name=openstack-nova-compute state=removed\n - {command: systemctl is-enabled --quiet libvirtd, ignore_errors: true, name: Check\n if nova_libvirt is deployed, register: nova_libvirt_enabled, tags: common}\n - command: systemctl is-active --quiet libvirtd\n name: \'PreUpgrade step0,validation: Check service libvirtd is running\'\n tags: validation\n when: [step|int == 0, nova_libvirt_enabled.rc == 0]\n - name: Stop and disable libvirtd service\n service: name=libvirtd state=stopped enabled=no\n when: [step|int == 2, nova_libvirt_enabled.rc == 0]\n - {ignore_errors: true, name: Check openvswitch version., register: ovs_version,\n shell: \'rpm -qa | awk -F- \'\'/^openvswitch-2/{print $2 "-" $3}\'\'\', when: step|int\n == 2}\n - {ignore_errors: true, name: Check openvswitch packaging., register: ovs_packaging_issue,\n shell: \'rpm -q --scripts openvswitch | awk \'\'/postuninstall/,/*/\'\' | grep -q\n "systemctl.*try-restart"\', when: step|int == 2}\n - block:\n - file: {path: /root/OVS_UPGRADE, state: absent}\n name: \'Ensure empty directory: emptying.\'\n - file: {group: root, mode: 488, owner: root, path: /root/OVS_UPGRADE, state: directory}\n name: \'Ensure empty directory: creating.\'\n - {command: yum makecache, name: Make yum cache.}\n - {command: yumdownloader --destdir /root/OVS_UPGRADE --resolve openvswitch,\n name: Download OVS packages.}\n - {name: Get rpm list for manual upgrade of OVS., register: ovs_list_of_rpms,\n shell: ls -1 /root/OVS_UPGRADE/*.rpm}\n - args: {chdir: /root/OVS_UPGRADE}\n name: Manual upgrade of OVS\n shell: 
\'rpm -U --test {{item}} 2>&1 | grep "already installed" || \\\n\n rpm -U --replacepkgs --notriggerun --nopostun {{item}};\n\n \'\n with_items: [\'{{ovs_list_of_rpms.stdout_lines}}\']\n when: [step|int == 2, \'\'\'2.5.0-14\'\' in ovs_version.stdout|default(\'\'\'\') or ovs_packaging_issue|default(false)|succeeded\']\n - {command: systemctl is-enabled openvswitch, ignore_errors: true, name: Check\n if openvswitch is deployed, register: openvswitch_enabled, tags: common}\n - command: systemctl is-active --quiet openvswitch\n name: \'PreUpgrade step0,validation: Check service openvswitch is running\'\n tags: validation\n when: [step|int == 0, openvswitch_enabled.rc == 0]\n - name: Stop openvswitch service\n service: name=openvswitch state=stopped\n when: [step|int == 1, openvswitch_enabled.rc == 0]\n - block:\n - iptables: chain=OUTPUT action=insert protocol=tcp destination_port={{ item\n }} jump=DROP\n name: Block connections to ODL.\n when: step|int == 0\n with_items: [6640, 6653, 6633]\n name: ODL container L2 update and upgrade tasks\n - {name: Stop snmp service, service: name=snmpd state=stopped, when: step|int\n == 1}\n - args: {creates: /etc/sysconfig/ip6tables.n-o-upgrade}\n name: blank ipv6 rule before activating ipv6 firewall.\n shell: cat /etc/sysconfig/ip6tables > /etc/sysconfig/ip6tables.n-o-upgrade;\n cat</dev/null>/etc/sysconfig/ip6tables\n when: step|int == 3\n - {name: Check yum for rpm-python present, register: rpm_python_check, when: step|int\n == 0, yum: name=rpm-python state=present}\n - fail: msg="rpm-python package was not present before this run! 
Check environment\n before re-running"\n name: Fail when rpm-python wasn\'t present\n when: [step|int == 0, rpm_python_check.changed != false]\n - {name: Check for os-net-config upgrade, register: os_net_config_need_upgrade,\n shell: \'yum check-upgrade | awk \'\'/os-net-config/{print}\'\'\', when: step|int\n == 3}\n - {ignore_errors: true, name: Check that os-net-config has configuration, register: os_net_config_has_config,\n shell: test -s /etc/os-net-config/config.json, when: step|int == 3}\n - block:\n - {name: Upgrade os-net-config, yum: name=os-net-config state=latest}\n - {changed_when: os_net_config_upgrade.rc == 2, command: os-net-config --no-activate\n -c /etc/os-net-config/config.json -v --detailed-exit-codes, failed_when: \'os_net_config_upgrade.rc\n not in [0,2]\', name: take new os-net-config parameters into account now,\n register: os_net_config_upgrade}\n when: [step|int == 3, os_net_config_need_upgrade.stdout, os_net_config_has_config.rc\n == 0]\n - {name: Update all packages, when: step|int == 3, yum: name=* state=latest}\n role_data_workflow_tasks: {}\n role_name: Compute\novercloud:\n children:\n Compute: {}\n Controller: {}\n vars: {ctlplane_vip: 192.168.24.10, external_vip: 10.0.0.106, internal_api_vip: 172.17.1.10,\n redis_vip: 172.17.1.17, storage_mgmt_vip: 172.17.4.17, storage_vip: 172.17.3.10}\naodh_evaluator:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nkernel:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nneutron_metadata:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\npacemaker:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nnova_placement:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nsnmp:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nheat_api:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ncinder_api:\n children:\n Controller: {}\n vars: 
{ansible_ssh_user: heat-admin}\nswift_proxy:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\naodh_listener:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nswift_ringbuilder:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nneutron_dhcp:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ngnocchi_api:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ntimezone:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nceilometer_agent_central:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nheat_api_cloudwatch_disabled:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nneutron_plugin_ml2_odl:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\naodh_notifier:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ntripleo_firewall:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nswift_storage:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nredis:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ngnocchi_statsd:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\niscsid:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nnova_conductor:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nmysql_client:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nnova_consoleauth:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nglance_api:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nkeystone:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ncinder_volume:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nopendaylight_ovs:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nceilometer_collector_disabled:\n 
children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nceilometer_agent_notification:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nmemcached:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nmongodb_disabled:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nnova_api:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\naodh_api:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nnova_metadata:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nheat_engine:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nntp:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nceilometer_expirer_disabled:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nceilometer_api_disabled:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nnova_migration_target:\n children:\n Compute: {}\n vars: {ansible_ssh_user: heat-admin}\ncinder_scheduler:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ngnocchi_metricd:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ntripleo_packages:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nnova_scheduler:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nnova_compute:\n children:\n Compute: {}\n vars: {ansible_ssh_user: heat-admin}\nopendaylight_api:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nlogrotate_crond:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nhaproxy:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nsshd:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nmysql:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nceilometer_agent_compute:\n children:\n Compute: {}\n vars: {ansible_ssh_user: 
heat-admin}\nnova_libvirt:\n children:\n Compute: {}\n vars: {ansible_ssh_user: heat-admin}\nrabbitmq:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ntuned:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\npanko_api:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nhorizon:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nneutron_api:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nca_certs:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nheat_api_cfn:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ndocker:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nnova_vnc_proxy:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nclustercheck:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nglance_registry_disabled:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\n_meta:\n hostvars: {}\n', u'work_dir': u'/var/lib/mistral', u'verbosity': 1, u'skip_tags': u'', u'playbook': u'update_steps_playbook.yaml', u'ansible_extra_env_variables': {u'ANSIBLE_HOST_KEY_CHECKING': u'False', u'ANSIBLE_LOG_PATH': u'/var/log/mistral/package_update.log'}, u'module_path': u'/usr/share/ansible-modules', u'nodes': u'Controller', u'node_user': u'heat-admin', u'ansible_queue_name': u'update'}, u'id': u'c640f815-1b36-4d4b-aacc-4946202eab6f'}} >Waiting for messages on queue 'update' with no timeout. 
>Update failed with: {u'status': u'FAILED', u'execution': {u'name': u'tripleo.package_update.v1.update_nodes', u'created_at': u'2018-08-02 15:33:10', u'updated_at': u'2018-08-02 15:33:10', u'spec': {u'tasks': {u'node_update': {u'name': u'node_update', u'on-error': u'node_update_failed', u'on-success': [{u'node_update_passed': u'<% task().result.returncode = 0 %>'}, {u'node_update_failed': u'<% task().result.returncode != 0 %>'}], u'publish': {u'output': u'<% task().result %>'}, u'version': u'2.0', u'action': u'tripleo.ansible-playbook', u'input': {u'remote_user': u'<% $.node_user %>', u'become_user': u'root', u'ssh_private_key': u'<% $.private_key %>', u'verbosity': u'<% $.verbosity %>', u'queue_name': u'<% $.ansible_queue_name %>', u'extra_env_variables': u'<% $.ansible_extra_env_variables %>', u'skip_tags': u'<% $.skip_tags %>', u'inventory': u'<% $.inventory_file %>', u'execution_id': u'<% execution().id %>', u'module_path': u'<% $.module_path %>', u'become': True, u'trash_output': True, u'limit_hosts': u'<% $.nodes %>', u'playbook': u'<% $.work_dir %>/<% execution().id %>/<% $.playbook %>'}, u'type': u'direct'}, u'get_private_key': {u'name': u'get_private_key', u'on-success': u'node_update', u'publish': {u'private_key': u'<% task().result %>'}, u'version': u'2.0', u'action': u'tripleo.validations.get_privkey', u'type': u'direct'}, u'node_update_failed': {u'version': u'2.0', u'type': u'direct', u'name': u'node_update_failed', u'publish': {u'status': u'FAILED', u'message': u'Failed to update nodes - <% $.nodes %>, please see the logs.'}, u'on-success': u'notify_zaqar'}, u'node_update_passed': {u'version': u'2.0', u'type': u'direct', u'name': u'node_update_passed', u'publish': {u'status': u'SUCCESS', u'message': u'Updated nodes - <% $.nodes %>'}, u'on-success': u'notify_zaqar'}, u'notify_zaqar': {u'retry': u'count=5 delay=1', u'name': u'notify_zaqar', u'on-success': [{u'fail': u'<% $.get(\'status\') = "FAILED" %>'}], u'version': u'2.0', u'action': 
u'zaqar.queue_post', u'input': {u'queue_name': u'<% $.ansible_queue_name %>', u'messages': {u'body': {u'type': u'tripleo.package_update.v1.update_nodes', u'payload': {u'status': u'<% $.status %>', u'execution': u'<% execution() %>'}}}}, u'type': u'direct'}, u'download_config': {u'name': u'download_config', u'on-error': u'node_update_failed', u'on-success': u'get_private_key', u'version': u'2.0', u'action': u'tripleo.config.download_config', u'input': {u'work_dir': u'<% $.work_dir %>/<% execution().id %>'}, u'type': u'direct'}}, u'name': u'update_nodes', u'tags': [u'tripleo-common-managed'], u'version': u'2.0', u'input': [{u'node_user': u'heat-admin'}, u'nodes', u'playbook', u'inventory_file', {u'ansible_queue_name': u'tripleo'}, {u'module_path': u'/usr/share/ansible-modules'}, {u'ansible_extra_env_variables': {u'ANSIBLE_HOST_KEY_CHECKING': u'False', u'ANSIBLE_LOG_PATH': u'/var/log/mistral/package_update.log'}}, {u'verbosity': 1}, {u'work_dir': u'/var/lib/mistral'}, {u'skip_tags': u''}], u'description': u'Take a container and perform an update nodes by nodes'}, u'params': {u'namespace': u'', u'env': {}}, u'input': {u'inventory_file': u'undercloud:\n hosts:\n localhost: {}\n vars:\n ansible_connection: local\n ansible_remote_tmp: /tmp/ansible-${USER}\n auth_url: https://192.168.24.2:13000/\n cacert: null\n os_auth_token: gAAAAABbYyQwcqm_KZ3Mg8xDAsg71IlxG93IOzByINN0KeI6Vc83T2jULTYIp6zMP7pKZUesaUQ-lTHxv1i4rwEoAgQnfXF8BAFyqLTW8h2RfCd28nuoOdzHgs7JA5FVn13hQCTvT3AY3ON2pt5fwVTG7WoONSyKuhj2-3L5O5vBi0WeZRcjIUs\n overcloud_admin_password: XxK3Mh947xh2TVyaJJWb7myna\n overcloud_horizon_url: http://10.0.0.106:80/dashboard\n overcloud_keystone_url: http://10.0.0.106:5000/\n plan: overcloud\n project_name: admin\n undercloud_service_list: [openstack-nova-compute, openstack-heat-engine, openstack-ironic-conductor,\n openstack-swift-container, openstack-swift-object, openstack-mistral-engine]\n undercloud_swift_url: 
https://192.168.24.2:13808/v1/AUTH_aed387cf82184fb788209f67beef84fe\n username: admin\ncontroller-0:\n hosts:\n 192.168.24.9: {}\n vars:\n ctlplane_ip: 192.168.24.9\n deploy_server_id: 0d25b3fa-5154-47be-9ced-05bdd8d3ca43\n enabled_networks: [management, storage, ctlplane, external, internal_api, storage_mgmt,\n tenant]\n external_ip: 10.0.0.109\n internal_api_ip: 172.17.1.14\n management_ip: 192.168.24.9\n storage_ip: 172.17.3.21\n storage_mgmt_ip: 172.17.4.10\n tenant_ip: 172.17.2.18\ncontroller-1:\n hosts:\n 192.168.24.8: {}\n vars:\n ctlplane_ip: 192.168.24.8\n deploy_server_id: 629a806d-c1d7-41c2-aafb-90a857fb3598\n enabled_networks: [management, storage, ctlplane, external, internal_api, storage_mgmt,\n tenant]\n external_ip: 10.0.0.108\n internal_api_ip: 172.17.1.11\n management_ip: 192.168.24.8\n storage_ip: 172.17.3.20\n storage_mgmt_ip: 172.17.4.16\n tenant_ip: 172.17.2.15\ncontroller-2:\n hosts:\n 192.168.24.11: {}\n vars:\n ctlplane_ip: 192.168.24.11\n deploy_server_id: 9ed2aa47-631c-4e3e-b4d6-29ba5af7602a\n enabled_networks: [management, storage, ctlplane, external, internal_api, storage_mgmt,\n tenant]\n external_ip: 10.0.0.103\n internal_api_ip: 172.17.1.16\n management_ip: 192.168.24.11\n storage_ip: 172.17.3.19\n storage_mgmt_ip: 172.17.4.20\n tenant_ip: 172.17.2.13\nController:\n children:\n controller-0: {}\n controller-1: {}\n controller-2: {}\n vars:\n ansible_ssh_user: heat-admin\n bootstrap_server_id: 0d25b3fa-5154-47be-9ced-05bdd8d3ca43\n role_data_cellv2_discovery: false\n role_data_config_settings: {}\n role_data_deploy_steps_tasks: []\n role_data_docker_config:\n step_1:\n cinder_volume_image_tag:\n command: [/bin/bash, -c, \'/usr/bin/docker tag \'\'192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-07-13.1\'\'\n \'\'192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest\'\'\']\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: 
[\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/dev/shm:/dev/shm:rw\', \'/etc/sysconfig/docker:/etc/sysconfig/docker:ro\',\n \'/usr/bin:/usr/bin:ro\', \'/var/run/docker.sock:/var/run/docker.sock:rw\']\n haproxy_image_tag:\n command: [/bin/bash, -c, \'/usr/bin/docker tag \'\'192.168.24.1:8787/rhosp13/openstack-haproxy:2018-07-13.1\'\'\n \'\'192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest\'\'\']\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-haproxy:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/dev/shm:/dev/shm:rw\', \'/etc/sysconfig/docker:/etc/sysconfig/docker:ro\',\n \'/usr/bin:/usr/bin:ro\', \'/var/run/docker.sock:/var/run/docker.sock:rw\']\n memcached:\n command: [/bin/bash, -c, \'source /etc/sysconfig/memcached; /usr/bin/memcached\n -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS\']\n image: 192.168.24.1:8787/rhosp13/openstack-memcached:2018-07-13.1\n net: host\n privileged: false\n restart: always\n start_order: 0\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro\']\n mysql_bootstrap:\n command: [bash, -ec, \'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\n\n echo -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\n\n kolla_set_configs\n\n sudo -u mysql -E kolla_extend_start\n\n mysqld_safe --skip-networking --wsrep-on=OFF &\n\n timeout ${DB_MAX_TIMEOUT} /bin/bash -c \'\'until mysqladmin -uroot 
-p"${DB_ROOT_PASSWORD}"\n ping 2>/dev/null; do sleep 1; done\'\'\n\n mysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'\'clustercheck\'\'@\'\'localhost\'\'\n IDENTIFIED BY \'\'${DB_CLUSTERCHECK_PASSWORD}\'\';"\n\n mysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'\'clustercheck\'\'@\'\'localhost\'\'\n WITH GRANT OPTION;"\n\n timeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}"\n shutdown\']\n detach: false\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS, KOLLA_BOOTSTRAP=True, DB_MAX_TIMEOUT=60,\n DB_CLUSTERCHECK_PASSWORD=Y842JReAdAaXZwRHfsjTtdqgg, DB_ROOT_PASSWORD=7xm4XA2YHK]\n image: 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json\',\n \'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro\',\n \'/var/lib/mysql:/var/lib/mysql\']\n mysql_data_ownership:\n command: [chown, -R, \'mysql:\', /var/lib/mysql]\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\n net: host\n start_order: 0\n user: root\n volumes: [\'/var/lib/mysql:/var/lib/mysql\']\n mysql_image_tag:\n command: [/bin/bash, -c, \'/usr/bin/docker tag \'\'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\'\'\n \'\'192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest\'\'\']\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\n net: host\n start_order: 2\n user: root\n volumes: 
[\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/dev/shm:/dev/shm:rw\', \'/etc/sysconfig/docker:/etc/sysconfig/docker:ro\',\n \'/usr/bin:/usr/bin:ro\', \'/var/run/docker.sock:/var/run/docker.sock:rw\']\n opendaylight_api:\n detach: true\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-opendaylight:2018-07-13.1\n net: host\n privileged: false\n restart: unless-stopped\n start_order: 0\n user: odl\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/opendaylight_api.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/opendaylight/:/var/lib/kolla/config_files/src:ro\',\n \'/var/lib/opendaylight/journal:/opt/opendaylight/journal\', \'/var/lib/opendaylight/snapshots:/opt/opendaylight/snapshots\',\n \'/var/lib/opendaylight/data:/opt/opendaylight/data\', \'/var/log/containers/opendaylight:/opt/opendaylight/data/log\']\n rabbitmq_bootstrap:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS, KOLLA_BOOTSTRAP=True, RABBITMQ_CLUSTER_COOKIE=wMGzfECCXTCuVVgpTMBH]\n image: 192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-07-13.1\n net: host\n privileged: false\n start_order: 0\n volumes: [\'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro\',\n \'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\', \'/var/lib/rabbitmq:/var/lib/rabbitmq\']\n 
rabbitmq_image_tag:\n command: [/bin/bash, -c, \'/usr/bin/docker tag \'\'192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-07-13.1\'\'\n \'\'192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest\'\'\']\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/dev/shm:/dev/shm:rw\', \'/etc/sysconfig/docker:/etc/sysconfig/docker:ro\',\n \'/usr/bin:/usr/bin:ro\', \'/var/run/docker.sock:/var/run/docker.sock:rw\']\n redis_image_tag:\n command: [/bin/bash, -c, \'/usr/bin/docker tag \'\'192.168.24.1:8787/rhosp13/openstack-redis:2018-07-13.1\'\'\n \'\'192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest\'\'\']\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-redis:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/dev/shm:/dev/shm:rw\', \'/etc/sysconfig/docker:/etc/sysconfig/docker:ro\',\n \'/usr/bin:/usr/bin:ro\', \'/var/run/docker.sock:/var/run/docker.sock:rw\']\n step_2:\n aodh_init_log:\n command: [/bin/bash, -c, \'chown -R aodh:aodh /var/log/aodh\']\n image: 192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-07-13.1\n user: root\n volumes: [\'/var/log/containers/aodh:/var/log/aodh\', \'/var/log/containers/httpd/aodh-api:/var/log/httpd\']\n cinder_api_init_logs:\n command: [/bin/bash, -c, \'chown -R cinder:cinder /var/log/cinder\']\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-07-13.1\n privileged: false\n user: root\n volumes: [\'/var/log/containers/cinder:/var/log/cinder\', \'/var/log/containers/httpd/cinder-api:/var/log/httpd\']\n cinder_scheduler_init_logs:\n command: [/bin/bash, -c, \'chown -R cinder:cinder /var/log/cinder\']\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-scheduler:2018-07-13.1\n privileged: false\n user: root\n volumes: [\'/var/log/containers/cinder:/var/log/cinder\']\n 
clustercheck:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\n net: host\n restart: always\n start_order: 1\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json\',\n \'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro\',\n \'/var/lib/mysql:/var/lib/mysql\']\n create_dnsmasq_wrapper:\n command: [/docker_puppet_apply.sh, \'4\', file, \'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers\']\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-neutron-dhcp-agent:2018-07-13.1\n net: host\n pid: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro\',\n \'/etc/puppet:/tmp/puppet-etc:ro\', \'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro\',\n \'/run/openvswitch:/run/openvswitch\', \'/var/lib/neutron:/var/lib/neutron\']\n glance_init_logs:\n command: [/bin/bash, -c, \'chown -R glance:glance /var/log/glance\']\n image: 
192.168.24.1:8787/rhosp13/openstack-glance-api:2018-07-13.1\n privileged: false\n user: root\n volumes: [\'/var/log/containers/glance:/var/log/glance\']\n gnocchi_init_lib:\n command: [/bin/bash, -c, \'chown -R gnocchi:gnocchi /var/lib/gnocchi\']\n image: 192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-07-13.1\n user: root\n volumes: [\'/var/lib/gnocchi:/var/lib/gnocchi:rw\']\n gnocchi_init_log:\n command: [/bin/bash, -c, \'chown -R gnocchi:gnocchi /var/log/gnocchi\']\n image: 192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-07-13.1\n user: root\n volumes: [\'/var/log/containers/gnocchi:/var/log/gnocchi\', \'/var/log/containers/httpd/gnocchi-api:/var/log/httpd\']\n haproxy_init_bundle:\n command: [/docker_puppet_apply.sh, \'2\', \'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation\',\n \'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle\',\n --debug]\n detach: false\n environment: [TRIPLEO_DEPLOY_IDENTIFIER=1532514654]\n image: 192.168.24.1:8787/rhosp13/openstack-haproxy:2018-07-13.1\n net: host\n privileged: true\n start_order: 3\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro\',\n \'/etc/puppet:/tmp/puppet-etc:ro\', \'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro\',\n \'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro\', \'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro\',\n 
\'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro\', \'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro\',\n \'/etc/sysconfig:/etc/sysconfig:rw\', \'/usr/libexec/iptables:/usr/libexec/iptables:ro\',\n \'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\']\n haproxy_restart_bundle:\n command: [/usr/bin/bootstrap_host_exec, haproxy, if /usr/sbin/pcs resource\n show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600\n haproxy-bundle; echo "haproxy-bundle restart invoked"; fi]\n config_volume: haproxy\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-haproxy:2018-07-13.1\n net: host\n start_order: 2\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\',\n \'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro\']\n heat_init_log:\n command: [/bin/bash, -c, \'chown -R heat:heat /var/log/heat\']\n image: 192.168.24.1:8787/rhosp13/openstack-heat-engine:2018-07-13.1\n user: root\n volumes: [\'/var/log/containers/heat:/var/log/heat\']\n horizon_fix_perms:\n command: [/bin/bash, -c, \'touch /var/log/horizon/horizon.log && chown -R\n apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard\']\n image: 192.168.24.1:8787/rhosp13/openstack-horizon:2018-07-13.1\n user: root\n volumes: 
[\'/var/log/containers/horizon:/var/log/horizon\', \'/var/log/containers/httpd/horizon:/var/log/httpd\',\n \'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard\']\n keystone_init_log:\n command: [/bin/bash, -c, \'chown -R keystone:keystone /var/log/keystone\']\n image: 192.168.24.1:8787/rhosp13/openstack-keystone:2018-07-13.1\n start_order: 1\n user: root\n volumes: [\'/var/log/containers/keystone:/var/log/keystone\', \'/var/log/containers/httpd/keystone:/var/log/httpd\']\n mysql_init_bundle:\n command: [/docker_puppet_apply.sh, \'2\', \'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user\',\n \'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle\',\n --debug]\n detach: false\n environment: [TRIPLEO_DEPLOY_IDENTIFIER=1532514654]\n image: 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro\',\n \'/etc/puppet:/tmp/puppet-etc:ro\', \'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\',\n \'/var/lib/mysql:/var/lib/mysql:rw\']\n mysql_restart_bundle:\n command: [/usr/bin/bootstrap_host_exec, mysql, if /usr/sbin/pcs resource\n show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 
galera-bundle;\n echo "galera-bundle restart invoked"; fi]\n config_volume: mysql\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\n net: host\n start_order: 0\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\',\n \'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro\']\n neutron_init_logs:\n command: [/bin/bash, -c, \'chown -R neutron:neutron /var/log/neutron\']\n image: 192.168.24.1:8787/rhosp13/openstack-neutron-server-opendaylight:2018-07-13.1\n privileged: false\n user: root\n volumes: [\'/var/log/containers/neutron:/var/log/neutron\', \'/var/log/containers/httpd/neutron-api:/var/log/httpd\']\n nova_api_init_logs:\n command: [/bin/bash, -c, \'chown -R nova:nova /var/log/nova\']\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n privileged: false\n user: root\n volumes: [\'/var/log/containers/nova:/var/log/nova\', \'/var/log/containers/httpd/nova-api:/var/log/httpd\']\n nova_metadata_init_log:\n command: [/bin/bash, -c, \'chown -R nova:nova /var/log/nova\']\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n privileged: false\n user: root\n volumes: [\'/var/log/containers/nova:/var/log/nova\']\n nova_placement_init_log:\n command: [/bin/bash, -c, \'chown -R nova:nova /var/log/nova\']\n image: 192.168.24.1:8787/rhosp13/openstack-nova-placement-api:2018-07-13.1\n start_order: 1\n user: root\n volumes: [\'/var/log/containers/nova:/var/log/nova\', 
\'/var/log/containers/httpd/nova-placement:/var/log/httpd\']\n panko_init_log:\n command: [/bin/bash, -c, \'chown -R panko:panko /var/log/panko\']\n image: 192.168.24.1:8787/rhosp13/openstack-panko-api:2018-07-13.1\n user: root\n volumes: [\'/var/log/containers/panko:/var/log/panko\', \'/var/log/containers/httpd/panko-api:/var/log/httpd\']\n rabbitmq_init_bundle:\n command: [/docker_puppet_apply.sh, \'2\', \'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready\',\n \'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle\',\n --debug]\n detach: false\n environment: [TRIPLEO_DEPLOY_IDENTIFIER=1532514654]\n image: 192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro\',\n \'/etc/puppet:/tmp/puppet-etc:ro\', \'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\',\n \'/bin/true:/bin/epmd\']\n rabbitmq_restart_bundle:\n command: [/usr/bin/bootstrap_host_exec, rabbitmq, if /usr/sbin/pcs resource\n show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600\n rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi]\n config_volume: rabbitmq\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-07-13.1\n net: host\n 
start_order: 0\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\',\n \'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro\']\n redis_init_bundle:\n command: [/docker_puppet_apply.sh, \'2\', \'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation\',\n \'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle\',\n --debug]\n config_volume: redis_init_bundle\n detach: false\n environment: [TRIPLEO_DEPLOY_IDENTIFIER=1532514654]\n image: 192.168.24.1:8787/rhosp13/openstack-redis:2018-07-13.1\n net: host\n start_order: 2\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro\',\n \'/etc/puppet:/tmp/puppet-etc:ro\', \'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\']\n redis_restart_bundle:\n command: 
[/usr/bin/bootstrap_host_exec, redis, if /usr/sbin/pcs resource\n show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle;\n echo "redis-bundle restart invoked"; fi]\n config_volume: redis\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-redis:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\',\n \'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro\']\n step_3:\n aodh_db_sync:\n command: /usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-07-13.1\n net: host\n privileged: false\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro\', \'/var/log/containers/aodh:/var/log/aodh\',\n \'/var/log/containers/httpd/aodh-api:/var/log/httpd\']\n ceilometer_init_log:\n command: 
[/bin/bash, -c, \'chown -R ceilometer:ceilometer /var/log/ceilometer\']\n image: 192.168.24.1:8787/rhosp13/openstack-ceilometer-notification:2018-07-13.1\n start_order: 0\n user: root\n volumes: [\'/var/log/containers/ceilometer:/var/log/ceilometer\']\n cinder_api_db_sync:\n command: [/usr/bin/bootstrap_host_exec, cinder_api, su cinder -s /bin/bash\n -c \'cinder-manage db sync --bump-versions\']\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-07-13.1\n net: host\n privileged: false\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro\', \'/var/log/containers/cinder:/var/log/cinder\',\n \'/var/log/containers/httpd/cinder-api:/var/log/httpd\']\n cinder_volume_init_logs:\n command: [/bin/bash, -c, \'chown -R cinder:cinder /var/log/cinder\']\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-07-13.1\n privileged: false\n start_order: 0\n user: root\n volumes: [\'/var/log/containers/cinder:/var/log/cinder\']\n glance_api_db_sync:\n command: /usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash\n -c \'/usr/local/bin/kolla_start\'\n detach: false\n environment: [KOLLA_BOOTSTRAP=True, KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-glance-api:2018-07-13.1\n net: host\n privileged: false\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n 
\'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/glance:/var/log/glance\', \'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json\',\n \'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro\',\n \'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro\', \'\', \'\']\n heat_engine_db_sync:\n command: /usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c\n \'heat-manage db_sync\'\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-heat-engine:2018-07-13.1\n net: host\n privileged: false\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/heat:/var/log/heat\', \'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro\']\n horizon:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS, ENABLE_IRONIC=yes, ENABLE_MANILA=yes,\n ENABLE_MISTRAL=yes, ENABLE_OCTAVIA=yes, ENABLE_SAHARA=yes, ENABLE_CLOUDKITTY=no,\n ENABLE_FREEZER=no, ENABLE_FWAAS=no, ENABLE_KARBOR=no, ENABLE_DESIGNATE=no,\n ENABLE_MAGNUM=no, ENABLE_MURANO=no, ENABLE_NEUTRON_LBAAS=no, ENABLE_SEARCHLIGHT=no,\n ENABLE_SENLIN=no, ENABLE_SOLUM=no, 
ENABLE_TACKER=no, ENABLE_TROVE=no,\n ENABLE_WATCHER=no, ENABLE_ZAQAR=no, ENABLE_ZUN=no]\n image: 192.168.24.1:8787/rhosp13/openstack-horizon:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/horizon:/var/log/horizon\', \'/var/log/containers/httpd/horizon:/var/log/httpd\',\n \'\', \'\']\n iscsid:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-iscsid:2018-07-13.1\n net: host\n privileged: true\n restart: always\n start_order: 2\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/dev/:/dev/\', \'/run/:/run/\', \'/sys:/sys\', \'/lib/modules:/lib/modules:ro\',\n \'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro\']\n keystone:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n 
image: 192.168.24.1:8787/rhosp13/openstack-keystone:2018-07-13.1\n net: host\n privileged: false\n restart: always\n start_order: 2\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/keystone:/var/log/keystone\', \'/var/log/containers/httpd/keystone:/var/log/httpd\',\n \'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro\',\n \'\', \'\']\n keystone_bootstrap:\n action: exec\n command: [keystone, /usr/bin/bootstrap_host_exec, keystone, keystone-manage,\n bootstrap, --bootstrap-password, XxK3Mh947xh2TVyaJJWb7myna]\n start_order: 3\n user: root\n keystone_cron:\n command: [/bin/bash, -c, /usr/local/bin/kolla_set_configs && /usr/sbin/crond\n -n]\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-keystone:2018-07-13.1\n net: host\n privileged: false\n restart: always\n start_order: 4\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/keystone:/var/log/keystone\', 
\'/var/log/containers/httpd/keystone:/var/log/httpd\',\n \'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro\']\n keystone_db_sync:\n command: [/usr/bin/bootstrap_host_exec, keystone, /usr/local/bin/kolla_start]\n detach: false\n environment: [KOLLA_BOOTSTRAP=True, KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-keystone:2018-07-13.1\n net: host\n privileged: false\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/keystone:/var/log/keystone\', \'/var/log/containers/httpd/keystone:/var/log/httpd\',\n \'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro\',\n \'\', \'\']\n neutron_db_sync:\n command: [/usr/bin/bootstrap_host_exec, neutron_api, neutron-db-manage,\n upgrade, heads]\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-neutron-server-opendaylight:2018-07-13.1\n net: host\n privileged: false\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n 
\'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/neutron:/var/log/neutron\', \'/var/log/containers/httpd/neutron-api:/var/log/httpd\',\n \'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro\', \'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro\']\n nova_api_db_sync:\n command: /usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c \'/usr/bin/nova-manage\n api_db sync\'\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n net: host\n start_order: 0\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/log/containers/httpd/nova-api:/var/log/httpd\',\n \'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro\']\n nova_api_ensure_default_cell:\n command: /usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n net: host\n start_order: 2\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n 
\'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/log/containers/httpd/nova-api:/var/log/httpd\',\n \'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro\', \'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro\', \'/var/log/containers/nova:/var/log/nova\',\n \'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro\']\n nova_api_map_cell0:\n command: /usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c \'/usr/bin/nova-manage\n cell_v2 map_cell0\'\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/log/containers/httpd/nova-api:/var/log/httpd\',\n \'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro\']\n nova_db_sync:\n command: /usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c \'/usr/bin/nova-manage\n db sync\'\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n net: host\n start_order: 3\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', 
\'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/log/containers/httpd/nova-api:/var/log/httpd\',\n \'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro\']\n nova_placement:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-nova-placement-api:2018-07-13.1\n net: host\n restart: always\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/log/containers/httpd/nova-placement:/var/log/httpd\',\n \'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro\',\n \'\', \'\']\n panko_db_sync:\n command: /usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c\n \'/usr/bin/panko-dbsync \'\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-panko-api:2018-07-13.1\n net: host\n privileged: false\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', 
\'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/panko:/var/log/panko\', \'/var/log/containers/httpd/panko-api:/var/log/httpd\',\n \'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/panko/etc/panko:/etc/panko:ro\']\n swift_copy_rings:\n command: [/bin/bash, -c, cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz\n /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups]\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-07-13.1\n user: root\n volumes: [\'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw\',\n \'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro\']\n swift_setup_srv:\n command: [chown, -R, \'swift:\', /srv/node]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-account:2018-07-13.1\n user: root\n volumes: [\'/srv/node:/srv/node\']\n step_4:\n aodh_api:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n 
\'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/aodh:/var/log/aodh\', \'/var/log/containers/httpd/aodh-api:/var/log/httpd\',\n \'\', \'\']\n aodh_evaluator:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-aodh-evaluator:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/aodh:/var/log/aodh\']\n aodh_listener:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-aodh-listener:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n 
\'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/aodh:/var/log/aodh\']\n aodh_notifier:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-aodh-notifier:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/aodh:/var/log/aodh\']\n ceilometer_agent_central:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n 
\'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/ceilometer:/var/log/ceilometer\']\n ceilometer_agent_notification:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-ceilometer-notification:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro\',\n \'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro\',\n \'/var/log/containers/ceilometer:/var/log/ceilometer\']\n cinder_api:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-07-13.1\n net: host\n privileged: false\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', 
\'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/cinder:/var/log/cinder\', \'/var/log/containers/httpd/cinder-api:/var/log/httpd\',\n \'\', \'\']\n cinder_api_cron:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-07-13.1\n net: host\n privileged: false\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/cinder:/var/log/cinder\', \'/var/log/containers/httpd/cinder-api:/var/log/httpd\']\n cinder_scheduler:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-scheduler:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', 
\'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/cinder:/var/log/cinder\']\n glance_api:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-glance-api:2018-07-13.1\n net: host\n privileged: false\n restart: always\n start_order: 2\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/glance:/var/log/glance\', \'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json\',\n \'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro\',\n \'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro\', \'\', \'\']\n gnocchi_db_sync:\n detach: false\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-07-13.1\n net: host\n privileged: false\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n 
\'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro\',\n \'/var/lib/gnocchi:/var/lib/gnocchi:rw\', \'/var/log/containers/gnocchi:/var/log/gnocchi\',\n \'/var/log/containers/httpd/gnocchi-api:/var/log/httpd\', \'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro\']\n heat_api:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-heat-api:2018-07-13.1\n net: host\n privileged: false\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/heat:/var/log/heat\', \'/var/log/containers/httpd/heat-api:/var/log/httpd\',\n \'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro\',\n \'\', \'\']\n heat_api_cfn:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-heat-api-cfn:2018-07-13.1\n net: host\n privileged: false\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n 
\'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/heat:/var/log/heat\', \'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd\',\n \'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro\',\n \'\', \'\']\n heat_api_cron:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-heat-api:2018-07-13.1\n net: host\n privileged: false\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/heat:/var/log/heat\', \'/var/log/containers/httpd/heat-api:/var/log/httpd\',\n \'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro\']\n heat_engine:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-heat-engine:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n 
\'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/heat:/var/log/heat\', \'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro\']\n keystone_refresh:\n action: exec\n command: [keystone, pkill, --signal, USR1, httpd]\n start_order: 1\n user: root\n logrotate_crond:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-cron:2018-07-13.1\n net: none\n pid: host\n privileged: true\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers:/var/log/containers\']\n neutron_api:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-neutron-server-opendaylight:2018-07-13.1\n net: host\n privileged: false\n restart: always\n start_order: 0\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n 
\'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/neutron:/var/log/neutron\', \'/var/log/containers/httpd/neutron-api:/var/log/httpd\',\n \'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro\']\n neutron_dhcp:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-neutron-dhcp-agent:2018-07-13.1\n net: host\n pid: host\n privileged: true\n restart: always\n start_order: 10\n ulimit: [nofile=1024]\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/neutron:/var/log/neutron\', \'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro\',\n \'/lib/modules:/lib/modules:ro\', \'/run/openvswitch:/run/openvswitch\', \'/var/lib/neutron:/var/lib/neutron\',\n \'/run/netns:/run/netns:shared\', \'/var/lib/openstack:/var/lib/openstack\',\n \'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro\', \'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro\']\n neutron_metadata_agent:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: 
/openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-neutron-metadata-agent:2018-07-13.1\n net: host\n pid: host\n privileged: true\n restart: always\n start_order: 10\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/neutron:/var/log/neutron\', \'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro\',\n \'/lib/modules:/lib/modules:ro\', \'/var/lib/neutron:/var/lib/neutron\']\n nova_api:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n net: host\n privileged: true\n restart: always\n start_order: 2\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/log/containers/httpd/nova-api:/var/log/httpd\',\n \'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro\',\n \'\', 
\'\']\n nova_api_cron:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n net: host\n privileged: false\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/log/containers/httpd/nova-api:/var/log/httpd\',\n \'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro\']\n nova_conductor:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-nova-conductor:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro\']\n nova_consoleauth:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: 
{test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-nova-consoleauth:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro\']\n nova_metadata:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n net: host\n privileged: true\n restart: always\n start_order: 2\n user: nova\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro\']\n nova_scheduler:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-nova-scheduler:2018-07-13.1\n net: host\n privileged: 
false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro\',\n \'/run:/run\']\n nova_vnc_proxy:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-nova-novncproxy:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro\']\n panko_api:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-panko-api:2018-07-13.1\n net: host\n privileged: false\n restart: always\n start_order: 2\n volumes: [\'/etc/hosts:/etc/hosts:ro\', 
\'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/panko:/var/log/panko\', \'/var/log/containers/httpd/panko-api:/var/log/httpd\',\n \'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro\',\n \'\', \'\']\n swift_account_auditor:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-account:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_account_reaper:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-account:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n 
\'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_account_replicator:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-account:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_account_server:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-swift-account:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', 
\'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_container_auditor:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-container:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_container_replicator:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-container:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n 
\'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_container_server:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-swift-container:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_container_updater:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-container:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n 
\'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_object_auditor:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-object:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_object_expirer:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n 
\'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_object_replicator:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-object:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_object_server:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-swift-object:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', 
\'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_object_updater:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-object:2018-07-13.1\n net: host\n restart: always\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\', \'/var/cache/swift:/var/cache/swift\']\n swift_proxy:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-07-13.1\n net: host\n restart: always\n start_order: 2\n user: swift\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n 
\'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/run:/run\', \'/srv/node:/srv/node\', \'/dev:/dev\']\n swift_rsync:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-swift-object:2018-07-13.1\n net: host\n privileged: true\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro\',\n \'/srv/node:/srv/node\', \'/dev:/dev\']\n step_5:\n ceilometer_gnocchi_upgrade:\n command: [/usr/bin/bootstrap_host_exec, ceilometer_agent_central, \'su ceilometer\n -s /bin/bash -c \'\'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database\n && exit 0 || sleep 5; done; exit 1\'\'\']\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-07-13.1\n net: host\n privileged: false\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n 
\'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro\',\n \'/var/log/containers/ceilometer:/var/log/ceilometer\']\n cinder_volume_init_bundle:\n command: [/docker_puppet_apply.sh, \'5\', \'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location\',\n \'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle\',\n --debug --verbose]\n detach: false\n environment: [TRIPLEO_DEPLOY_IDENTIFIER=1532514654]\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro\',\n \'/etc/puppet:/tmp/puppet-etc:ro\', \'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\']\n cinder_volume_restart_bundle:\n command: [/usr/bin/bootstrap_host_exec, cinder_volume, if /usr/sbin/pcs\n resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart\n --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart\n invoked"; fi]\n config_volume: cinder\n detach: false\n image: 192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-07-13.1\n net: host\n start_order: 0\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n 
\'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro\', \'/dev/shm:/dev/shm:rw\',\n \'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro\']\n gnocchi_api:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/gnocchi:/var/lib/gnocchi:rw\', \'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/gnocchi:/var/log/gnocchi\', \'/var/log/containers/httpd/gnocchi-api:/var/log/httpd\',\n \'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro\', \'\', \'\']\n gnocchi_metricd:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-gnocchi-metricd:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', 
\'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro\',\n \'/var/lib/gnocchi:/var/lib/gnocchi:rw\', \'/var/log/containers/gnocchi:/var/log/gnocchi\',\n \'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro\']\n gnocchi_statsd:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-gnocchi-statsd:2018-07-13.1\n net: host\n privileged: false\n restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers/gnocchi:/var/log/gnocchi\', \'/var/lib/gnocchi:/var/lib/gnocchi:rw\',\n \'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro\']\n nova_api_discover_hosts:\n command: /usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh\n detach: false\n environment: [TRIPLEO_DEPLOY_IDENTIFIER=1532514654]\n image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\n net: host\n start_order: 1\n user: root\n volumes: 
[\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/log/containers/httpd/nova-api:/var/log/httpd\',\n \'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro\', \'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\',\n \'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro\', \'/var/log/containers/nova:/var/log/nova\',\n \'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro\']\n role_data_docker_config_scripts:\n create_swift_secret.sh: {content: "#!/bin/bash\\nexport OS_PROJECT_DOMAIN_ID=$(crudini\\\n \\ --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\\nexport\\\n \\ OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster\\\n \\ user_domain_id)\\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf\\\n \\ kms_keymaster project_name)\\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf\\\n \\ kms_keymaster username)\\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf\\\n \\ kms_keymaster password)\\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf\\\n \\ kms_keymaster auth_endpoint)\\nexport OS_AUTH_TYPE=password\\nexport OS_IDENTITY_API_VERSION=3\\n\\\n \\necho \\"Check if secret already exists\\"\\nsecret_href=$(openstack secret\\\n \\ list --name swift_root_secret_uuid)\\nrc=$?\\nif [[ $rc != 0 ]]; then\\n\\\n \\ echo \\"Failed to check secrets, check if Barbican in enabled and 
responding\\\n \\ properly\\"\\n exit $rc;\\nfi\\nif [ -z \\"$secret_href\\" ]; then\\n echo\\\n \\ \\"Create new secret\\"\\n order_href=$(openstack secret order create --name\\\n \\ swift_root_secret_uuid --payload-content-type=\\"application/octet-stream\\"\\\n \\ --algorithm aes --bit-length 256 --mode ctr key -f value -c \\"Order href\\"\\\n )\\nfi\\n", mode: \'0700\'}\n docker_puppet_apply.sh: {content: "#!/bin/bash\\nset -eux\\nSTEP=$1\\nTAGS=$2\\n\\\n CONFIG=$3\\nEXTRA_ARGS=${4:-\'\'}\\nif [ -d /tmp/puppet-etc ]; then\\n # ignore\\\n \\ copy failures as these may be the same file depending on docker mounts\\n\\\n \\ cp -a /tmp/puppet-etc/* /etc/puppet || true\\nfi\\necho \\"{\\\\\\"step\\\\\\"\\\n : ${STEP}}\\" > /etc/puppet/hieradata/docker.json\\nexport FACTER_uuid=docker\\n\\\n set +e\\npuppet apply $EXTRA_ARGS \\\\\\n --verbose \\\\\\n --detailed-exitcodes\\\n \\ \\\\\\n --summarize \\\\\\n --color=false \\\\\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules\\\n \\ \\\\\\n --tags $TAGS \\\\\\n -e \\"${CONFIG}\\"\\nrc=$?\\nset -e\\nset +ux\\n\\\n if [ $rc -eq 2 -o $rc -eq 0 ]; then\\n exit 0\\nfi\\nexit $rc\\n", mode: \'0700\'}\n nova_api_discover_hosts.sh: {content: "#!/bin/bash\\nexport OS_PROJECT_DOMAIN_NAME=$(crudini\\\n \\ --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\\nexport\\\n \\ OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken\\\n \\ user_domain_name)\\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf\\\n \\ keystone_authtoken project_name)\\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf\\\n \\ keystone_authtoken username)\\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf\\\n \\ keystone_authtoken password)\\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf\\\n \\ keystone_authtoken auth_url)\\nexport OS_AUTH_TYPE=password\\nexport OS_IDENTITY_API_VERSION=3\\n\\\n \\necho \\"(cellv2) Running cell_v2 host 
discovery\\"\\ntimeout=600\\nloop_wait=30\\n\\\n declare -A discoverable_hosts\\nfor host in $(hiera -c /etc/puppet/hiera.yaml\\\n \\ cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr \\",\\" \\" \\"); do discoverable_hosts[$host]=1;\\\n \\ done\\ntimeout_at=$(( $(date +\\"%s\\") + ${timeout} ))\\necho \\"(cellv2)\\\n \\ Waiting ${timeout} seconds for hosts to register\\"\\nfinished=0\\nwhile\\\n \\ : ; do\\n for host in $(openstack -q compute service list -c \'Host\' -c\\\n \\ \'Zone\' -f value | awk \'$2 != \\"internal\\" { print $1 }\'); do\\n if ((\\\n \\ discoverable_hosts[$host] == 1 )); then\\n echo \\"(cellv2) compute\\\n \\ node $host has registered\\"\\n unset discoverable_hosts[$host]\\n \\\n \\ fi\\n done\\n finished=1\\n for host in \\"${!discoverable_hosts[@]}\\"\\\n ; do\\n if (( ${discoverable_hosts[$host]} == 1 )); then\\n echo \\"\\\n (cellv2) compute node $host has not registered\\"\\n finished=0\\n \\\n \\ fi\\n done\\n remaining=$(( $timeout_at - $(date +\\"%s\\") ))\\n if ((\\\n \\ $finished == 1 )); then\\n echo \\"(cellv2) All nodes registered\\"\\n\\\n \\ break\\n elif (( $remaining <= 0 )); then\\n echo \\"(cellv2) WARNING:\\\n \\ timeout waiting for nodes to register, running host discovery regardless\\"\\\n \\n echo \\"(cellv2) Expected host list:\\" $(hiera -c /etc/puppet/hiera.yaml\\\n \\ cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\\n\\\n \\ echo \\"(cellv2) Detected host list:\\" $(openstack -q compute service\\\n \\ list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != \\"internal\\" { print $1\\\n \\ }\' | sort -u | tr \'\\\\n\', \' \')\\n break\\n else\\n echo \\"(cellv2)\\\n \\ Waiting ${remaining} seconds for hosts to register\\"\\n sleep $loop_wait\\n\\\n \\ fi\\ndone\\necho \\"(cellv2) Running host discovery...\\"\\nsu nova -s /bin/bash\\\n \\ -c \\"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\\"\\\n \\n", mode: \'0700\'}\n nova_api_ensure_default_cell.sh: {content: 
"#!/bin/bash\\nDEFID=$(nova-manage\\\n \\ cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == \\"\\\n default\\" {print $4}\')\\nif [ \\"$DEFID\\" ]; then\\n echo \\"(cellv2) Updating\\\n \\ default cell_v2 cell $DEFID\\"\\n su nova -s /bin/bash -c \\"/usr/bin/nova-manage\\\n \\ cell_v2 update_cell --cell_uuid $DEFID --name=default\\"\\nelse\\n echo\\\n \\ \\"(cellv2) Creating default cell_v2 cell\\"\\n su nova -s /bin/bash -c\\\n \\ \\"/usr/bin/nova-manage cell_v2 create_cell --name=default\\"\\nfi\\n", mode: \'0700\'}\n set_swift_keymaster_key_id.sh: {content: "#!/bin/bash\\nexport OS_PROJECT_DOMAIN_ID=$(crudini\\\n \\ --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\\nexport\\\n \\ OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster\\\n \\ user_domain_id)\\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf\\\n \\ kms_keymaster project_name)\\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf\\\n \\ kms_keymaster username)\\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf\\\n \\ kms_keymaster password)\\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf\\\n \\ kms_keymaster auth_endpoint)\\nexport OS_AUTH_TYPE=password\\nexport OS_IDENTITY_API_VERSION=3\\n\\\n echo \\"retrieve key_id\\"\\nloop_wait=2\\nfor i in {0..5}; do\\n #TODO update\\\n \\ uuid from mistral here too\\n secret_href=$(openstack secret list --name\\\n \\ swift_root_secret_uuid)\\n if [ \\"$secret_href\\" ]; then\\n echo \\"\\\n set key_id in keymaster.conf\\"\\n secret_href=$(openstack secret list\\\n \\ --name swift_root_secret_uuid -f value -c \\"Secret href\\")\\n crudini\\\n \\ --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\\n\\\n \\ exit 0\\n else\\n echo \\"no key, wait for $loop_wait and check again\\"\\\n \\n sleep $loop_wait\\n ((loop_wait++))\\n fi\\ndone\\necho \\"Failed to\\\n \\ set secret in keymaster.conf, check if Barbican is enabled and 
responding\\\n \\ properly\\"\\nexit 1\\n", mode: \'0700\'}\n role_data_docker_puppet_tasks:\n step_3:\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-keystone:2018-07-13.1\',\n config_volume: keystone_init_tasks, puppet_tags: \'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain\',\n step_config: \'include ::tripleo::profile::base::keystone\'}\n role_data_external_deploy_tasks: []\n role_data_external_post_deploy_tasks: []\n role_data_fast_forward_post_upgrade_tasks:\n - name: Register repo type and args\n set_fact:\n fast_forward_repo_args:\n tripleo_repos: {ocata: -b ocata current, pike: -b pike current, queens: -b\n queens current}\n fast_forward_repo_type: custom-script\n - debug: {msg: \'fast_forward_repo_type: {{ fast_forward_repo_type }} fast_forward_repo_args:\n {{ fast_forward_repo_args }}\'}\n - block:\n - git: {dest: /home/stack/tripleo-repos/, repo: \'https://github.com/openstack/tripleo-repos.git\'}\n name: clone tripleo-repos\n - args: {chdir: /home/stack/tripleo-repos/}\n command: python setup.py install\n name: install tripleo-repos\n - {command: \'tripleo-repos {{ fast_forward_repo_args.tripleo_repos[release]\n }}\', name: Enable tripleo-repos}\n when: [ffu_packages_apply|bool, fast_forward_repo_type == \'tripleo-repos\']\n - block:\n - copy: {content: "#!/bin/bash\\nset -e\\necho \\"If you use FastForwardRepoType\\\n \\ \'custom-script\' you have to provide the upgrade repo script content.\\"\\\n \\necho \\"It will be installed as /root/ffu_upgrade_repo.sh on the node\\"\\\n \\necho \\"and passed the upstream name (ocata, pike, queens) of the release\\\n \\ as first argument\\"\\ncase $1 in\\n ocata)\\n subscription-manager\\\n \\ repos --disable=rhel-7-server-openstack-10-rpms\\n subscription-manager\\\n \\ repos --enable=rhel-7-server-openstack-11-rpms\\n ;;\\n pike)\\n \\\n \\ 
subscription-manager repos --disable=rhel-7-server-openstack-11-rpms\\n\\\n \\ subscription-manager repos --enable=rhel-7-server-openstack-12-rpms\\n\\\n \\ ;;\\n queens)\\n subscription-manager repos --disable=rhel-7-server-openstack-12-rpms\\n\\\n \\ subscription-manager repos --enable=rhel-7-server-openstack-13-rpms\\n\\\n \\ ;;\\n *)\\n echo \\"unknown release $1\\" >&2\\n exit 1\\nesac\\n",\n dest: /root/ffu_update_repo.sh, mode: 448}\n name: Create custom Script for upgrading repo.\n - {name: Execute custom script for upgrading repo., shell: \'/root/ffu_update_repo.sh\n {{release}}\'}\n when: [ffu_packages_apply|bool, fast_forward_repo_type == \'custom-script\']\n role_data_fast_forward_upgrade_tasks:\n - ignore_errors: true\n name: Check for aodh running under apache\n register: aodh_httpd_enabled_result\n shell: httpd -t -D DUMP_VHOSTS | grep -q aodh_wsgi\n tags: common\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact aodh_httpd_enabled\n set_fact: {aodh_httpd_enabled: \'{{ aodh_httpd_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - command: systemctl is-active --quiet httpd\n ignore_errors: true\n name: Check if httpd is running\n register: httpd_running_result\n when: [step|int == 0, release == \'ocata\', httpd_running is undefined]\n - name: Set fact httpd_running if undefined\n set_fact: {httpd_running: \'{{ httpd_running_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\', httpd_running is undefined]\n - name: Stop and disable aodh (under httpd)\n service: name=httpd state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', aodh_httpd_enabled|bool, httpd_running|bool]\n - name: Aodh package update\n shell: yum -y update openstack-aodh*\n when: [step|int == 6, is_bootstrap_node|bool, aodh_httpd_enabled|bool]\n - command: aodh-dbsync\n name: aodh db sync\n when: [step|int == 8, is_bootstrap_node|bool, aodh_httpd_enabled|bool]\n - command: systemctl is-enabled --quiet 
openstack-aodh-evaluator\n ignore_errors: true\n name: FFU check if openstack-aodh-evaluator is deployed\n register: aodh_evaluator_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact aodh_evaluator_enabled\n set_fact: {aodh_evaluator_enabled: \'{{ aodh_evaluator_enabled_result.rc == 0\n }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-aodh-evaluator service\n service: name=openstack-aodh-evaluator state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', aodh_evaluator_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-aodh-listener\n ignore_errors: true\n name: FFU check if openstack-aodh-listener is deployed\n register: aodh_listener_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact aodh_listener_enabled\n set_fact: {aodh_listener_enabled: \'{{ aodh_listener_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-aodh-listener service\n service: name=openstack-aodh-listener state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', aodh_listener_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-aodh-notifier\n ignore_errors: true\n name: FFU check if openstack-aodh-notifier is deployed\n register: aodh_notifier_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact aodh_notifier_enabled\n set_fact: {aodh_notifier_enabled: \'{{ aodh_notifier_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-aodh-notifier service\n service: name=openstack-aodh-notifier state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', aodh_notifier_enabled|bool]\n - file: path=/etc/httpd/conf.d/10-ceilometer_wsgi.conf state=absent\n name: Purge Ceilometer apache config files\n when: [step|int == 1, release == \'ocata\']\n - lineinfile: dest=/etc/httpd/conf/ports.conf 
state=absent regexp="8777$"\n name: Clean up ceilometer port from ports.conf\n when: [step|int == 1, release == \'ocata\']\n - command: systemctl is-enabled --quiet openstack-ceilometer-collector\n ignore_errors: true\n name: FFU check if openstack-ceilometer-collector is deployed\n register: ceilometer_agent_collector_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact ceilometer_agent_collector_enabled\n set_fact: {ceilometer_agent_collector_enabled: \'{{ ceilometer_agent_collector_enabled_result.rc\n == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop and disable ceilometer_collector service on upgrade\n service: name=openstack-ceilometer-collector state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', ceilometer_agent_collector_enabled|bool]\n - changed_when: [step|int == 1, release == \'ocata\', remove_ceilometer_expirer_crontab.stderr\n != "no crontab for ceilometer"]\n failed_when: [step|int == 1, release == \'ocata\', remove_ceilometer_expirer_crontab.rc\n != 0, remove_ceilometer_expirer_crontab.stderr != "no crontab for ceilometer"]\n name: Remove ceilometer expirer cron tab on upgrade\n register: remove_ceilometer_expirer_crontab\n shell: /usr/bin/crontab -u ceilometer -r\n - command: systemctl is-enabled --quiet openstack-ceilometer-central\n ignore_errors: true\n name: FFU check if openstack-ceilometer-central is deployed\n register: ceilometer_agent_central_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact ceilometer_agent_central_enabled\n set_fact: {ceilometer_agent_central_enabled: \'{{ ceilometer_agent_central_enabled_result.rc\n == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-ceilometer-central service\n service: name=openstack-ceilometer-central state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', ceilometer_agent_central_enabled|bool]\n - command: systemctl is-enabled 
openstack-ceilometer-notification\n ignore_errors: true\n name: FFU check if openstack-ceilometer-notification is deployed\n register: ceilometer_agent_notification_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact ceilometer_agent_notification_enabled\n set_fact: {ceilometer_agent_notification_enabled: \'{{ ceilometer_agent_notification_enabled_result.rc\n == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-ceilometer-notification service\n service: name=openstack-ceilometer-notification state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', ceilometer_agent_notification_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-cinder-api\n ignore_errors: true\n name: Check if cinder_api is deployed\n register: cinder_api_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact cinder_api_enabled\n set_fact: {cinder_api_enabled: \'{{ cinder_api_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop openstack-cinder-api\n service: name=openstack-cinder-api state=stopped\n when: [step|int == 1, release == \'ocata\', cinder_api_enabled|bool]\n - name: Extra removal of services for cinder\n shell: \'cinder-manage service list |\\\n\n grep -v Binary | tr \'\'@\'\' \'\' \'\' |\\\n\n awk \'\'{print $1 " " $2}\'\' |\\\n\n while read i ; do cinder-manage service remove $i ; done\n\n \'\n when: [step|int == 5, release == \'pike\', is_bootstrap_node|bool]\n - command: cinder-manage db online_data_migrations\n name: Extra migration for cinder\n when: [step|int == 5, release == \'pike\', is_bootstrap_node|bool]\n - name: Cinder package update\n shell: yum -y update openstack-cinder*\n when: [step|int == 6, is_bootstrap_node|bool]\n - command: cinder-manage db sync\n name: Cinder db sync\n when: [step|int == 8, is_bootstrap_node|bool]\n - command: systemctl is-enabled --quiet openstack-cinder-scheduler\n ignore_errors: 
true\n name: Check if cinder_scheduler is deployed\n register: cinder_scheduler_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact cinder_scheduler_enabled\n set_fact: {cinder_scheduler_enabled: \'{{ cinder_scheduler_enabled_result.rc\n == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop openstack-cinder-scheduler\n service: name=openstack-cinder-scheduler state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', cinder_scheduler_enabled|bool]\n - ignore_errors: true\n name: Check cluster resource status\n pacemaker_resource: {check_mode: false, resource: openstack-cinder-volume, state: show}\n register: cinder_volume_res_result\n when: [step|int == 0, release == \'ocata\', is_bootstrap_node|bool]\n - name: Set fact cinder_volume_res\n set_fact: {cinder_volume_res: \'{{ cinder_volume_res_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\', is_bootstrap_node|bool]\n - name: Disable the openstack-cinder-volume cluster resource\n pacemaker_resource: {resource: openstack-cinder-volume, state: disable, wait_for_resource: true}\n register: cinder_volume_output\n retries: 5\n until: cinder_volume_output.rc == 0\n when: [step|int == 2, release == \'ocata\', is_bootstrap_node|bool, cinder_volume_res|bool]\n - command: systemctl is-enabled --quiet openstack-glance-api\n ignore_errors: true\n name: Check if glance_api is deployed\n register: glance_api_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact glance_api_enabled\n set_fact: {glance_api_enabled: \'{{ glance_api_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop openstack-glance-api\n service: name=openstack-glance-api state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', glance_api_enabled|bool]\n - name: glance package update\n when: [step|int == 6, is_bootstrap_node|bool]\n yum: name=openstack-glance state=latest\n - command: glance-manage db_sync\n name: 
glance db sync\n when: [step|int == 8, is_bootstrap_node|bool]\n - command: systemctl is-enabled --quiet openstack-glance-registry\n ignore_errors: true\n name: Check if glance_registry is deployed\n register: glance_registry_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact glance_registry_enabled\n set_fact: {glance_registry_enabled: \'{{ glance_registry_enabled_result.rc ==\n 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop openstack-glance-registry\n service: name=openstack-glance-registry state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', glance_registry_enabled|bool]\n - command: systemctl is-active --quiet httpd\n ignore_errors: true\n name: Check if httpd service is running\n register: httpd_running_result\n tags: common\n when: [step|int == 0, release == \'ocata\', httpd_running is undefined]\n - name: Set fact httpd_running if unset\n set_fact: {httpd_running: \'{{ httpd_running_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\', httpd_running is undefined]\n - command: systemctl is-enabled --quiet openstack-gnocchi-api\n ignore_errors: true\n name: Check if gnocchi_api is deployed\n register: gnocchi_api_enabled_result\n tags: common\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact gnocchi_api_enabled\n set_fact: {gnocchi_api_enabled: \'{{ gnocchi_api_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - ignore_errors: true\n name: Check for gnocchi_api running under apache\n register: gnocchi_httpd_enabled_result\n shell: httpd -t -D DUMP_VHOSTS | grep -q gnocchi\n tags: common\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact gnocchi_httpd_enabled\n set_fact: {gnocchi_httpd_enabled: \'{{ gnocchi_httpd_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop and disable gnocchi_api service\n service: name=openstack-gnocchi-api state=stopped enabled=no\n when: [step|int == 1, release 
== \'ocata\', gnocchi_api_enabled|bool]\n - name: Stop and disable httpd service\n service: name=httpd state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', gnocchi_httpd_enabled|bool, httpd_running|bool]\n - name: Update gnocchi packages\n when: [step|int == 6, is_bootstrap_node|bool]\n with_items: [openstack-gnocchi*, numpy]\n yum: name={{ item }} state=latest\n - command: gnocchi-upgrade --skip-storage\n name: Sync gnocchi DB\n when: [step|int == 8, is_bootstrap_node|bool]\n - command: systemctl is-enabled --quiet openstack-gnocchi-metricd\n ignore_errors: true\n name: FFU check if openstack-gnocchi-metricd is deployed\n register: gnocchi_metricd_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact gnocchi_metricd_enabled\n set_fact: {gnocchi_metricd_enabled: \'{{ gnocchi_metricd_enabled_result.rc ==\n 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-gnocchi-metricd service\n service: name=openstack-gnocchi-metricd state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', gnocchi_metricd_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-gnocchi-statsd\n ignore_errors: true\n name: FFU check if openstack-gnocchi-statsd is deployed\n register: gnocchi_statsd_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact gnocchi_statsd_enabled\n set_fact: {gnocchi_statsd_enabled: \'{{ gnocchi_statsd_enabled_result.rc == 0\n }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-gnocchi-statsd service\n service: name=openstack-gnocchi-statsd state=stopped enabled=no\n when: [step|int == 2, release == \'ocata\', gnocchi_statsd_enabled|bool]\n - command: systemctl is-enabled openstack-heat-api\n ignore_errors: true\n name: FFU check openstack-heat-api is enabled\n register: heat_api_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact heat_api_enabled\n set_fact: 
{heat_api_enabled: \'{{ heat_api_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-heat-api\n service: name=openstack-heat-api state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', heat_api_enabled|bool]\n - name: FFU Heat package update\n shell: yum -y update openstack-heat*\n when: [step|int == 6, is_bootstrap_node|bool]\n - command: heat-manage db_sync\n name: FFU Heat db-sync\n when: [step|int == 8, is_bootstrap_node|bool]\n - command: systemctl is-enabled openstack-heat-api-cloudwatch\n ignore_errors: true\n name: FFU check if heat_api_cloudwatch is deployed\n register: heat_api_cloudwatch_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact heat_api_cloudwatch_enabled\n set_fact: {heat_api_cloudwatch_enabled: \'{{ heat_api_cloudwatch_enabled_result.rc\n == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable the heat-api-cloudwatch service.\n service: name=openstack-heat-api-cloudwatch state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', heat_api_cloudwatch_enabled|bool]\n - ignore_errors: true\n name: Remove heat_api_cloudwatch package\n when: [step|int == 2, release == \'ocata\']\n yum: name=openstack-heat-api-cloudwatch state=removed\n - command: systemctl is-enabled openstack-heat-api-cfn\n ignore_errors: true\n name: FFU check if openstack-heat-api-cfn service is enabled\n register: heat_api_cfn_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact heat_api_cfn_enabled\n set_fact: {heat_api_cfn_enabled: \'{{ heat_api_cfn_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-heat-api-cfn service\n service: name=openstack-heat-api-cfn state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', heat_api_cfn_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-heat-engine\n ignore_errors: 
true\n name: FFU check if openstack-heat-engine is enabled\n register: heat_engine_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact heat_engine_enabled\n set_fact: {heat_engine_enabled: \'{{ heat_engine_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-heat-engine service\n service: name=openstack-heat-engine state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', heat_engine_enabled|bool]\n - ignore_errors: true\n name: Check for keystone running under apache\n register: keystone_httpd_enabled_result\n shell: httpd -t -D DUMP_VHOSTS | grep -q keystone_wsgi\n tags: common\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact keystone_httpd_enabled\n set_fact: {keystone_httpd_enabled: \'{{ keystone_httpd_enabled_result.rc == 0\n }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop and disable keystone (under httpd)\n service: name=httpd state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', keystone_httpd_enabled|bool, httpd_running|bool]\n - name: Keystone package update\n shell: yum -y update openstack-keystone*\n when: [step|int == 6, is_bootstrap_node|bool]\n - command: keystone-manage db_sync\n name: keystone db sync\n when: [step|int == 8, is_bootstrap_node|bool]\n - command: systemctl is-enabled --quiet memcached\n ignore_errors: true\n name: Check if memcached is deployed\n register: memcached_enabled_result\n tags: common\n when: [step|int == 0, release == \'ocata\']\n - name: memcached_enabled\n set_fact: {memcached_enabled: \'{{ memcached_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop and disable memcached service\n service: name=memcached state=stopped enabled=no\n when: [step|int == 2, release == \'ocata\', memcached_enabled|bool]\n - command: systemctl is-enabled --quiet neutron-server\n ignore_errors: true\n name: Check if neutron_server is deployed\n 
register: neutron_server_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact neutron_server_enabled\n set_fact: {neutron_server_enabled: \'{{ neutron_server_enabled_result.rc == 0\n }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop neutron_server\n service: name=neutron-server state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', neutron_server_enabled|bool]\n - name: Neutron package update\n shell: yum -y update openstack-neutron*\n when: [step|int == 6, is_bootstrap_node|bool]\n - name: Neutron package update workaround\n when: [step|int == 6, is_bootstrap_node|bool]\n yum: name=python-networking-odl state=latest\n - command: neutron-db-manage upgrade head\n name: Neutron db sync\n when: [step|int == 8, is_bootstrap_node|bool]\n - command: systemctl is-enabled --quiet neutron-dhcp-agent\n ignore_errors: true\n name: Check if neutron_dhcp_agent is deployed\n register: neutron_dhcp_agent_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact neutron_dhcp_agent_enabled\n set_fact: {neutron_dhcp_agent_enabled: \'{{ neutron_dhcp_agent_enabled_result.rc\n == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop neutron_dhcp_agent\n service: name=neutron-dhcp-agent state=stopped enabled=no\n when: [step|int == 2, release == \'ocata\', neutron_dhcp_agent_enabled|bool]\n - command: systemctl is-enabled --quiet neutron-metadata-agent\n ignore_errors: true\n name: Check if neutron_metadata_agent is deployed\n register: neutron_metadata_agent_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact neutron_metadata_agent_enabled\n set_fact: {neutron_metadata_agent_enabled: \'{{ neutron_metadata_agent_enabled_result.rc\n == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop neutron_metadata_agent\n service: name=neutron-metadata-agent state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', 
neutron_metadata_agent_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-nova-api\n ignore_errors: true\n name: Check if nova-api is deployed\n register: nova_api_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact nova_api_enabled\n set_fact: {nova_api_enabled: \'{{ nova_api_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop openstack-nova-api service\n service: name=openstack-nova-api state=stopped\n when: [step|int == 1, nova_api_enabled|bool, release == \'ocata\']\n - command: nova-manage db online_data_migrations\n name: Extra migration for nova tripleo/+bug/1656791\n when: [step|int == 5, release == \'ocata\', is_bootstrap_node|bool]\n - command: yum update -y *nova*\n name: Update nova packages\n when: [step|int == 6, is_bootstrap_node|bool]\n - block:\n - mysql_db: {name: nova_cell0, state: present}\n name: Create cell0 db\n - mysql_user: {host_all: true, name: nova, priv: \'*.*:ALL\', state: present}\n name: Grant access to cell0 db\n - copy: {content: "$transport_url = os_transport_url({\\n \'transport\' => hiera(\'messaging_service_name\',\\\n \\ \'rabbit\'),\\n \'hosts\' => any2array(hiera(\'rabbitmq_node_names\',\\\n \\ undef)),\\n \'port\' => sprintf(\'%s\',hiera(\'nova::rabbit_port\', \'5672\')\\\n \\ ),\\n \'username\' => hiera(\'nova::rabbit_userid\', \'guest\'),\\n \'password\'\\\n \\ => hiera(\'nova::rabbit_password\'),\\n \'ssl\' => sprintf(\'%s\',\\\n \\ bool2num(str2bool(hiera(\'nova::rabbit_use_ssl\', \'0\'))))\\n}) oslo::messaging::default\\\n \\ { \'nova_config\':\\n transport_url => $transport_url\\n}\\n", dest: /root/nova-api_upgrade_manifest.pp,\n mode: 384}\n name: Create puppet manifest to set transport_url in nova.conf\n - {changed_when: puppet_apply_nova_api_upgrade.rc == 2, command: \'puppet apply\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules\n --detailed-exitcodes 
/root/nova-api_upgrade_manifest.pp\', failed_when: \'puppet_apply_nova_api_upgrade.rc\n not in [0,2]\', name: Run puppet apply to set transport_url in nova.conf,\n register: puppet_apply_nova_api_upgrade}\n - {name: Setup cell_v2 (map cell0), shell: \'nova-manage cell_v2 map_cell0 --database_connection=mysql+pymysql://nova:6BkAm2KjhQcBQjbmFfcxWwJUq@172.17.1.10/nova_cell0\'}\n - {changed_when: nova_api_create_cell.rc == 0, failed_when: \'nova_api_create_cell.rc\n not in [0,2]\', name: Setup cell_v2 (create default cell), register: nova_api_create_cell,\n shell: \'nova-manage cell_v2 create_cell --name=\'\'default\'\' --database_connection=$(hiera\n nova::database_connection)\'}\n - {async: 300, command: nova-manage db sync, name: Setup cell_v2 (sync nova/cell\n DB), poll: 10}\n - {name: Setup cell_v2 (get cell uuid), register: nova_api_cell_uuid, shell: \'nova-manage\n cell_v2 list_cells | sed -e \'\'1,3d\'\' -e \'\'$d\'\' | awk -F \'\' *| *\'\' \'\'$2 ==\n "default" {print $4}\'\'\'}\n - {command: \'nova-manage cell_v2 discover_hosts --cell_uuid {{nova_api_cell_uuid.stdout}}\n --verbose\', name: Setup cell_v2 (migrate hosts)}\n - {command: \'nova-manage cell_v2 map_instances --cell_uuid {{nova_api_cell_uuid.stdout}}\',\n name: Setup cell_v2 (migrate instances)}\n when: [step|int == 7, release == \'ocata\', is_bootstrap_node|bool]\n - command: nova-manage api_db sync\n name: Sync nova_api DB\n when: [step|int == 8, is_bootstrap_node|bool]\n - command: nova-manage db online_data_migrations\n name: Online data migration for nova\n when: [step|int == 8, is_bootstrap_node|bool]\n - command: systemctl is-enabled --quiet openstack-nova-conductor\n ignore_errors: true\n name: Check if nova_conductor is deployed\n register: nova_conductor_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact nova_conductor_enabled\n set_fact: {nova_conductor_enabled: \'{{ nova_conductor_enabled_result.rc == 0\n }}\'}\n when: [step|int == 0, release == \'ocata\']\n - 
name: Stop and disable nova_conductor service\n service: name=openstack-nova-conductor state=stopped\n when: [step|int == 1, release == \'ocata\', nova_conductor_enabled|bool]\n - command: systemctl is-active --quiet openstack-nova-consoleauth\n ignore_errors: true\n name: Check if nova_consoleauth is deployed\n register: nova_consoleauth_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact nova_consoleauth_enabled\n set_fact: {nova_consoleauth_enabled: \'{{ nova_consoleauth_enabled_result.rc\n == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop and disable nova-consoleauth service\n service: name=openstack-nova-consoleauth state=stopped\n when: [step|int == 1, release == \'ocata\', nova_consoleauth_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-nova-api\n ignore_errors: true\n name: Check if nova_api_metadata is deployed\n register: nova_metadata_enabled_result\n tags: common\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact nova_metadata_enabled\n set_fact: {nova_metadata_enabled: \'{{ nova_metadata_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop and disable nova_api service\n service: name=openstack-nova-api state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', nova_metadata_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-nova-scheduler\n ignore_errors: true\n name: Check if nova_scheduler is deployed\n register: nova_scheduler_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact nova_scheduler_enabled\n set_fact: {nova_scheduler_enabled: \'{{ nova_scheduler_enabled_result.rc == 0\n }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop and disable nova-scheduler service\n service: name=openstack-nova-scheduler state=stopped\n when: [step|int == 1, release == \'ocata\', nova_scheduler_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-nova-novncproxy\n 
ignore_errors: true\n name: Check if nova vncproxy is deployed\n register: nova_vncproxy_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact nova_vncproxy_enabled\n set_fact: {nova_vncproxy_enabled: \'{{ nova_vncproxy_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop and disable nova-novncproxy service\n service: name=openstack-nova-novncproxy state=stopped\n when: [step|int == 1, release == \'ocata\', nova_vncproxy_enabled|bool]\n - ignore_errors: true\n name: Check cluster resource status of rabbitmq\n pacemaker_resource: {check_mode: false, resource: rabbitmq, state: show}\n register: rabbitmq_res_result\n when: [step|int == 0, release == \'ocata\', is_bootstrap_node|bool]\n - name: Set fact rabbitmq_res\n set_fact: {rabbitmq_res: \'{{ rabbitmq_res_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\', is_bootstrap_node|bool]\n - name: Disable the rabbitmq cluster resource\n pacemaker_resource: {resource: rabbitmq, state: disable, wait_for_resource: true}\n register: rabbitmq_output\n retries: 5\n until: rabbitmq_output.rc == 0\n when: [step|int == 2, release == \'ocata\', is_bootstrap_node|bool, rabbitmq_res|bool]\n - ignore_errors: true\n name: Check cluster resource status of redis\n pacemaker_resource: {check_mode: false, resource: redis, state: show}\n register: redis_res_result\n when: [step|int == 0, release == \'ocata\', is_bootstrap_node|bool]\n - name: Set fact redis_res\n set_fact: {redis_res: \'{{ redis_res_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\', is_bootstrap_node|bool]\n - name: Disable the redis cluster resource\n pacemaker_resource: {resource: redis, state: disable, wait_for_resource: true}\n register: redis_output\n retries: 5\n until: redis_output.rc == 0\n when: [step|int == 2, release == \'ocata\', is_bootstrap_node|bool, redis_res|bool]\n - command: systemctl is-enabled --quiet "{{ item }}"\n ignore_errors: true\n name: Check if 
swift-proxy or swift-object-expirer are deployed\n register: swift_proxy_services_enabled_result\n when: [step|int == 0, release == \'ocata\']\n with_items: [openstack-swift-proxy, openstack-swift-object-expirer]\n - name: Set fact swift_proxy_services_enabled\n set_fact: {swift_proxy_services_enabled: \'{{ swift_proxy_services_enabled_result\n }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop swift-proxy and swift-object-expirer services\n service: name={{ item.item }} state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', item.rc == 0]\n with_items: \'{{ swift_proxy_services_enabled.results }}\'\n - command: systemctl is-enabled --quiet "{{ item }}"\n ignore_errors: true\n name: Check if swift storage services are deployed\n register: swift_services_enabled_result\n when: [step|int == 0, release == \'ocata\']\n with_items: [openstack-swift-account-auditor, openstack-swift-account-reaper,\n openstack-swift-account-replicator, openstack-swift-account, openstack-swift-container-auditor,\n openstack-swift-container-replicator, openstack-swift-container-updater, openstack-swift-container,\n openstack-swift-object-auditor, openstack-swift-object-replicator, openstack-swift-object-updater,\n openstack-swift-object]\n - name: Set fact swift_services_enabled\n set_fact: {swift_services_enabled: \'{{ swift_services_enabled_result }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop swift storage services\n service: name={{ item.item }} state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', item.rc == 0]\n with_items: \'{{ swift_services_enabled.results }}\'\n - name: Register repo type and args\n set_fact:\n fast_forward_repo_args:\n tripleo_repos: {ocata: -b ocata current, pike: -b pike current, queens: -b\n queens current}\n fast_forward_repo_type: custom-script\n when: step|int == 3\n - debug: {msg: \'fast_forward_repo_type: {{ fast_forward_repo_type }} fast_forward_repo_args:\n {{ fast_forward_repo_args 
}}\'}\n when: step|int == 3\n - block:\n - git: {dest: /home/stack/tripleo-repos/, repo: \'https://github.com/openstack/tripleo-repos.git\'}\n name: clone tripleo-repos\n - args: {chdir: /home/stack/tripleo-repos/}\n command: python setup.py install\n name: install tripleo-repos\n - {command: \'tripleo-repos {{ fast_forward_repo_args.tripleo_repos[release]\n }}\', name: Enable tripleo-repos}\n when: [step|int == 3, ffu_packages_apply|bool, fast_forward_repo_type == \'tripleo-repos\']\n - block:\n - copy: {content: "#!/bin/bash\\nset -e\\necho \\"If you use FastForwardRepoType\\\n \\ \'custom-script\' you have to provide the upgrade repo script content.\\"\\\n \\necho \\"It will be installed as /root/ffu_upgrade_repo.sh on the node\\"\\\n \\necho \\"and passed the upstream name (ocata, pike, queens) of the release\\\n \\ as first argument\\"\\ncase $1 in\\n ocata)\\n subscription-manager\\\n \\ repos --disable=rhel-7-server-openstack-10-rpms\\n subscription-manager\\\n \\ repos --enable=rhel-7-server-openstack-11-rpms\\n ;;\\n pike)\\n \\\n \\ subscription-manager repos --disable=rhel-7-server-openstack-11-rpms\\n\\\n \\ subscription-manager repos --enable=rhel-7-server-openstack-12-rpms\\n\\\n \\ ;;\\n queens)\\n subscription-manager repos --disable=rhel-7-server-openstack-12-rpms\\n\\\n \\ subscription-manager repos --enable=rhel-7-server-openstack-13-rpms\\n\\\n \\ ;;\\n *)\\n echo \\"unknown release $1\\" >&2\\n exit 1\\nesac\\n",\n dest: /root/ffu_update_repo.sh, mode: 448}\n name: Create custom Script for upgrading repo.\n - {name: Execute custom script for upgrading repo., shell: \'/root/ffu_update_repo.sh\n {{release}}\'}\n when: [step|int == 3, ffu_packages_apply|bool, fast_forward_repo_type == \'custom-script\']\n role_data_global_config_settings: {}\n role_data_host_prep_tasks:\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/aodh, /var/log/containers/httpd/aodh-api]\n - copy: 
{content: \'Log files from aodh containers can be found under\n\n /var/log/containers/aodh and /var/log/containers/httpd/aodh-api.\n\n \', dest: /var/log/aodh/readme.txt}\n ignore_errors: true\n name: aodh logs readme\n - file: {path: /var/log/containers/aodh, state: directory}\n name: create persistent logs directory\n - file: {path: /var/log/containers/ceilometer, state: directory}\n name: create persistent logs directory\n - copy: {content: \'Log files from ceilometer containers can be found under\n\n /var/log/containers/ceilometer.\n\n \', dest: /var/log/ceilometer/readme.txt}\n ignore_errors: true\n name: ceilometer logs readme\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/cinder, /var/log/containers/httpd/cinder-api]\n - copy: {content: \'Log files from cinder containers can be found under\n\n /var/log/containers/cinder and /var/log/containers/httpd/cinder-api.\n\n \', dest: /var/log/cinder/readme.txt}\n ignore_errors: true\n name: cinder logs readme\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent directories\n with_items: [/var/log/containers/cinder]\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent directories\n with_items: [/var/log/containers/cinder, /var/lib/cinder]\n - file: {path: /etc/ceph, state: directory}\n name: ensure ceph configurations exist\n - name: cinder_enable_iscsi_backend fact\n set_fact: {cinder_enable_iscsi_backend: true}\n - args: {creates: /var/lib/cinder/cinder-volumes}\n command: dd if=/dev/zero of=/var/lib/cinder/cinder-volumes bs=1 count=0 seek=16384M\n name: cinder create LVM volume group dd\n when: cinder_enable_iscsi_backend\n - args: {creates: /dev/loop2, executable: /bin/bash}\n name: cinder create LVM volume group\n shell: "if ! losetup /dev/loop2; then\\n losetup /dev/loop2 /var/lib/cinder/cinder-volumes\\n\\\n fi\\nif ! 
pvdisplay | grep cinder-volumes; then\\n pvcreate /dev/loop2\\nfi\\n\\\n if ! vgdisplay | grep cinder-volumes; then\\n vgcreate cinder-volumes /dev/loop2\\n\\\n fi\\n"\n when: cinder_enable_iscsi_backend\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/glance]\n - copy: {content: \'Log files from glance containers can be found under\n\n /var/log/containers/glance.\n\n \', dest: /var/log/glance/readme.txt}\n ignore_errors: true\n name: glance logs readme\n - block:\n - name: null\n set_fact: {remote_file_path: /etc/glance/glance-metadata-file.conf}\n - file: {path: \'{{ remote_file_path }}\', state: touch}\n name: null\n - {register: file_path, stat: \'path="{{ remote_file_path }}"\'}\n - copy:\n content: {mount_point: /var/lib/glance/images, share_location: \'{{item.NETAPP_SHARE}}\',\n type: nfs}\n dest: \'{{ remote_file_path }}\'\n when: [file_path.stat.exists == true]\n with_items:\n - {NETAPP_SHARE: \'\'}\n - mount: name=/var/lib/glance/images src="{{item.NETAPP_SHARE}}" fstype=nfs4\n opts="{{item.NFS_OPTIONS}}" state=mounted\n name: null\n with_items:\n - {NETAPP_SHARE: \'\', NFS_OPTIONS: \'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0\'}\n name: Mount Netapp NFS\n vars: {netapp_nfs_backend_enable: false}\n when: netapp_nfs_backend_enable\n - mount: name=/var/lib/glance/images src="{{item.NFS_SHARE}}" fstype=nfs4 opts="{{item.NFS_OPTIONS}}"\n state=mounted\n name: Mount NFS on host\n vars: {nfs_backend_enable: false}\n when: [nfs_backend_enable]\n with_items:\n - {NFS_OPTIONS: \'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0\',\n NFS_SHARE: \'\'}\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/gnocchi, /var/log/containers/httpd/gnocchi-api]\n - copy: {content: \'Log files from gnocchi containers can be found under\n\n /var/log/containers/gnocchi and 
/var/log/containers/httpd/gnocchi-api.\n\n \', dest: /var/log/gnocchi/readme.txt}\n ignore_errors: true\n name: gnocchi logs readme\n - file: {path: /var/log/containers/gnocchi, state: directory}\n name: create persistent logs directory\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent directories\n with_items: [/var/lib/haproxy]\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/heat, /var/log/containers/httpd/heat-api]\n - copy: {content: \'Log files from heat containers can be found under\n\n /var/log/containers/heat and /var/log/containers/httpd/heat-api*.\n\n \', dest: /var/log/heat/readme.txt}\n ignore_errors: true\n name: heat logs readme\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/heat, /var/log/containers/httpd/heat-api-cfn]\n - file: {path: /var/log/containers/heat, state: directory}\n name: create persistent logs directory\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/horizon, /var/log/containers/httpd/horizon]\n - copy: {content: \'Log files from horizon containers can be found under\n\n /var/log/containers/horizon and /var/log/containers/httpd/horizon.\n\n \', dest: /var/log/horizon/readme.txt}\n ignore_errors: true\n name: horizon logs readme\n - {name: stat /lib/systemd/system/iscsid.socket, register: stat_iscsid_socket,\n stat: path=/lib/systemd/system/iscsid.socket}\n - {name: Stop and disable iscsid.socket service, service: name=iscsid.socket state=stopped\n enabled=no, when: stat_iscsid_socket.stat.exists}\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/keystone, /var/log/containers/httpd/keystone]\n - copy: {content: \'Log files from keystone containers can be found under\n\n /var/log/containers/keystone 
and /var/log/containers/httpd/keystone.\n\n \', dest: /var/log/keystone/readme.txt}\n ignore_errors: true\n name: keystone logs readme\n - copy: {content: \'Memcached container logs to stdout/stderr only.\n\n \', dest: /var/log/memcached-readme.txt}\n ignore_errors: true\n name: memcached logs readme\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent directories\n with_items: [/var/log/containers/mysql, /var/lib/mysql]\n - copy: {content: \'Log files from mysql containers can be found under\n\n /var/log/containers/mysql.\n\n \', dest: /var/log/mariadb/readme.txt}\n ignore_errors: true\n name: mysql logs readme\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/neutron, /var/log/containers/httpd/neutron-api]\n - copy: {content: \'Log files from neutron containers can be found under\n\n /var/log/containers/neutron and /var/log/containers/httpd/neutron-api.\n\n \', dest: /var/log/neutron/readme.txt}\n ignore_errors: true\n name: neutron logs readme\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/neutron]\n - file: {path: /var/lib/neutron, state: directory}\n name: create /var/lib/neutron\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/nova, /var/log/containers/httpd/nova-api]\n - copy: {content: \'Log files from nova containers can be found under\n\n /var/log/containers/nova and /var/log/containers/httpd/nova-*.\n\n \', dest: /var/log/nova/readme.txt}\n ignore_errors: true\n name: nova logs readme\n - file: {path: /var/log/containers/nova, state: directory}\n name: create persistent logs directory\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/nova, /var/log/containers/httpd/nova-placement]\n - file: {path: 
/var/lib/opendaylight/data/cache, state: absent}\n name: Delete cache folder\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent directories\n with_items: [/var/lib/opendaylight/snapshots, /var/lib/opendaylight/journal,\n /var/lib/opendaylight/data, /var/log/opendaylight, /var/log/containers/opendaylight]\n - copy: {content: \'Logs from opendaylight container can be found at /var/log/containers/opendaylight/karaf.log\n\n \', dest: /var/log/opendaylight/readme.txt}\n ignore_errors: true\n name: opendaylight logs readme\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent logs directory\n with_items: [/var/log/containers/panko, /var/log/containers/httpd/panko-api]\n - copy: {content: \'Log files from panko containers can be found under\n\n /var/log/containers/panko and /var/log/containers/httpd/panko-api.\n\n \', dest: /var/log/panko/readme.txt}\n ignore_errors: true\n name: panko logs readme\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent directories\n with_items: [/var/lib/rabbitmq, /var/log/containers/rabbitmq]\n - copy: {content: \'Log files from rabbitmq containers can be found under\n\n /var/log/containers/rabbitmq.\n\n \', dest: /var/log/rabbitmq/readme.txt}\n ignore_errors: true\n name: rabbitmq logs readme\n - {name: stop the Erlang port mapper on the host and make sure it cannot bind\n to the port used by container, shell: \'echo \'\'export ERL_EPMD_ADDRESS=127.0.0.1\'\'\n > /etc/rabbitmq/rabbitmq-env.conf\n\n echo \'\'export ERL_EPMD_PORT=4370\'\' >> /etc/rabbitmq/rabbitmq-env.conf\n\n for pid in $(pgrep epmd --ns 1 --nslist pid); do kill $pid; done\n\n \'}\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent directories\n with_items: [/var/lib/redis, /var/log/containers/redis, /var/run/redis]\n - copy: {content: \'Log files from redis containers can be found under\n\n /var/log/containers/redis.\n\n \', dest: /var/log/redis/readme.txt}\n ignore_errors: 
true\n name: redis logs readme\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent directories\n with_items: [/srv/node, /var/log/swift]\n - file: {dest: /var/log/containers/swift, src: /var/log/swift, state: link}\n name: Create swift logging symlink\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent directories\n with_items: [/srv/node, /var/log/swift, /var/log/containers]\n - name: Set swift_use_local_disks fact\n set_fact: {swift_use_local_disks: true}\n - file: {path: /srv/node/d1, state: directory}\n name: Create Swift d1 directory if needed\n when: swift_use_local_disks\n - copy: {content: \'Log files from swift containers can be found under\n\n /var/log/containers/swift and /var/log/containers/httpd/swift-*.\n\n \', dest: /var/log/swift/readme.txt}\n ignore_errors: true\n name: swift logs readme\n - filesystem: {dev: \'/dev/{{ item }}\', fstype: xfs, opts: -f -i size=1024}\n name: Format SwiftRawDisks\n with_items:\n - []\n - mount: {fstype: xfs, name: \'/srv/node/{{ item }}\', opts: noatime, src: \'/dev/{{\n item }}\', state: mounted}\n name: Mount devices defined in SwiftRawDisks\n with_items:\n - []\n role_data_kolla_config:\n /var/lib/kolla/config_files/aodh_api.json:\n command: /usr/sbin/httpd -DFOREGROUND\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'aodh:aodh\', path: /var/log/aodh, recurse: true}\n /var/lib/kolla/config_files/aodh_evaluator.json:\n command: /usr/bin/aodh-evaluator\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'aodh:aodh\', path: /var/log/aodh, recurse: true}\n /var/lib/kolla/config_files/aodh_listener.json:\n command: /usr/bin/aodh-listener\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'aodh:aodh\', 
path: /var/log/aodh, recurse: true}\n /var/lib/kolla/config_files/aodh_notifier.json:\n command: /usr/bin/aodh-notifier\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'aodh:aodh\', path: /var/log/aodh, recurse: true}\n /var/lib/kolla/config_files/ceilometer_agent_central.json:\n command: /usr/bin/ceilometer-polling --polling-namespaces central --logfile\n /var/log/ceilometer/central.log\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/ceilometer_agent_notification.json:\n command: /usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-panko/*}\n permissions:\n - {owner: \'root:ceilometer\', path: /etc/panko, recurse: true}\n /var/lib/kolla/config_files/cinder_api.json:\n command: /usr/sbin/httpd -DFOREGROUND\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'cinder:cinder\', path: /var/log/cinder, recurse: true}\n /var/lib/kolla/config_files/cinder_api_cron.json:\n command: /usr/sbin/crond -n\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'cinder:cinder\', path: /var/log/cinder, recurse: true}\n /var/lib/kolla/config_files/cinder_scheduler.json:\n command: /usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf\n --config-file /etc/cinder/cinder.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'cinder:cinder\', path: /var/log/cinder, recurse: 
true}\n /var/lib/kolla/config_files/cinder_volume.json:\n command: /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf\n --config-file /etc/cinder/cinder.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n - {dest: /etc/ceph/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-ceph/}\n - {dest: /etc/iscsi/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-iscsid/*}\n permissions:\n - {owner: \'cinder:cinder\', path: /var/log/cinder, recurse: true}\n /var/lib/kolla/config_files/clustercheck.json:\n command: /usr/sbin/xinetd -dontfork\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/glance_api.json:\n command: /usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf\n --config-file /etc/glance/glance-api.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n - {dest: /etc/ceph/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-ceph/}\n permissions:\n - {owner: \'glance:glance\', path: /var/lib/glance, recurse: true}\n - {owner: \'glance:glance\', path: /etc/ceph/ceph.client.openstack.keyring,\n perm: \'0600\'}\n /var/lib/kolla/config_files/glance_api_tls_proxy.json:\n command: /usr/sbin/httpd -DFOREGROUND\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/gnocchi_api.json:\n command: /usr/sbin/httpd -DFOREGROUND\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n - {dest: /etc/ceph/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-ceph/}\n permissions:\n - {owner: \'gnocchi:gnocchi\', path: /var/log/gnocchi, recurse: true}\n - {owner: 
\'gnocchi:gnocchi\', path: /etc/ceph/ceph.client.openstack.keyring,\n perm: \'0600\'}\n /var/lib/kolla/config_files/gnocchi_db_sync.json:\n command: /usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade\n --sacks-number=128\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n - {dest: /etc/ceph/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-ceph/}\n permissions:\n - {owner: \'gnocchi:gnocchi\', path: /var/log/gnocchi, recurse: true}\n - {owner: \'gnocchi:gnocchi\', path: /etc/ceph/ceph.client.openstack.keyring,\n perm: \'0600\'}\n /var/lib/kolla/config_files/gnocchi_metricd.json:\n command: /usr/bin/gnocchi-metricd\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n - {dest: /etc/ceph/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-ceph/}\n permissions:\n - {owner: \'gnocchi:gnocchi\', path: /var/log/gnocchi, recurse: true}\n - {owner: \'gnocchi:gnocchi\', path: /etc/ceph/ceph.client.openstack.keyring,\n perm: \'0600\'}\n /var/lib/kolla/config_files/gnocchi_statsd.json:\n command: /usr/bin/gnocchi-statsd\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n - {dest: /etc/ceph/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-ceph/}\n permissions:\n - {owner: \'gnocchi:gnocchi\', path: /var/log/gnocchi, recurse: true}\n - {owner: \'gnocchi:gnocchi\', path: /etc/ceph/ceph.client.openstack.keyring,\n perm: \'0600\'}\n /var/lib/kolla/config_files/haproxy.json:\n command: /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg\n config_files:\n - {dest: /, merge: true, optional: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n - {dest: /, merge: true, optional: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-tls/*}\n 
permissions:\n - {owner: \'haproxy:haproxy\', path: /var/lib/haproxy, recurse: true}\n - {optional: true, owner: \'haproxy:haproxy\', path: /etc/pki/tls/certs/haproxy/*,\n perm: \'0600\'}\n - {optional: true, owner: \'haproxy:haproxy\', path: /etc/pki/tls/private/haproxy/*,\n perm: \'0600\'}\n /var/lib/kolla/config_files/heat_api.json:\n command: /usr/sbin/httpd -DFOREGROUND\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'heat:heat\', path: /var/log/heat, recurse: true}\n /var/lib/kolla/config_files/heat_api_cfn.json:\n command: /usr/sbin/httpd -DFOREGROUND\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'heat:heat\', path: /var/log/heat, recurse: true}\n /var/lib/kolla/config_files/heat_api_cron.json:\n command: /usr/sbin/crond -n\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'heat:heat\', path: /var/log/heat, recurse: true}\n /var/lib/kolla/config_files/heat_engine.json:\n command: \'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf\n --config-file /etc/heat/heat.conf \'\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'heat:heat\', path: /var/log/heat, recurse: true}\n /var/lib/kolla/config_files/horizon.json:\n command: /usr/sbin/httpd -DFOREGROUND\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'apache:apache\', path: /var/log/horizon/, recurse: true}\n - {owner: \'apache:apache\', path: /etc/openstack-dashboard/, recurse: true}\n - {owner: \'apache:apache\', path: /usr/share/openstack-dashboard/openstack_dashboard/local/,\n recurse: false}\n - {owner: \'apache:apache\', path: 
/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/,\n recurse: false}\n /var/lib/kolla/config_files/iscsid.json:\n command: /usr/sbin/iscsid -f\n config_files:\n - {dest: /etc/iscsi/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-iscsid/*}\n /var/lib/kolla/config_files/keystone.json:\n command: /usr/sbin/httpd -DFOREGROUND\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/keystone_cron.json:\n command: /usr/sbin/crond -n\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'keystone:keystone\', path: /var/log/keystone, recurse: true}\n /var/lib/kolla/config_files/logrotate-crond.json:\n command: /usr/sbin/crond -s -n\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/mysql.json:\n command: /usr/sbin/pacemaker_remoted\n config_files:\n - {dest: /etc/libqb/force-filesystem-sockets, owner: root, perm: \'0644\', source: /dev/null}\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n - {dest: /, merge: true, optional: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-tls/*}\n permissions:\n - {owner: \'mysql:mysql\', path: /var/log/mysql, recurse: true}\n - {optional: true, owner: \'mysql:mysql\', path: /etc/pki/tls/certs/mysql.crt,\n perm: \'0600\'}\n - {optional: true, owner: \'mysql:mysql\', path: /etc/pki/tls/private/mysql.key,\n perm: \'0600\'}\n /var/lib/kolla/config_files/neutron_api.json:\n command: /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf\n --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf\n --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common\n --config-dir 
/etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'neutron:neutron\', path: /var/log/neutron, recurse: true}\n /var/lib/kolla/config_files/neutron_dhcp.json:\n command: /usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf\n --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini\n --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent\n --log-file=/var/log/neutron/dhcp-agent.log\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'neutron:neutron\', path: /var/log/neutron, recurse: true}\n - {owner: \'neutron:neutron\', path: /var/lib/neutron, recurse: true}\n - {owner: \'neutron:neutron\', path: /etc/pki/tls/certs/neutron.crt}\n - {owner: \'neutron:neutron\', path: /etc/pki/tls/private/neutron.key}\n /var/lib/kolla/config_files/neutron_metadata_agent.json:\n command: /usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf\n --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini\n --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent\n --log-file=/var/log/neutron/metadata-agent.log\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'neutron:neutron\', path: /var/log/neutron, recurse: true}\n - {owner: \'neutron:neutron\', path: /var/lib/neutron, recurse: true}\n /var/lib/kolla/config_files/neutron_server_tls_proxy.json:\n command: /usr/sbin/httpd -DFOREGROUND\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/nova_api.json:\n command: /usr/sbin/httpd 
-DFOREGROUND\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'nova:nova\', path: /var/log/nova, recurse: true}\n /var/lib/kolla/config_files/nova_api_cron.json:\n command: /usr/sbin/crond -n\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'nova:nova\', path: /var/log/nova, recurse: true}\n /var/lib/kolla/config_files/nova_conductor.json:\n command: \'/usr/bin/nova-conductor \'\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'nova:nova\', path: /var/log/nova, recurse: true}\n /var/lib/kolla/config_files/nova_consoleauth.json:\n command: \'/usr/bin/nova-consoleauth \'\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'nova:nova\', path: /var/log/nova, recurse: true}\n /var/lib/kolla/config_files/nova_metadata.json:\n command: \'/usr/bin/nova-api-metadata \'\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'nova:nova\', path: /var/log/nova, recurse: true}\n /var/lib/kolla/config_files/nova_placement.json:\n command: /usr/sbin/httpd -DFOREGROUND\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'nova:nova\', path: /var/log/nova, recurse: true}\n /var/lib/kolla/config_files/nova_scheduler.json:\n command: \'/usr/bin/nova-scheduler \'\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'nova:nova\', path: /var/log/nova, recurse: true}\n /var/lib/kolla/config_files/nova_vnc_proxy.json:\n command: \'/usr/bin/nova-novncproxy --web 
/usr/share/novnc/ \'\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'nova:nova\', path: /var/log/nova, recurse: true}\n /var/lib/kolla/config_files/opendaylight_api.json:\n command: /opt/opendaylight/bin/karaf server\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'odl:odl\', path: /opt/opendaylight, recurse: true}\n /var/lib/kolla/config_files/panko_api.json:\n command: /usr/sbin/httpd -DFOREGROUND\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'panko:panko\', path: /var/log/panko, recurse: true}\n /var/lib/kolla/config_files/rabbitmq.json:\n command: /usr/sbin/pacemaker_remoted\n config_files:\n - {dest: /etc/libqb/force-filesystem-sockets, owner: root, perm: \'0644\', source: /dev/null}\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n - {dest: /, merge: true, optional: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-tls/*}\n permissions:\n - {owner: \'rabbitmq:rabbitmq\', path: /var/lib/rabbitmq, recurse: true}\n - {owner: \'rabbitmq:rabbitmq\', path: /var/log/rabbitmq, recurse: true}\n - {optional: true, owner: \'rabbitmq:rabbitmq\', path: /etc/pki/tls/certs/rabbitmq.crt,\n perm: \'0600\'}\n - {optional: true, owner: \'rabbitmq:rabbitmq\', path: /etc/pki/tls/private/rabbitmq.key,\n perm: \'0600\'}\n /var/lib/kolla/config_files/redis.json:\n command: /usr/sbin/pacemaker_remoted\n config_files:\n - {dest: /etc/libqb/force-filesystem-sockets, owner: root, perm: \'0644\', source: /dev/null}\n - {dest: /, merge: true, optional: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n - {dest: /, merge: true, optional: true, preserve_properties: true, source: 
/var/lib/kolla/config_files/src-tls/*}\n permissions:\n - {owner: \'redis:redis\', path: /var/run/redis, recurse: true}\n - {owner: \'redis:redis\', path: /var/lib/redis, recurse: true}\n - {owner: \'redis:redis\', path: /var/log/redis, recurse: true}\n - {optional: true, owner: \'redis:redis\', path: /etc/pki/tls/certs/redis.crt,\n perm: \'0600\'}\n - {optional: true, owner: \'redis:redis\', path: /etc/pki/tls/private/redis.key,\n perm: \'0600\'}\n /var/lib/kolla/config_files/redis_tls_proxy.json:\n command: stunnel /etc/stunnel/stunnel.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/swift_account_auditor.json:\n command: /usr/bin/swift-account-auditor /etc/swift/account-server.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/swift_account_reaper.json:\n command: /usr/bin/swift-account-reaper /etc/swift/account-server.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/swift_account_replicator.json:\n command: /usr/bin/swift-account-replicator /etc/swift/account-server.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/swift_account_server.json:\n command: /usr/bin/swift-account-server /etc/swift/account-server.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/swift_container_auditor.json:\n command: /usr/bin/swift-container-auditor /etc/swift/container-server.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/swift_container_replicator.json:\n command: /usr/bin/swift-container-replicator 
/etc/swift/container-server.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/swift_container_server.json:\n command: /usr/bin/swift-container-server /etc/swift/container-server.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/swift_container_updater.json:\n command: /usr/bin/swift-container-updater /etc/swift/container-server.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/swift_object_auditor.json:\n command: /usr/bin/swift-object-auditor /etc/swift/object-server.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/swift_object_expirer.json:\n command: /usr/bin/swift-object-expirer /etc/swift/object-expirer.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/swift_object_replicator.json:\n command: /usr/bin/swift-object-replicator /etc/swift/object-server.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/swift_object_server.json:\n command: /usr/bin/swift-object-server /etc/swift/object-server.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n permissions:\n - {owner: \'swift:swift\', path: /var/cache/swift, recurse: true}\n /var/lib/kolla/config_files/swift_object_updater.json:\n command: /usr/bin/swift-object-updater /etc/swift/object-server.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/swift_proxy.json:\n command: 
/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/swift_proxy_tls_proxy.json:\n command: /usr/sbin/httpd -DFOREGROUND\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/swift_rsync.json:\n command: /usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n role_data_logging_groups: [root]\n role_data_logging_sources: []\n role_data_merged_config_settings:\n aodh::api::enable_proxy_headers_parsing: true\n aodh::api::gnocchi_external_project_owner: service\n aodh::api::host: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n aodh::api::service_name: httpd\n aodh::auth::auth_password: CzBTgJs3cf3DFGHBpK6umAgMj\n aodh::auth::auth_region: regionOne\n aodh::auth::auth_tenant_name: service\n aodh::auth::auth_url: http://172.17.1.10:5000\n aodh::db::database_connection: mysql+pymysql://aodh:CzBTgJs3cf3DFGHBpK6umAgMj@172.17.1.10/aodh?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n aodh::db::mysql::allowed_hosts: [\'%\', \'%{hiera(\'\'mysql_bind_host\'\')}\']\n aodh::db::mysql::dbname: aodh\n aodh::db::mysql::host: 172.17.1.10\n aodh::db::mysql::password: CzBTgJs3cf3DFGHBpK6umAgMj\n aodh::db::mysql::user: aodh\n aodh::debug: true\n aodh::keystone::auth::admin_url: http://172.17.1.10:8042\n aodh::keystone::auth::internal_url: http://172.17.1.10:8042\n aodh::keystone::auth::password: CzBTgJs3cf3DFGHBpK6umAgMj\n aodh::keystone::auth::public_url: http://10.0.0.106:8042\n aodh::keystone::auth::region: regionOne\n aodh::keystone::auth::tenant: service\n aodh::keystone::authtoken::auth_uri: http://172.17.1.10:5000\n aodh::keystone::authtoken::auth_url: http://172.17.1.10:5000\n 
aodh::keystone::authtoken::password: CzBTgJs3cf3DFGHBpK6umAgMj\n aodh::keystone::authtoken::project_domain_name: Default\n aodh::keystone::authtoken::project_name: service\n aodh::keystone::authtoken::user_domain_name: Default\n aodh::notification_driver: messagingv2\n aodh::policy::policies: {}\n aodh::rabbit_password: weVyVyHzxXn9URCQNmHmUCsYg\n aodh::rabbit_port: 5672\n aodh::rabbit_use_ssl: \'False\'\n aodh::rabbit_userid: guest\n aodh::wsgi::apache::bind_host: internal_api\n aodh::wsgi::apache::servername: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n aodh::wsgi::apache::ssl: false\n aodh::wsgi::apache::wsgi_process_display_name: aodh_wsgi\n aodh_redis_password: jv8TQJ7wGC7M7e6ez2GNPfke7\n apache::default_vhost: false\n apache::ip: internal_api\n apache::mod::prefork::maxclients: 256\n apache::mod::prefork::serverlimit: 256\n apache::mod::remoteip::proxy_ips: [\'%{hiera(\'\'apache_remote_proxy_ips_network\'\')}\']\n apache::server_signature: \'Off\'\n apache::server_tokens: Prod\n apache_remote_proxy_ips_network: internal_api_subnet\n ceilometer::agent::auth::auth_endpoint_type: internalURL\n ceilometer::agent::auth::auth_password: ZUMGXYGsUAsWVRjeZaJfeAv9y\n ceilometer::agent::auth::auth_project_domain_name: Default\n ceilometer::agent::auth::auth_region: regionOne\n ceilometer::agent::auth::auth_tenant_name: service\n ceilometer::agent::auth::auth_url: http://172.17.1.10:5000\n ceilometer::agent::auth::auth_user_domain_name: Default\n ceilometer::agent::notification::event_pipeline_publishers: [\'gnocchi://\', \'panko://\']\n ceilometer::agent::notification::manage_event_pipeline: true\n ceilometer::agent::notification::manage_pipeline: false\n ceilometer::agent::notification::pipeline_publishers: [\'gnocchi://\']\n ceilometer::agent::polling::manage_polling: false\n ceilometer::db::mysql::allowed_hosts: [\'%\', \'%{hiera(\'\'mysql_bind_host\'\')}\']\n ceilometer::db::mysql::dbname: ceilometer\n ceilometer::db::mysql::host: 172.17.1.10\n 
ceilometer::db::mysql::password: ZUMGXYGsUAsWVRjeZaJfeAv9y\n ceilometer::db::mysql::user: ceilometer\n ceilometer::debug: true\n ceilometer::dispatcher::gnocchi::archive_policy: low\n ceilometer::dispatcher::gnocchi::filter_project: service\n ceilometer::dispatcher::gnocchi::resources_definition_file: gnocchi_resources.yaml\n ceilometer::dispatcher::gnocchi::url: http://172.17.1.10:8041\n ceilometer::host: \'%{::fqdn}\'\n ceilometer::keystone::auth::admin_url: http://172.17.1.10:8777\n ceilometer::keystone::auth::configure_endpoint: false\n ceilometer::keystone::auth::internal_url: http://172.17.1.10:8777\n ceilometer::keystone::auth::password: ZUMGXYGsUAsWVRjeZaJfeAv9y\n ceilometer::keystone::auth::public_url: http://10.0.0.106:8777\n ceilometer::keystone::auth::region: regionOne\n ceilometer::keystone::auth::tenant: service\n ceilometer::keystone::authtoken::auth_uri: http://172.17.1.10:5000\n ceilometer::keystone::authtoken::auth_url: http://172.17.1.10:5000\n ceilometer::keystone::authtoken::password: ZUMGXYGsUAsWVRjeZaJfeAv9y\n ceilometer::keystone::authtoken::project_domain_name: Default\n ceilometer::keystone::authtoken::project_name: service\n ceilometer::keystone::authtoken::user_domain_name: Default\n ceilometer::notification_driver: messagingv2\n ceilometer::rabbit_heartbeat_timeout_threshold: 60\n ceilometer::rabbit_password: weVyVyHzxXn9URCQNmHmUCsYg\n ceilometer::rabbit_port: 5672\n ceilometer::rabbit_use_ssl: \'False\'\n ceilometer::rabbit_userid: guest\n ceilometer::snmpd_readonly_user_password: e0e6f3b1f8575fd51ee080d6b2724feef235ed7e\n ceilometer::snmpd_readonly_username: ro_snmp_user\n ceilometer::telemetry_secret: ey9QkWYUbQMUv7hUXn2xzTrvM\n ceilometer_auth_enabled: true\n ceilometer_redis_password: jv8TQJ7wGC7M7e6ez2GNPfke7\n central_namespace: true\n cinder::api::bind_host: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n cinder::api::enable_proxy_headers_parsing: true\n cinder::api::nova_catalog_admin_info: compute:nova:adminURL\n 
cinder::api::nova_catalog_info: compute:nova:internalURL\n cinder::api::service_name: httpd\n cinder::backend_host: hostgroup\n cinder::ceilometer::notification_driver: messagingv2\n cinder::config:\n DEFAULT/swift_catalog_info: {value: \'object-store:swift:internalURL\'}\n cinder::cron::db_purge::age: \'30\'\n cinder::cron::db_purge::destination: /var/log/cinder/cinder-rowsflush.log\n cinder::cron::db_purge::hour: \'0\'\n cinder::cron::db_purge::minute: \'1\'\n cinder::cron::db_purge::month: \'*\'\n cinder::cron::db_purge::monthday: \'*\'\n cinder::cron::db_purge::user: cinder\n cinder::cron::db_purge::weekday: \'*\'\n cinder::database_connection: mysql+pymysql://cinder:jBfyuGFpWc3awtCvQwFuHPFxd@172.17.1.10/cinder?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n cinder::db::database_db_max_retries: -1\n cinder::db::database_max_retries: -1\n cinder::db::mysql::allowed_hosts: [\'%\', \'%{hiera(\'\'mysql_bind_host\'\')}\']\n cinder::db::mysql::dbname: cinder\n cinder::db::mysql::host: 172.17.1.10\n cinder::db::mysql::password: jBfyuGFpWc3awtCvQwFuHPFxd\n cinder::db::mysql::user: cinder\n cinder::debug: true\n cinder::glance::glance_api_servers: http://172.17.1.10:9292\n cinder::keystone::auth::admin_url: http://172.17.1.10:8776/v1/%(tenant_id)s\n cinder::keystone::auth::admin_url_v2: http://172.17.1.10:8776/v2/%(tenant_id)s\n cinder::keystone::auth::admin_url_v3: http://172.17.1.10:8776/v3/%(tenant_id)s\n cinder::keystone::auth::internal_url: http://172.17.1.10:8776/v1/%(tenant_id)s\n cinder::keystone::auth::internal_url_v2: http://172.17.1.10:8776/v2/%(tenant_id)s\n cinder::keystone::auth::internal_url_v3: http://172.17.1.10:8776/v3/%(tenant_id)s\n cinder::keystone::auth::password: jBfyuGFpWc3awtCvQwFuHPFxd\n cinder::keystone::auth::public_url: http://10.0.0.106:8776/v1/%(tenant_id)s\n cinder::keystone::auth::public_url_v2: http://10.0.0.106:8776/v2/%(tenant_id)s\n cinder::keystone::auth::public_url_v3: 
http://10.0.0.106:8776/v3/%(tenant_id)s\n cinder::keystone::auth::region: regionOne\n cinder::keystone::auth::tenant: service\n cinder::keystone::authtoken::auth_uri: http://172.17.1.10:5000\n cinder::keystone::authtoken::auth_url: http://172.17.1.10:5000\n cinder::keystone::authtoken::password: jBfyuGFpWc3awtCvQwFuHPFxd\n cinder::keystone::authtoken::project_domain_name: Default\n cinder::keystone::authtoken::project_name: service\n cinder::keystone::authtoken::user_domain_name: Default\n cinder::policy::policies: {}\n cinder::rabbit_heartbeat_timeout_threshold: 60\n cinder::rabbit_password: weVyVyHzxXn9URCQNmHmUCsYg\n cinder::rabbit_port: 5672\n cinder::rabbit_use_ssl: \'False\'\n cinder::rabbit_userid: guest\n cinder::scheduler::scheduler_driver: cinder.scheduler.filter_scheduler.FilterScheduler\n cinder::volume::enabled: false\n cinder::volume::manage_service: false\n cinder::wsgi::apache::bind_host: internal_api\n cinder::wsgi::apache::servername: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n cinder::wsgi::apache::ssl: false\n cinder::wsgi::apache::workers: \'%{::os_workers}\'\n corosync_ipv6: false\n corosync_token_timeout: 10000\n enable_fencing: false\n enable_galera: true\n enable_load_balancer: true\n enable_panko_expirer: true\n glance::api::authtoken::auth_uri: http://172.17.1.10:5000\n glance::api::authtoken::auth_url: http://172.17.1.10:5000\n glance::api::authtoken::password: xKsvVHmnh7bftvWNCfuHaZNUZ\n glance::api::authtoken::project_name: service\n glance::api::bind_host: internal_api\n glance::api::bind_port: \'9292\'\n glance::api::database_connection: mysql+pymysql://glance:xKsvVHmnh7bftvWNCfuHaZNUZ@172.17.1.10/glance?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n glance::api::debug: true\n glance::api::enable_proxy_headers_parsing: true\n glance::api::enable_v1_api: false\n glance::api::enable_v2_api: true\n glance::api::enabled_import_methods: [web-download]\n glance::api::image_member_quota: 128\n 
glance::api::os_region_name: regionOne\n glance::api::pipeline: keystone\n glance::api::show_image_direct_url: true\n glance::api::show_multiple_locations: false\n glance::api::sync_db: false\n glance::backend::rbd::rbd_store_ceph_conf: /etc/ceph/ceph.conf\n glance::backend::rbd::rbd_store_pool: images\n glance::backend::rbd::rbd_store_user: openstack\n glance::backend::swift::swift_store_auth_address: http://172.17.1.10:5000/v3\n glance::backend::swift::swift_store_auth_version: 3\n glance::backend::swift::swift_store_create_container_on_put: true\n glance::backend::swift::swift_store_key: xKsvVHmnh7bftvWNCfuHaZNUZ\n glance::backend::swift::swift_store_user: service:glance\n glance::db::mysql::allowed_hosts: [\'%\', \'%{hiera(\'\'mysql_bind_host\'\')}\']\n glance::db::mysql::dbname: glance\n glance::db::mysql::host: 172.17.1.10\n glance::db::mysql::password: xKsvVHmnh7bftvWNCfuHaZNUZ\n glance::db::mysql::user: glance\n glance::keystone::auth::admin_url: http://172.17.1.10:9292\n glance::keystone::auth::internal_url: http://172.17.1.10:9292\n glance::keystone::auth::password: xKsvVHmnh7bftvWNCfuHaZNUZ\n glance::keystone::auth::public_url: http://10.0.0.106:9292\n glance::keystone::auth::region: regionOne\n glance::keystone::auth::tenant: service\n glance::keystone::authtoken::project_domain_name: Default\n glance::keystone::authtoken::user_domain_name: Default\n glance::notify::rabbitmq::notification_driver: messagingv2\n glance::notify::rabbitmq::rabbit_password: weVyVyHzxXn9URCQNmHmUCsYg\n glance::notify::rabbitmq::rabbit_port: 5672\n glance::notify::rabbitmq::rabbit_use_ssl: \'False\'\n glance::notify::rabbitmq::rabbit_userid: guest\n glance::policy::policies: {}\n glance_backend: swift\n glance_log_file: \'\'\n glance_notifier_strategy: noop\n gnocchi::api::enable_proxy_headers_parsing: true\n gnocchi::api::enabled: true\n gnocchi::api::service_name: httpd\n gnocchi::db::database_connection: 
mysql+pymysql://gnocchi:EMxec4K6kuZGjZmkMwu8ZMgzM@172.17.1.10/gnocchi?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n gnocchi::db::mysql::allowed_hosts: [\'%\', \'%{hiera(\'\'mysql_bind_host\'\')}\']\n gnocchi::db::mysql::dbname: gnocchi\n gnocchi::db::mysql::host: 172.17.1.10\n gnocchi::db::mysql::password: EMxec4K6kuZGjZmkMwu8ZMgzM\n gnocchi::db::mysql::user: gnocchi\n gnocchi::db::sync::extra_opts: \' --sacks-number 128\'\n gnocchi::debug: true\n gnocchi::keystone::auth::admin_url: http://172.17.1.10:8041\n gnocchi::keystone::auth::internal_url: http://172.17.1.10:8041\n gnocchi::keystone::auth::password: EMxec4K6kuZGjZmkMwu8ZMgzM\n gnocchi::keystone::auth::public_url: http://10.0.0.106:8041\n gnocchi::keystone::auth::region: regionOne\n gnocchi::keystone::auth::tenant: service\n gnocchi::keystone::authtoken::auth_uri: http://172.17.1.10:5000\n gnocchi::keystone::authtoken::auth_url: http://172.17.1.10:5000\n gnocchi::keystone::authtoken::password: EMxec4K6kuZGjZmkMwu8ZMgzM\n gnocchi::keystone::authtoken::project_domain_name: Default\n gnocchi::keystone::authtoken::project_name: service\n gnocchi::keystone::authtoken::user_domain_name: Default\n gnocchi::metricd::metric_processing_delay: 30\n gnocchi::metricd::workers: \'%{::os_workers}\'\n gnocchi::policy::policies: {}\n gnocchi::statsd::archive_policy_name: low\n gnocchi::statsd::flush_delay: 10\n gnocchi::statsd::project_id: 6c38cd8d-099a-4cb2-aecf-17be688e8616\n gnocchi::statsd::resource_id: 0a8b55df-f90f-491c-8cb9-7cdecec6fc26\n gnocchi::statsd::user_id: 27c0d3f8-e7ee-42f0-8317-72237d1c5ae3\n gnocchi::storage::ceph::ceph_conffile: /etc/ceph/ceph.conf\n gnocchi::storage::ceph::ceph_keyring: /etc/ceph/ceph.client.openstack.keyring\n gnocchi::storage::ceph::ceph_pool: metrics\n gnocchi::storage::ceph::ceph_username: openstack\n gnocchi::storage::s3::s3_access_key_id: \'\'\n gnocchi::storage::s3::s3_endpoint_url: \'\'\n gnocchi::storage::s3::s3_region_name: \'\'\n 
gnocchi::storage::s3::s3_secret_access_key: \'\'\n gnocchi::storage::swift::swift_auth_version: 3\n gnocchi::storage::swift::swift_authurl: http://172.17.1.10:5000/v3\n gnocchi::storage::swift::swift_endpoint_type: internalURL\n gnocchi::storage::swift::swift_key: EMxec4K6kuZGjZmkMwu8ZMgzM\n gnocchi::storage::swift::swift_user: service:gnocchi\n gnocchi::wsgi::apache::bind_host: internal_api\n gnocchi::wsgi::apache::servername: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n gnocchi::wsgi::apache::ssl: false\n gnocchi::wsgi::apache::wsgi_process_display_name: gnocchi_wsgi\n gnocchi_redis_password: jv8TQJ7wGC7M7e6ez2GNPfke7\n hacluster_pwd: EGYhJtaGVMtRm42X\n haproxy_docker: true\n heat::api::bind_host: internal_api\n heat::api::service_name: httpd\n heat::api_cfn::bind_host: internal_api\n heat::api_cfn::service_name: httpd\n heat::cron::purge_deleted::age: \'30\'\n heat::cron::purge_deleted::age_type: days\n heat::cron::purge_deleted::destination: /dev/null\n heat::cron::purge_deleted::ensure: present\n heat::cron::purge_deleted::hour: \'0\'\n heat::cron::purge_deleted::maxdelay: \'3600\'\n heat::cron::purge_deleted::minute: \'1\'\n heat::cron::purge_deleted::month: \'*\'\n heat::cron::purge_deleted::monthday: \'*\'\n heat::cron::purge_deleted::user: heat\n heat::cron::purge_deleted::weekday: \'*\'\n heat::database_connection: mysql+pymysql://heat:PX3UgYPjTePXuhGMjM9vZV4Jq@172.17.1.10/heat?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n heat::db::database_db_max_retries: -1\n heat::db::database_max_retries: -1\n heat::db::mysql::allowed_hosts: [\'%\', \'%{hiera(\'\'mysql_bind_host\'\')}\']\n heat::db::mysql::dbname: heat\n heat::db::mysql::host: 172.17.1.10\n heat::db::mysql::password: PX3UgYPjTePXuhGMjM9vZV4Jq\n heat::db::mysql::user: heat\n heat::debug: true\n heat::enable_proxy_headers_parsing: true\n heat::engine::auth_encryption_key: bfrkgRaAnCj6HfbXuNwQXhCKy6drEYJ6\n heat::engine::configure_delegated_roles: false\n 
heat::engine::convergence_engine: true\n heat::engine::heat_metadata_server_url: http://10.0.0.106:8000\n heat::engine::heat_waitcondition_server_url: http://10.0.0.106:8000/v1/waitcondition\n heat::engine::max_nested_stack_depth: 6\n heat::engine::max_resources_per_stack: 1000\n heat::engine::plugin_dirs: []\n heat::engine::trusts_delegated_roles: []\n heat::heat_keystone_clients_url: http://10.0.0.106:5000\n heat::keystone::auth::admin_url: http://172.17.1.10:8004/v1/%(tenant_id)s\n heat::keystone::auth::internal_url: http://172.17.1.10:8004/v1/%(tenant_id)s\n heat::keystone::auth::password: PX3UgYPjTePXuhGMjM9vZV4Jq\n heat::keystone::auth::public_url: http://10.0.0.106:8004/v1/%(tenant_id)s\n heat::keystone::auth::region: regionOne\n heat::keystone::auth::tenant: service\n heat::keystone::auth_cfn::admin_url: http://172.17.1.10:8000/v1\n heat::keystone::auth_cfn::internal_url: http://172.17.1.10:8000/v1\n heat::keystone::auth_cfn::password: PX3UgYPjTePXuhGMjM9vZV4Jq\n heat::keystone::auth_cfn::public_url: http://10.0.0.106:8000/v1\n heat::keystone::auth_cfn::region: regionOne\n heat::keystone::auth_cfn::tenant: service\n heat::keystone::authtoken::auth_uri: http://172.17.1.10:5000\n heat::keystone::authtoken::auth_url: http://172.17.1.10:5000\n heat::keystone::authtoken::password: PX3UgYPjTePXuhGMjM9vZV4Jq\n heat::keystone::authtoken::project_domain_name: Default\n heat::keystone::authtoken::project_name: service\n heat::keystone::authtoken::user_domain_name: Default\n heat::keystone::domain::domain_admin: heat_stack_domain_admin\n heat::keystone::domain::domain_admin_email: heat_stack_domain_admin@localhost\n heat::keystone::domain::domain_name: heat_stack\n heat::keystone::domain::domain_password: 9wgDeEYVcvATDqUWh2zFgNqfr\n heat::keystone_ec2_uri: http://172.17.1.10:5000/v3/ec2tokens\n heat::max_json_body_size: 4194304\n heat::notification_driver: messagingv2\n heat::policy::policies: {}\n heat::rabbit_heartbeat_timeout_threshold: 60\n heat::rabbit_password: 
weVyVyHzxXn9URCQNmHmUCsYg\n heat::rabbit_port: 5672\n heat::rabbit_use_ssl: \'False\'\n heat::rabbit_userid: guest\n heat::rpc_response_timeout: 600\n heat::wsgi::apache_api::bind_host: internal_api\n heat::wsgi::apache_api::servername: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n heat::wsgi::apache_api::ssl: false\n heat::wsgi::apache_api_cfn::bind_host: internal_api\n heat::wsgi::apache_api_cfn::servername: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n heat::wsgi::apache_api_cfn::ssl: false\n heat::yaql_limit_iterators: 1000\n heat::yaql_memory_quota: 100000\n horizon::allowed_hosts: [\'*\']\n horizon::bind_address: internal_api\n horizon::cache_backend: django.core.cache.backends.memcached.MemcachedCache\n horizon::customization_module: \'\'\n horizon::disable_password_reveal: true\n horizon::disallow_iframe_embed: true\n horizon::django_debug: true\n horizon::django_session_engine: django.contrib.sessions.backends.cache\n horizon::enable_secure_proxy_ssl_header: true\n horizon::enforce_password_check: true\n horizon::horizon_ca: /etc/ipa/ca.crt\n horizon::keystone_url: http://172.17.1.10:5000\n horizon::listen_ssl: false\n horizon::password_validator: \'\'\n horizon::password_validator_help: \'\'\n horizon::secret_key: baVGHUaJBz\n horizon::secure_cookies: false\n horizon::servername: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n horizon::vhost_extra_params:\n access_log_format: \'%a %l %u %t \\"%r\\" %>s %b \\"%%{}{Referer}i\\" \\"%%{}{User-Agent}i\\"\'\n add_listen: true\n options: [FollowSymLinks, MultiViews]\n priority: 10\n kernel_modules:\n nf_conntrack: {}\n nf_conntrack_proto_sctp: {}\n keystone::admin_bind_host: \'%{hiera(\'\'fqdn_ctlplane\'\')}\'\n keystone::admin_password: XxK3Mh947xh2TVyaJJWb7myna\n keystone::admin_port: \'35357\'\n keystone::admin_token: 9kk8pyHvmhGqnvzwq2mFXv6dc\n keystone::config::keystone_config:\n ec2/driver: {value: keystone.contrib.ec2.backends.sql.Ec2}\n keystone::credential_keys:\n /etc/keystone/credential-keys/0: {content: 
NPokwa2QcGznUuI_j2TUX42gVSpiXXSF_Yn7VwRO_UM=}\n /etc/keystone/credential-keys/1: {content: aaFEmpncIiMFDqgERZW2kseCkvBDTMtbVz_pMwn2V20=}\n keystone::cron::token_flush::destination: /var/log/keystone/keystone-tokenflush.log\n keystone::cron::token_flush::ensure: present\n keystone::cron::token_flush::hour: [\'*\']\n keystone::cron::token_flush::maxdelay: 0\n keystone::cron::token_flush::minute: [\'1\']\n keystone::cron::token_flush::month: [\'*\']\n keystone::cron::token_flush::monthday: [\'*\']\n keystone::cron::token_flush::user: keystone\n keystone::cron::token_flush::weekday: [\'*\']\n keystone::database_connection: mysql+pymysql://keystone:9kk8pyHvmhGqnvzwq2mFXv6dc@172.17.1.10/keystone?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n keystone::db::database_db_max_retries: -1\n keystone::db::database_max_retries: -1\n keystone::db::mysql::allowed_hosts: [\'%\', \'%{hiera(\'\'mysql_bind_host\'\')}\']\n keystone::db::mysql::dbname: keystone\n keystone::db::mysql::host: 172.17.1.10\n keystone::db::mysql::password: 9kk8pyHvmhGqnvzwq2mFXv6dc\n keystone::db::mysql::user: keystone\n keystone::debug: true\n keystone::enable_credential_setup: true\n keystone::enable_fernet_setup: true\n keystone::enable_proxy_headers_parsing: true\n keystone::enable_ssl: false\n keystone::endpoint::admin_url: http://192.168.24.10:35357\n keystone::endpoint::internal_url: http://172.17.1.10:5000\n keystone::endpoint::public_url: http://10.0.0.106:5000\n keystone::endpoint::region: regionOne\n keystone::endpoint::version: \'\'\n keystone::fernet_keys:\n /etc/keystone/fernet-keys/0: {content: pvfG6wdm6OG7qoLMEFYllsxpwsUpL-wrpzmOuzzfnEM=}\n /etc/keystone/fernet-keys/1: {content: yeoGbzomaV_Y6WdIYPbJJt-4g91xeD-q3XFO3fupMtE=}\n keystone::fernet_max_active_keys: 5\n keystone::fernet_replace_keys: true\n keystone::notification_driver: messagingv2\n keystone::notification_format: basic\n keystone::policy::policies: {}\n keystone::public_bind_host: 
\'%{hiera(\'\'fqdn_internal_api\'\')}\'\n keystone::rabbit_heartbeat_timeout_threshold: 60\n keystone::rabbit_password: weVyVyHzxXn9URCQNmHmUCsYg\n keystone::rabbit_port: 5672\n keystone::rabbit_use_ssl: \'False\'\n keystone::rabbit_userid: guest\n keystone::roles::admin::admin_tenant: admin\n keystone::roles::admin::email: admin@example.com\n keystone::roles::admin::password: XxK3Mh947xh2TVyaJJWb7myna\n keystone::roles::admin::service_tenant: service\n keystone::service_name: httpd\n keystone::token_provider: fernet\n keystone::wsgi::apache::admin_bind_host: ctlplane\n keystone::wsgi::apache::admin_port: \'35357\'\n keystone::wsgi::apache::bind_host: internal_api\n keystone::wsgi::apache::servername: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n keystone::wsgi::apache::servername_admin: \'%{hiera(\'\'fqdn_ctlplane\'\')}\'\n keystone::wsgi::apache::ssl: false\n keystone::wsgi::apache::threads: 1\n keystone::wsgi::apache::workers: \'%{::os_workers}\'\n keystone_enable_db_purge: true\n keystone_enable_member: true\n keystone_ssl_certificate: \'\'\n keystone_ssl_certificate_key: \'\'\n memcached::listen_ip: internal_api\n memcached::max_memory: 50%\n memcached::udp_port: 0\n memcached::verbosity: v\n memcached_ipv6: false\n memcached_network: internal_api_subnet\n mysql::server::manage_config_file: true\n mysql::server::package_name: mariadb-galera-server\n mysql::server::root_password: 7xm4XA2YHK\n mysql_bind_host: internal_api\n mysql_clustercheck_password: Y842JReAdAaXZwRHfsjTtdqgg\n mysql_ipv6: false\n mysql_max_connections: 4096\n neutron::agents::dhcp::debug: true\n neutron::agents::dhcp::dnsmasq_dns_servers: []\n neutron::agents::dhcp::enable_force_metadata: true\n neutron::agents::dhcp::enable_isolated_metadata: false\n neutron::agents::dhcp::enable_metadata_network: false\n neutron::agents::dhcp::interface_driver: neutron.agent.linux.interface.OVSInterfaceDriver\n neutron::agents::metadata::auth_password: anbEgsRDNBffKrcVkyZd2wPYr\n 
neutron::agents::metadata::auth_tenant: service\n neutron::agents::metadata::auth_url: http://172.17.1.10:5000\n neutron::agents::metadata::debug: true\n neutron::agents::metadata::metadata_host: \'%{hiera(\'\'cloud_name_internal_api\'\')}\'\n neutron::agents::metadata::metadata_ip: \'%{hiera(\'\'nova_metadata_vip\'\')}\'\n neutron::agents::metadata::metadata_protocol: http\n neutron::agents::metadata::shared_secret: 3BMbzPEunTfkgG4nPEG4ZKUy8\n neutron::agents::ml2::ovs::local_ip: tenant\n neutron::allow_overlapping_ips: true\n neutron::bind_host: internal_api\n neutron::core_plugin: ml2\n neutron::db::database_db_max_retries: -1\n neutron::db::database_max_retries: -1\n neutron::db::mysql::allowed_hosts: [\'%\', \'%{hiera(\'\'mysql_bind_host\'\')}\']\n neutron::db::mysql::dbname: ovs_neutron\n neutron::db::mysql::host: 172.17.1.10\n neutron::db::mysql::password: anbEgsRDNBffKrcVkyZd2wPYr\n neutron::db::mysql::user: neutron\n neutron::db::sync::db_sync_timeout: 300\n neutron::db::sync::extra_params: \'\'\n neutron::debug: true\n neutron::dhcp_agent_notification: true\n neutron::dns_domain: openstacklocal\n neutron::global_physnet_mtu: 1500\n neutron::host: \'%{::fqdn}\'\n neutron::keystone::auth::admin_url: http://172.17.1.10:9696\n neutron::keystone::auth::internal_url: http://172.17.1.10:9696\n neutron::keystone::auth::password: anbEgsRDNBffKrcVkyZd2wPYr\n neutron::keystone::auth::public_url: http://10.0.0.106:9696\n neutron::keystone::auth::region: regionOne\n neutron::keystone::auth::tenant: service\n neutron::keystone::authtoken::auth_uri: http://172.17.1.10:5000\n neutron::keystone::authtoken::auth_url: http://172.17.1.10:5000\n neutron::keystone::authtoken::password: anbEgsRDNBffKrcVkyZd2wPYr\n neutron::keystone::authtoken::project_domain_name: Default\n neutron::keystone::authtoken::project_name: service\n neutron::keystone::authtoken::user_domain_name: Default\n neutron::notification_driver: messagingv2\n neutron::plugins::ml2::extension_drivers: 
[port_security]\n neutron::plugins::ml2::firewall_driver: iptables_hybrid\n neutron::plugins::ml2::flat_networks: [datacentre]\n neutron::plugins::ml2::mechanism_drivers: [opendaylight_v2]\n neutron::plugins::ml2::network_vlan_ranges: [\'datacentre:1:1000\']\n neutron::plugins::ml2::opendaylight::port_binding_controller: pseudo-agentdb-binding\n neutron::plugins::ml2::overlay_ip_version: 4\n neutron::plugins::ml2::tenant_network_types: [vxlan]\n neutron::plugins::ml2::tunnel_id_ranges: [\'1:4094\']\n neutron::plugins::ml2::type_drivers: [vxlan, vlan, flat, gre]\n neutron::plugins::ml2::vni_ranges: [\'1:4094\']\n neutron::plugins::ovs::opendaylight::allowed_network_types: [local, flat, vlan,\n vxlan, gre]\n neutron::plugins::ovs::opendaylight::enable_dpdk: false\n neutron::plugins::ovs::opendaylight::enable_hw_offload: false\n neutron::plugins::ovs::opendaylight::odl_password: redhat\n neutron::plugins::ovs::opendaylight::odl_username: odladmin\n neutron::plugins::ovs::opendaylight::provider_mappings: [\'datacentre:br-ex\']\n neutron::plugins::ovs::opendaylight::vhostuser_mode: server\n neutron::plugins::ovs::opendaylight::vhostuser_socket_dir: /var/lib/vhost_sockets\n neutron::policy::policies: {}\n neutron::purge_config: false\n neutron::rabbit_heartbeat_timeout_threshold: 60\n neutron::rabbit_password: weVyVyHzxXn9URCQNmHmUCsYg\n neutron::rabbit_port: 5672\n neutron::rabbit_use_ssl: \'False\'\n neutron::rabbit_user: guest\n neutron::server::allow_automatic_l3agent_failover: \'True\'\n neutron::server::database_connection: mysql+pymysql://neutron:anbEgsRDNBffKrcVkyZd2wPYr@172.17.1.10/ovs_neutron?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n neutron::server::enable_dvr: false\n neutron::server::enable_proxy_headers_parsing: true\n neutron::server::notifications::auth_url: http://172.17.1.10:5000\n neutron::server::notifications::endpoint_type: internal\n neutron::server::notifications::password: 6BkAm2KjhQcBQjbmFfcxWwJUq\n 
neutron::server::notifications::project_name: service\n neutron::server::notifications::tenant_name: service\n neutron::server::router_distributed: false\n neutron::server::sync_db: true\n neutron::service_plugins: [odl-router_v2, trunk]\n nova::api::api_bind_address: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n nova::api::default_floating_pool: public\n nova::api::enable_proxy_headers_parsing: true\n nova::api::enabled: true\n nova::api::instance_name_template: instance-%08x\n nova::api::metadata_listen: internal_api\n nova::api::neutron_metadata_proxy_shared_secret: 3BMbzPEunTfkgG4nPEG4ZKUy8\n nova::api::service_name: httpd\n nova::api::sync_db_api: true\n nova::api_database_connection: mysql+pymysql://nova_api:6BkAm2KjhQcBQjbmFfcxWwJUq@172.17.1.10/nova_api?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n nova::cell0_database_connection: mysql+pymysql://nova:6BkAm2KjhQcBQjbmFfcxWwJUq@172.17.1.10/nova_cell0?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n nova::cinder_catalog_info: volumev3:cinderv3:internalURL\n nova::cron::archive_deleted_rows::destination: /var/log/nova/nova-rowsflush.log\n nova::cron::archive_deleted_rows::hour: \'0\'\n nova::cron::archive_deleted_rows::max_rows: \'100\'\n nova::cron::archive_deleted_rows::minute: \'1\'\n nova::cron::archive_deleted_rows::month: \'*\'\n nova::cron::archive_deleted_rows::monthday: \'*\'\n nova::cron::archive_deleted_rows::until_complete: false\n nova::cron::archive_deleted_rows::user: nova\n nova::cron::archive_deleted_rows::weekday: \'*\'\n nova::database_connection: mysql+pymysql://nova:6BkAm2KjhQcBQjbmFfcxWwJUq@172.17.1.10/nova?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n nova::db::database_db_max_retries: -1\n nova::db::database_max_retries: -1\n nova::db::mysql::allowed_hosts: [\'%\', \'%{hiera(\'\'mysql_bind_host\'\')}\']\n nova::db::mysql::dbname: nova\n nova::db::mysql::host: 172.17.1.10\n nova::db::mysql::password: 
6BkAm2KjhQcBQjbmFfcxWwJUq\n nova::db::mysql::user: nova\n nova::db::mysql_api::allowed_hosts: [\'%\', \'%{hiera(\'\'mysql_bind_host\'\')}\']\n nova::db::mysql_api::dbname: nova_api\n nova::db::mysql_api::host: 172.17.1.10\n nova::db::mysql_api::password: 6BkAm2KjhQcBQjbmFfcxWwJUq\n nova::db::mysql_api::setup_cell0: true\n nova::db::mysql_api::user: nova_api\n nova::db::mysql_placement::allowed_hosts: [\'%\', \'%{hiera(\'\'mysql_bind_host\'\')}\']\n nova::db::mysql_placement::dbname: nova_placement\n nova::db::mysql_placement::host: 172.17.1.10\n nova::db::mysql_placement::password: 6BkAm2KjhQcBQjbmFfcxWwJUq\n nova::db::mysql_placement::user: nova_placement\n nova::db::sync::db_sync_timeout: 300\n nova::db::sync_api::db_sync_timeout: 300\n nova::debug: true\n nova::glance_api_servers: http://172.17.1.10:9292\n nova::host: \'%{::fqdn}\'\n nova::keystone::auth::admin_url: http://172.17.1.10:8774/v2.1\n nova::keystone::auth::internal_url: http://172.17.1.10:8774/v2.1\n nova::keystone::auth::password: 6BkAm2KjhQcBQjbmFfcxWwJUq\n nova::keystone::auth::public_url: http://10.0.0.106:8774/v2.1\n nova::keystone::auth::region: regionOne\n nova::keystone::auth::tenant: service\n nova::keystone::auth_placement::admin_url: http://172.17.1.10:8778/placement\n nova::keystone::auth_placement::internal_url: http://172.17.1.10:8778/placement\n nova::keystone::auth_placement::password: 6BkAm2KjhQcBQjbmFfcxWwJUq\n nova::keystone::auth_placement::public_url: http://10.0.0.106:8778/placement\n nova::keystone::auth_placement::region: regionOne\n nova::keystone::auth_placement::tenant: service\n nova::keystone::authtoken::auth_uri: http://172.17.1.10:5000\n nova::keystone::authtoken::auth_url: http://192.168.24.10:35357\n nova::keystone::authtoken::password: 6BkAm2KjhQcBQjbmFfcxWwJUq\n nova::keystone::authtoken::project_domain_name: Default\n nova::keystone::authtoken::project_name: service\n nova::keystone::authtoken::user_domain_name: Default\n nova::my_ip: internal_api\n 
nova::network::neutron::dhcp_domain: \'\'\n nova::network::neutron::neutron_auth_type: v3password\n nova::network::neutron::neutron_auth_url: http://192.168.24.10:35357/v3\n nova::network::neutron::neutron_ovs_bridge: br-int\n nova::network::neutron::neutron_password: anbEgsRDNBffKrcVkyZd2wPYr\n nova::network::neutron::neutron_project_name: service\n nova::network::neutron::neutron_region_name: regionOne\n nova::network::neutron::neutron_url: http://172.17.1.10:9696\n nova::network::neutron::neutron_username: neutron\n nova::notification_driver: messagingv2\n nova::notification_format: unversioned\n nova::notify_on_state_change: vm_and_task_state\n nova::placement::auth_url: http://172.17.1.10:5000\n nova::placement::os_interface: internal\n nova::placement::os_region_name: regionOne\n nova::placement::password: 6BkAm2KjhQcBQjbmFfcxWwJUq\n nova::placement::project_name: service\n nova::placement_database_connection: mysql+pymysql://nova_placement:6BkAm2KjhQcBQjbmFfcxWwJUq@172.17.1.10/nova_placement?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n nova::policy::policies: {}\n nova::purge_config: false\n nova::rabbit_heartbeat_timeout_threshold: 60\n nova::rabbit_password: weVyVyHzxXn9URCQNmHmUCsYg\n nova::rabbit_port: 5672\n nova::rabbit_use_ssl: \'False\'\n nova::rabbit_userid: guest\n nova::ram_allocation_ratio: \'1.0\'\n nova::scheduler::discover_hosts_in_cells_interval: -1\n nova::scheduler::filter::scheduler_available_filters: []\n nova::scheduler::filter::scheduler_default_filters: []\n nova::scheduler::filter::scheduler_max_attempts: 3\n nova::use_ipv6: false\n nova::vncproxy::common::vncproxy_host: 10.0.0.106\n nova::vncproxy::common::vncproxy_port: \'6080\'\n nova::vncproxy::common::vncproxy_protocol: http\n nova::vncproxy::enabled: true\n nova::vncproxy::host: internal_api\n nova::wsgi::apache_api::bind_host: internal_api\n nova::wsgi::apache_api::servername: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n nova::wsgi::apache_api::ssl: 
false\n nova::wsgi::apache_placement::api_port: \'8778\'\n nova::wsgi::apache_placement::bind_host: internal_api\n nova::wsgi::apache_placement::servername: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n nova::wsgi::apache_placement::ssl: false\n nova_enable_db_purge: true\n nova_wsgi_enabled: true\n ntp::iburst_enable: true\n \'ntp::maxpoll:\': 10\n \'ntp::minpoll:\': 6\n ntp::servers: [clock.redhat.com]\n opendaylight::extra_features: [odl-mdsal-trace, odl-netvirt-openstack, odl-jolokia]\n opendaylight::log_levels: {org.opendaylight.genius: DEBUG, org.opendaylight.netvirt: DEBUG}\n opendaylight::log_max_rollover: 50\n opendaylight::log_mechanism: console\n opendaylight::manage_repositories: false\n opendaylight::odl_bind_ip: internal_api\n opendaylight::odl_rest_port: \'8081\'\n opendaylight::password: redhat\n opendaylight::snat_mechanism: conntrack\n opendaylight::username: odladmin\n opendaylight_check_url: restconf/operational/network-topology:network-topology/topology/netvirt:1\n pacemaker::corosync::cluster_name: tripleo_cluster\n pacemaker::corosync::manage_fw: false\n pacemaker::corosync::settle_tries: 360\n pacemaker::resource_defaults::defaults:\n resource-stickiness: {value: INFINITY}\n panko::api::enable_proxy_headers_parsing: true\n panko::api::event_time_to_live: \'86400\'\n panko::api::host: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n panko::api::service_name: httpd\n panko::auth::auth_password: rxzxqxVRqj9egU8HnnR44EDNu\n panko::auth::auth_region: regionOne\n panko::auth::auth_tenant_name: service\n panko::auth::auth_url: http://172.17.1.10:5000\n panko::db::database_connection: mysql+pymysql://panko:rxzxqxVRqj9egU8HnnR44EDNu@172.17.1.10/panko?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n panko::db::mysql::allowed_hosts: [\'%\', \'%{hiera(\'\'mysql_bind_host\'\')}\']\n panko::db::mysql::dbname: panko\n panko::db::mysql::host: 172.17.1.10\n panko::db::mysql::password: rxzxqxVRqj9egU8HnnR44EDNu\n panko::db::mysql::user: panko\n 
panko::debug: true\n panko::expirer::hour: \'0\'\n panko::expirer::minute: \'1\'\n panko::expirer::month: \'*\'\n panko::expirer::monthday: \'*\'\n panko::expirer::weekday: \'*\'\n panko::keystone::auth::admin_url: http://172.17.1.10:8977\n panko::keystone::auth::internal_url: http://172.17.1.10:8977\n panko::keystone::auth::password: rxzxqxVRqj9egU8HnnR44EDNu\n panko::keystone::auth::public_url: http://10.0.0.106:8977\n panko::keystone::auth::region: regionOne\n panko::keystone::auth::tenant: service\n panko::keystone::authtoken::auth_uri: http://172.17.1.10:5000\n panko::keystone::authtoken::auth_url: http://172.17.1.10:5000\n panko::keystone::authtoken::password: rxzxqxVRqj9egU8HnnR44EDNu\n panko::keystone::authtoken::project_domain_name: Default\n panko::keystone::authtoken::project_name: service\n panko::keystone::authtoken::user_domain_name: Default\n panko::policy::policies: {}\n panko::wsgi::apache::bind_host: internal_api\n panko::wsgi::apache::servername: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n panko::wsgi::apache::ssl: false\n rabbit_ipv6: false\n rabbitmq::default_pass: weVyVyHzxXn9URCQNmHmUCsYg\n rabbitmq::default_user: guest\n rabbitmq::delete_guest_user: false\n rabbitmq::erlang_cookie: wMGzfECCXTCuVVgpTMBH\n rabbitmq::file_limit: 65536\n rabbitmq::interface: internal_api\n rabbitmq::nr_ha_queues: -1\n rabbitmq::package_provider: yum\n rabbitmq::package_source: undef\n rabbitmq::port: 5672\n rabbitmq::repos_ensure: false\n rabbitmq::service_manage: false\n rabbitmq::ssl: false\n rabbitmq::ssl_depth: 1\n rabbitmq::ssl_erl_dist: false\n rabbitmq::ssl_interface: internal_api\n rabbitmq::ssl_only: false\n rabbitmq::ssl_port: 5672\n rabbitmq::tcp_keepalive: true\n rabbitmq::wipe_db_on_cookie_change: true\n rabbitmq_config_variables: {cluster_partition_handling: ignore, loopback_users: \'[]\',\n queue_master_locator: <<"min-masters">>}\n rabbitmq_environment: {NODE_IP_ADDRESS: \'\', NODE_PORT: \'\', RABBITMQ_NODENAME: \'rabbit@%{::hostname}\',\n 
RABBITMQ_SERVER_ERL_ARGS: \'"+K true +P 1048576 -kernel inet_default_connect_options\n [{nodelay,true}]"\', export ERL_EPMD_ADDRESS: \'%{hiera(\'\'rabbitmq::interface\'\')}\'}\n rabbitmq_kernel_variables: {inet_dist_listen_max: \'25672\', inet_dist_listen_min: \'25672\',\n net_ticktime: 15}\n redis::bind: internal_api\n redis::managed_by_cluster_manager: true\n redis::masterauth: jv8TQJ7wGC7M7e6ez2GNPfke7\n redis::notify_service: false\n redis::port: 6379\n redis::requirepass: jv8TQJ7wGC7M7e6ez2GNPfke7\n redis::sentinel::master_name: \'%{hiera(\'\'bootstrap_nodeid\'\')}\'\n redis::sentinel::notification_script: /usr/local/bin/redis-notifications.sh\n redis::sentinel::redis_host: \'%{hiera(\'\'bootstrap_nodeid_ip\'\')}\'\n redis::sentinel::sentinel_bind: internal_api\n redis::sentinel_auth_pass: jv8TQJ7wGC7M7e6ez2GNPfke7\n redis::service_manage: false\n redis::ulimit: \'10240\'\n redis_ipv6: false\n snmp::agentaddress: [\'udp:161\', \'udp6:[::1]:161\']\n snmp::snmpd_options: -LS0-5d\n snmpd_network: internal_api_subnet\n swift::keystone::auth::admin_url: http://172.17.3.10:8080\n swift::keystone::auth::admin_url_s3: http://172.17.3.10:8080\n swift::keystone::auth::configure_s3_endpoint: false\n swift::keystone::auth::internal_url: http://172.17.3.10:8080/v1/AUTH_%(tenant_id)s\n swift::keystone::auth::internal_url_s3: http://172.17.3.10:8080\n swift::keystone::auth::operator_roles: [admin, swiftoperator, ResellerAdmin]\n swift::keystone::auth::password: 2Q6kxeNrvczRgVewcjWhEwnaJ\n swift::keystone::auth::public_url: http://10.0.0.106:8080/v1/AUTH_%(tenant_id)s\n swift::keystone::auth::public_url_s3: http://10.0.0.106:8080\n swift::keystone::auth::region: regionOne\n swift::keystone::auth::tenant: service\n swift::proxy::account_autocreate: true\n swift::proxy::authtoken::auth_uri: http://172.17.1.10:5000\n swift::proxy::authtoken::auth_url: http://172.17.1.10:5000\n swift::proxy::authtoken::password: 2Q6kxeNrvczRgVewcjWhEwnaJ\n swift::proxy::authtoken::project_name: 
service\n swift::proxy::keystone::operator_roles: [admin, swiftoperator, ResellerAdmin]\n swift::proxy::node_timeout: 60\n swift::proxy::pipeline: [catch_errors, healthcheck, proxy-logging, cache, ratelimit,\n bulk, tempurl, formpost, authtoken, keystone, staticweb, copy, container_quotas,\n account_quotas, slo, dlo, versioned_writes, proxy-logging, proxy-server]\n swift::proxy::port: \'8080\'\n swift::proxy::proxy_local_net_ip: storage\n swift::proxy::staticweb::url_base: http://10.0.0.106:8080\n swift::proxy::versioned_writes::allow_versioned_writes: true\n swift::proxy::workers: auto\n swift::storage::all::account_pipeline: [healthcheck, account-server]\n swift::storage::all::account_server_workers: auto\n swift::storage::all::container_pipeline: [healthcheck, container-server]\n swift::storage::all::container_server_workers: auto\n swift::storage::all::incoming_chmod: Du=rwx,g=rx,o=rx,Fu=rw,g=r,o=r\n swift::storage::all::mount_check: false\n swift::storage::all::object_pipeline: [healthcheck, recon, object-server]\n swift::storage::all::object_server_workers: auto\n swift::storage::all::outgoing_chmod: Du=rwx,g=rx,o=rx,Fu=rw,g=r,o=r\n swift::storage::all::storage_local_net_ip: storage_mgmt\n swift::storage::disks::args: {}\n swift::swift_hash_path_suffix: fyaC6RwBa3bC93pAgcmRf3CXd\n sysctl_settings:\n fs.inotify.max_user_instances: {value: 1024}\n fs.suid_dumpable: {value: 0}\n kernel.dmesg_restrict: {value: 1}\n kernel.pid_max: {value: 1048576}\n net.core.netdev_max_backlog: {value: 10000}\n net.ipv4.conf.all.arp_accept: {value: 1}\n net.ipv4.conf.all.log_martians: {value: 1}\n net.ipv4.conf.all.secure_redirects: {value: 0}\n net.ipv4.conf.all.send_redirects: {value: 0}\n net.ipv4.conf.default.accept_redirects: {value: 0}\n net.ipv4.conf.default.log_martians: {value: 1}\n net.ipv4.conf.default.secure_redirects: {value: 0}\n net.ipv4.conf.default.send_redirects: {value: 0}\n net.ipv4.ip_forward: {value: 1}\n net.ipv4.neigh.default.gc_thresh1: {value: 1024}\n 
net.ipv4.neigh.default.gc_thresh2: {value: 2048}\n net.ipv4.neigh.default.gc_thresh3: {value: 4096}\n net.ipv4.tcp_keepalive_intvl: {value: 1}\n net.ipv4.tcp_keepalive_probes: {value: 5}\n net.ipv4.tcp_keepalive_time: {value: 5}\n net.ipv6.conf.all.accept_ra: {value: 0}\n net.ipv6.conf.all.accept_redirects: {value: 0}\n net.ipv6.conf.all.autoconf: {value: 0}\n net.ipv6.conf.all.disable_ipv6: {value: 0}\n net.ipv6.conf.default.accept_ra: {value: 0}\n net.ipv6.conf.default.accept_redirects: {value: 0}\n net.ipv6.conf.default.autoconf: {value: 0}\n net.ipv6.conf.default.disable_ipv6: {value: 0}\n net.netfilter.nf_conntrack_max: {value: 500000}\n net.nf_conntrack_max: {value: 500000}\n timezone::timezone: Europe/London\n tripleo.aodh_api.firewall_rules:\n 128 aodh-api:\n dport: [8042, 13042]\n tripleo.cinder_api.firewall_rules:\n 119 cinder:\n dport: [8776, 13776]\n tripleo.cinder_volume.firewall_rules:\n 120 iscsi initiator: {dport: 3260}\n tripleo.glance_api.firewall_rules:\n 112 glance_api:\n dport: [9292, 13292]\n tripleo.gnocchi_api.firewall_rules:\n 129 gnocchi-api:\n dport: [8041, 13041]\n tripleo.gnocchi_statsd.firewall_rules:\n 140 gnocchi-statsd: {dport: 8125, proto: udp}\n tripleo.haproxy.firewall_rules:\n 107 haproxy stats: {dport: 1993}\n tripleo.heat_api.firewall_rules:\n 125 heat_api:\n dport: [8004, 13004]\n tripleo.heat_api_cfn.firewall_rules:\n 125 heat_cfn:\n dport: [8000, 13800]\n tripleo.horizon.firewall_rules:\n 127 horizon:\n dport: [80, 443]\n tripleo.keystone.firewall_rules:\n 111 keystone:\n dport: [5000, 13000, \'35357\']\n tripleo.memcached.firewall_rules:\n 121 memcached: {dport: 11211, proto: tcp, source: \'%{hiera(\'\'memcached_network\'\')}\'}\n tripleo.mysql.firewall_rules:\n 104 mysql galera-bundle:\n dport: [873, 3123, 3306, 4444, 4567, 4568, 9200]\n tripleo.neutron_api.firewall_rules:\n 114 neutron api:\n dport: [9696, 13696]\n tripleo.neutron_dhcp.firewall_rules:\n 115 neutron dhcp input: {dport: 67, proto: udp}\n 116 neutron dhcp 
output: {chain: OUTPUT, dport: 68, proto: udp}\n tripleo.nova_api.firewall_rules:\n 113 nova_api:\n dport: [8774, 13774, 8775]\n tripleo.nova_placement.firewall_rules:\n 138 nova_placement:\n dport: [8778, 13778]\n tripleo.nova_vnc_proxy.firewall_rules:\n 137 nova_vnc_proxy:\n dport: [6080, 13080]\n tripleo.ntp.firewall_rules:\n 105 ntp: {dport: 123, proto: udp}\n tripleo.opendaylight_api.firewall_rules:\n 137 opendaylight api:\n dport: [\'8081\', 6640, 6653, 2550, 8185]\n tripleo.opendaylight_ovs.firewall_rules:\n 118 neutron vxlan networks: {dport: 4789, proto: udp}\n 136 neutron gre networks: {proto: gre}\n tripleo.pacemaker.firewall_rules:\n 130 pacemaker tcp:\n dport: [2224, 3121, 21064]\n proto: tcp\n 131 pacemaker udp: {dport: 5405, proto: udp}\n tripleo.panko_api.firewall_rules:\n 140 panko-api:\n dport: [8977, 13977]\n tripleo.rabbitmq.firewall_rules:\n 109 rabbitmq-bundle:\n dport: [3122, 4369, 5672, 25672]\n tripleo.redis.firewall_rules:\n 108 redis-bundle:\n dport: [3124, 6379, 26379]\n tripleo.snmp.firewall_rules:\n 124 snmp: {dport: 161, proto: udp, source: \'%{hiera(\'\'snmpd_network\'\')}\'}\n tripleo.swift_proxy.firewall_rules:\n 122 swift proxy:\n dport: [8080, 13808]\n tripleo.swift_storage.firewall_rules:\n 123 swift storage:\n dport: [873, 6000, 6001, 6002]\n tripleo::fencing::config: {}\n tripleo::firewall::manage_firewall: true\n tripleo::firewall::purge_firewall_rules: false\n tripleo::glance::nfs_mount::edit_fstab: false\n tripleo::glance::nfs_mount::options: _netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0\n tripleo::glance::nfs_mount::share: \'\'\n tripleo::haproxy::ca_bundle: /etc/ipa/ca.crt\n tripleo::haproxy::crl_file: null\n tripleo::haproxy::haproxy_log_address: /dev/log\n tripleo::haproxy::haproxy_service_manage: false\n tripleo::haproxy::haproxy_stats: true\n tripleo::haproxy::haproxy_stats_password: FZRXHrrCRZcvdmsQ9P9sjKWJj\n tripleo::haproxy::haproxy_stats_user: admin\n tripleo::haproxy::mysql_clustercheck: true\n 
tripleo::haproxy::redis_password: jv8TQJ7wGC7M7e6ez2GNPfke7\n tripleo::packages::enable_install: false\n tripleo::profile::base::cinder::cinder_enable_db_purge: true\n tripleo::profile::base::cinder::volume::cinder_enable_iscsi_backend: true\n tripleo::profile::base::cinder::volume::cinder_enable_nfs_backend: false\n tripleo::profile::base::cinder::volume::cinder_enable_rbd_backend: false\n tripleo::profile::base::cinder::volume::iscsi::cinder_iscsi_address: storage\n tripleo::profile::base::cinder::volume::iscsi::cinder_iscsi_helper: lioadm\n tripleo::profile::base::cinder::volume::iscsi::cinder_iscsi_protocol: iscsi\n tripleo::profile::base::cinder::volume::iscsi::cinder_lvm_loop_device_size: 16384\n tripleo::profile::base::cinder::volume::nfs::cinder_nas_secure_file_operations: \'False\'\n tripleo::profile::base::cinder::volume::nfs::cinder_nas_secure_file_permissions: \'False\'\n tripleo::profile::base::cinder::volume::nfs::cinder_nfs_mount_options: \'\'\n tripleo::profile::base::cinder::volume::nfs::cinder_nfs_servers: []\n tripleo::profile::base::cinder::volume::rbd::cinder_rbd_ceph_conf: /etc/ceph/ceph.conf\n tripleo::profile::base::cinder::volume::rbd::cinder_rbd_extra_pools: []\n tripleo::profile::base::cinder::volume::rbd::cinder_rbd_pool_name: volumes\n tripleo::profile::base::cinder::volume::rbd::cinder_rbd_user_name: openstack\n tripleo::profile::base::database::mysql::bind_address: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n tripleo::profile::base::database::mysql::client::enable_ssl: false\n tripleo::profile::base::database::mysql::client::mysql_client_bind_address: internal_api\n tripleo::profile::base::database::mysql::client::ssl_ca: /etc/ipa/ca.crt\n tripleo::profile::base::database::mysql::client_bind_address: internal_api\n tripleo::profile::base::database::mysql::generate_dropin_file_limit: true\n tripleo::profile::base::database::redis::tls_proxy_bind_ip: internal_api\n tripleo::profile::base::database::redis::tls_proxy_fqdn: 
\'%{hiera(\'\'fqdn_internal_api\'\')}\'\n tripleo::profile::base::database::redis::tls_proxy_port: 6379\n tripleo::profile::base::docker::additional_sockets: [/var/lib/openstack/docker.sock]\n tripleo::profile::base::docker::configure_network: true\n tripleo::profile::base::docker::debug: true\n tripleo::profile::base::docker::docker_options: --log-driver=journald --signature-verification=false\n --iptables=false --live-restore\n tripleo::profile::base::docker::insecure_registries: [\'192.168.24.1:8787\']\n tripleo::profile::base::docker::network_options: --bip=172.31.0.1/24\n tripleo::profile::base::glance::api::glance_nfs_enabled: false\n tripleo::profile::base::glance::api::tls_proxy_bind_ip: internal_api\n tripleo::profile::base::glance::api::tls_proxy_fqdn: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n tripleo::profile::base::glance::api::tls_proxy_port: \'9292\'\n tripleo::profile::base::gnocchi::api::gnocchi_backend: swift\n tripleo::profile::base::gnocchi::api::incoming_storage_driver: redis\n tripleo::profile::base::haproxy::certificates_specs: {}\n tripleo::profile::base::heat::manage_db_purge: true\n tripleo::profile::base::keystone::ceilometer_notification_topics: [notifications]\n tripleo::profile::base::keystone::extra_notification_topics: []\n tripleo::profile::base::keystone::heat_admin_domain: heat_stack\n tripleo::profile::base::keystone::heat_admin_email: heat_stack_domain_admin@localhost\n tripleo::profile::base::keystone::heat_admin_password: 9wgDeEYVcvATDqUWh2zFgNqfr\n tripleo::profile::base::keystone::heat_admin_user: heat_stack_domain_admin\n tripleo::profile::base::lvm::enable_udev: false\n tripleo::profile::base::neutron::dhcp_agent_wrappers::dnsmasq_image: 192.168.24.1:8787/rhosp13/openstack-neutron-dhcp-agent:2018-07-13.1\n tripleo::profile::base::neutron::dhcp_agent_wrappers::dnsmasq_process_wrapper: /var/lib/neutron/dnsmasq_wrapper\n tripleo::profile::base::neutron::dhcp_agent_wrappers::enable_dnsmasq_wrapper: true\n 
tripleo::profile::base::neutron::dhcp_agent_wrappers::enable_haproxy_wrapper: true\n tripleo::profile::base::neutron::dhcp_agent_wrappers::haproxy_image: 192.168.24.1:8787/rhosp13/openstack-neutron-dhcp-agent:2018-07-13.1\n tripleo::profile::base::neutron::dhcp_agent_wrappers::haproxy_process_wrapper: /var/lib/neutron/dhcp_haproxy_wrapper\n tripleo::profile::base::neutron::plugins::ovs::opendaylight::vhostuser_socket_group: qemu\n tripleo::profile::base::neutron::plugins::ovs::opendaylight::vhostuser_socket_user: qemu\n tripleo::profile::base::neutron::server::l3_ha_override: \'\'\n tripleo::profile::base::neutron::server::tls_proxy_bind_ip: internal_api\n tripleo::profile::base::neutron::server::tls_proxy_fqdn: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n tripleo::profile::base::neutron::server::tls_proxy_port: \'9696\'\n tripleo::profile::base::pacemaker::remote_authkey: y4KQqvu9wPzQBRZhYd4rU87e8sDHzd8RcQuWBDcbN7QAFqAuuXmaEs4wA6CNhbFRnAYbqRMVybQtMY8ghJKxEbn6tyxRbaKGtnsmWn7XyYGtKedWc9298WxAMQ2vRzuTaj4tRNVYvMbmfpKZTZARAVGmsYPR47ahNKBUFWkfqJR7dmXCjK6QdAYXktnkCXyxu8ZTYhHpzDUfTM2UaPxYkpXNZHkMwzDjVuKcQGNfbsMyJBTCsM2GhzaYxahnaNeBk7zxcUr6W7KJPZhZfRdcDyXrATKnjnGvbRVaqd2uRuG4dZaHZEAEJAtB6TqknbsssFnjEm4scUcsBvpNTqRq7kFBZNcKvGwvypFBcZRTvkj7vYRRpfy4uMGU32YYDUgxtpkJA9PJ8R4H2euH6RhgRXDZjXnw8JbFDE7XpDYdB2DVWMeA7XVPJaxQWb4QzNkpGHkKRxURncMc38RDqMBdkfAAkFxB34e2TPMBFJPM89NPkNPxbGPGrwjbJQFHQWFXG6zuh3AQFFXTU6TsbXDVP3hmMpCtjjZdakyb2tJf3dWXH4FJXsgmTxUz6d8DbDH6AmwyWxNYzng4sUPgQHpxhjh6syBURXUCphjjf3DGndbUUTT6paw8vnERsnpWDEUvbafrKXuZXJYEMB6EA3KjqRdra9nhTrYusybqfHjQRNP6tKFEuz3kbMHBaNXmyy28dVCFAbTCJkfuH7p4j2TaAezaFv4VHRYjWNbN3vuHgDAMT6vrNw7xukcvfWmef9e8DgZaxXdeyWgmxPWfZKfEGweXVqZkcFUuhcxYNmN7tdfdnakyu9XHayeEYYPEXWKDYDVynTrnBdrh3tY2TT4YRcwwcNFEKsex2NF8QCPNuY8HwRMDAuFAc7E786XyFcu9VvCAcjAB9nyaP7c2XRuuUDKVHwtysNU4UCcJGrpu2RZVQUgGgPNmFvjtcZKCDncGDpHwshc7kYkXNyPb64yBeVvjnKPtAV2Qj9fYWuQ8hw9UNTqVrYrNF4XjD3mMTqre6W4mWMtmEA3nWw7Aq2WWJwE9vFdaufPEkvgUpwWrveRmJxKDmAZuR83rWEYcCG6Bzj6BqgYPxh7VMsuBvVRg3B8tMtFyy
pqrtKKN4ewJkWyrZXWRtgd9P7pePhqMEBv8sDgmBXZ67uJ6Am9M2yWfA2UJxCp7Mj6hXpxefrKaU2hcbun9g846UurmTHMgcPWH4QBVkB732uttkpCU7XKkyysCUDB2KyxNz4zX8ek3tFbT3AVsMmNBa9cTXrmymCK7GZVBA8ZDv6T3sdeBh47a7YFGr3JMZJfzGC8vYAagkXNawNRZeU8zwPwwrwjKRhgDcAhVcT6QsPMCGUWsfwajzXpUgMuRwYfbw7MuMmn4KN78pGxnFEvy3ePdm7jukNxp2FEhVbEAewjB6eUrGbz2zKVntfdB4wXmXFt6Kevk8zG2PxGJdZJJeRdYsYWzgdVYaDDfhTHU4FN3sB7jdHx22YP4dFnka8ce6kdxtEZ6gyYywDwDqCMJqRNUdteXXfBXTMTBrNxPYdc9zz7tJCWM66M3RNBW4PzFNUFEeMEDPVgwpRjYeXRjWwPtcunFx8wDrBEanEFkFYB8ND2M6cP8tVjGMsBr34VeqZQvqUebDmPjKEjfe9UtkCWkxBuRaQyreNXeVvzpGD9j4xC2quqYgpRBW3XrEyz2uuce3vQpcaH92nTVbcfxwG6eUTwbPzzxZxjsjmrHQv3jXqdkmWF3utXvNzWz3FxqaVA2gqpF83radJebCUqcmab9VZb6mQY6WKM3ypPsQmHrgtMRcaYXaByTRF2HWxQxBZZfmhVFMTA9Dw3dZJxYqWtrZd6QzjKVPz3FN7fsV3w8T8NnqdcBUjqEmXHZ8q8umW7MMVgRssg8zN8D6RJvppAhKZEjkUDYfBKXqcn3mCdxueAm8FWQR34KmzxdKj8XDeX39Nv2CnWkWyRwA3A9qyvNWMJzWDG3gDGDPbG4dajWPTutmRWQTAqdfUhYy4XqQjfmWRT2mvtPfnGqjMvceQMhjb7GhHv7HTAfv3gAzrZWEpYkdXP7YHgWr2urNE7JAMRpd2CCh3hbJzWZ2twbWAKdtuM2HTjFaysjBDAspcGJCWugzeVBmPgEfRp9MawmCr4Q8yfb4zxdCFNzvTNTxKxs3Jn9ZP2vKNYWawjyx4UEUQvANhNh8Jsgver3PBWGtAW46EnyQEfTNxu8CFzGg2XsrvEYsxQEqsvMfc7KHGw76XRAmupFxXJDNmQeKfGEwZuPyekPRvb9eE8xqYBfsMwGxqwDhafWsktscPPXcurFFetZbNrhNvDDhxsgft8znzaz6g2jFjQpKtVYgkgvWGjFVcMcGq4KYXVKtuHwG76QkMMnygCEM2AKN8nczknAjcDZncHeX6Vbn9yawtVPC8RbdfkYgBsRYJ8MPVgmxrXQRJgfHZExnPFeesFGFrggDw3aFAWmF9TtKaVCv3DqWp6yAHvZqKryzCgUrYPpdmjhKYYFm8u9weqZhanVKuRHcCKx2nPa6PyBsn4FFrhAjU4BNExMUPDyFyZn3TfZ38FxgQ8nhKYBkfYbEksj8eA9bfGgbuzvkfdYU83FBX9Xc2Kqk9YRvEk2Y4zBf3awHD7dPYHGP44JberWYmyAZNQJkKRdFtZgjdEdqDnhjRhx8eA2YYgNyQe4HJZuzNMtTvgVezZUfg2RWzDHBYKrEpte9QPvEMqf9nQCgMka8ezKCWHFueKHXBvNgX6YaxDbNPTxvRkhDbT9M9JC4FZFTFvRXcfHuNaUwWUDaenzrVM3CuZ2Xm6sBGAeExXJECyBHgb7gF4XhJh3ARvMqxPatBE9EyuFzB8rwD7ADFxxVvEDhB9hEgDTXzrcrACnHmtzXjPZUPCjZW7uQmAcHcPjURaQEYQ8VKCVZqJjNbde2k6gBw7syMeaFEMBVeRKxm3gHpvbVzHBYeszfD23PD83Ujrz4WznxJqb37cGMJysnfDf4Rny8URJrxMtwkyAX6xcqYbtBF4ZvvcyBauZUa8KNqCbNNqpfymHvngQfsAURUUQ7JGXts2773A8FkdbnmXxZHG3hhrG2Vdm4vmFVWMwXEydtrDrhEqbFRBZA
WwGp3drPczXnD74DTU2s4Cx7Y2ZdxgCtx34uncANgTj3HDe4e8ZCfUHvs72E8TwBVEV8bC47pWJ2MybEBBPMWXvdJzNvapTThdHEssAp8dcK8qrE4FtAsTFUuAV8RGyfTuTacaXKash24hmaUwKPmxW8ynGaF43ZrxHxFtwdQmjbVJhwKh66XTr8aynJdGAy8gdUp8vPcCF6RTgfzTAzUWPvdttTZxrtm44wqpqKHUFFynvcu6G3GY4qf33ZCNAu3JuA39KggkPmMThsQFNhGB779HxwFjYdkYGwwFGR4ZQCnTVCD37jUszGewyzqw7Ecd67WFpXtkpDHXdfcGffB6X8KuwZvV7upjpCnZCFe2JzFA2uhpXha7Wg9dHQbuUFVakuatwsDsZQFZRwFKrqHHVhjyNJtgDMwvu8tnHMyF9wJbuqQcy6wFNFPFxTVsnVNDY9YjKuUfvnQt2Hbyz8HPywh7rcGPPZNeTju3tvssFTGmhFxa6gXPE8cWaq6DpWVaDTNkwkspxu3AGUw6jgpwy4ECBErQgt4MbKaHvaEZWFeM7gr8hUGaBGWWacuF396bBV43KyvaQfpvhCqwHYBvt9fmhcw2y67fTUahqv38wgEUYaggVx4YhfDtWHwXr3TsRQEzuTxs3yAG3YcucGYkEaZCCth4HDvgRXwGPJXcPMADdvg8ZCJNrWqPGRwqNx9qeq37BAqGqdsWXzVPQ4aXsTRNGUMnAarmaPTCAsMj63csTf29QCg442UUU48W936AHgWmKAC4NbyTquPge8XpXYRpG2bYqtzcZbsYJNGB6NfH8bKmUC9h9sECjjfjj6zp9tKnRcV6TwPVmX4KGFrN3wRG3tsnDEX22xbDE9fF3X7BsqFnGbJMeQKhxj3vTRtapwgqmRXpCMeF6XX7kuW4yQavvz4qDkt3wwhvYYUxhr9MmtphsNtUuBxncTt3gkrPTKMewpeUzfhDQCy3b\n tripleo::profile::base::rabbitmq::enable_internal_tls: false\n tripleo::profile::base::snmp::snmpd_password: e0e6f3b1f8575fd51ee080d6b2724feef235ed7e\n tripleo::profile::base::snmp::snmpd_user: ro_snmp_user\n tripleo::profile::base::sshd::bannertext: \'\'\n tripleo::profile::base::sshd::motd: \'\'\n tripleo::profile::base::sshd::options:\n AcceptEnv: [LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES,\n LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT, LC_IDENTIFICATION\n LC_ALL LANGUAGE, XMODIFIERS]\n AuthorizedKeysFile: .ssh/authorized_keys\n ChallengeResponseAuthentication: \'no\'\n GSSAPIAuthentication: \'yes\'\n GSSAPICleanupCredentials: \'no\'\n HostKey: [/etc/ssh/ssh_host_rsa_key, /etc/ssh/ssh_host_ecdsa_key, /etc/ssh/ssh_host_ed25519_key]\n PasswordAuthentication: \'no\'\n Subsystem: sftp /usr/libexec/openssh/sftp-server\n SyslogFacility: AUTHPRIV\n UseDNS: \'no\'\n UsePAM: \'yes\'\n UsePrivilegeSeparation: sandbox\n X11Forwarding: \'yes\'\n 
tripleo::profile::base::swift::proxy::ceilometer_enabled: false\n tripleo::profile::base::swift::proxy::ceilometer_messaging_use_ssl: \'False\'\n tripleo::profile::base::swift::proxy::rabbit_port: 5672\n tripleo::profile::base::swift::proxy::tls_proxy_bind_ip: storage\n tripleo::profile::base::swift::proxy::tls_proxy_fqdn: \'%{hiera(\'\'fqdn_storage\'\')}\'\n tripleo::profile::base::swift::proxy::tls_proxy_port: \'8080\'\n tripleo::profile::base::swift::ringbuilder::build_ring: true\n tripleo::profile::base::swift::ringbuilder::min_part_hours: 1\n tripleo::profile::base::swift::ringbuilder::part_power: 10\n tripleo::profile::base::swift::ringbuilder::raw_disk_prefix: r1z1-\n tripleo::profile::base::swift::ringbuilder::raw_disks: [\':%PORT%/d1\']\n tripleo::profile::base::swift::ringbuilder::replicas: 3\n tripleo::profile::base::swift::ringbuilder::swift_ring_get_tempurl: https://192.168.24.2:13808/v1/AUTH_aed387cf82184fb788209f67beef84fe/overcloud-swift-rings/swift-rings.tar.gz?temp_url_sig=4e0dc6e89355a285170099963795538fe44f9487&temp_url_expires=1532600992\n tripleo::profile::base::swift::ringbuilder::swift_ring_put_tempurl: https://192.168.24.2:13808/v1/AUTH_aed387cf82184fb788209f67beef84fe/overcloud-swift-rings/swift-rings.tar.gz?temp_url_sig=e22872dbc58b3effeecaad0f803ae39c074aa8bb&temp_url_expires=1532601021\n tripleo::profile::base::swift::ringbuilder:skip_consistency_check: true\n tripleo::profile::base::swift::storage::enable_swift_storage: true\n tripleo::profile::base::swift::storage::use_local_dir: true\n tripleo::profile::base::tuned::profile: \'\'\n tripleo::profile::pacemaker::cinder::volume_bundle::cinder_volume_docker_image: 192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest\n tripleo::profile::pacemaker::cinder::volume_bundle::docker_environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n tripleo::profile::pacemaker::cinder::volume_bundle::docker_volumes: [\'/etc/hosts:/etc/hosts:ro\',\n \'/etc/localtime:/etc/localtime:ro\', 
\'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\',\n \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\', \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\', \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\',\n \'/etc/puppet:/etc/puppet:ro\', \'/var/lib/kolla/config_files/cinder_volume.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro\',\n \'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro\', \'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro\',\n \'/lib/modules:/lib/modules:ro\', \'/dev/:/dev/\', \'/run/:/run/\', \'/sys:/sys\',\n \'/var/lib/cinder:/var/lib/cinder\', \'/var/log/containers/cinder:/var/log/cinder\']\n tripleo::profile::pacemaker::database::mysql::bind_address: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n tripleo::profile::pacemaker::database::mysql::ca_file: /etc/ipa/ca.crt\n tripleo::profile::pacemaker::database::mysql::gmcast_listen_addr: internal_api\n tripleo::profile::pacemaker::database::mysql_bundle::bind_address: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n tripleo::profile::pacemaker::database::mysql_bundle::control_port: 3123\n tripleo::profile::pacemaker::database::mysql_bundle::mysql_docker_image: 192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest\n tripleo::profile::pacemaker::database::redis_bundle::control_port: 3124\n tripleo::profile::pacemaker::database::redis_bundle::redis_docker_image: 192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest\n tripleo::profile::pacemaker::database::redis_bundle::tls_proxy_bind_ip: internal_api\n tripleo::profile::pacemaker::database::redis_bundle::tls_proxy_fqdn: \'%{hiera(\'\'fqdn_internal_api\'\')}\'\n tripleo::profile::pacemaker::database::redis_bundle::tls_proxy_port: 6379\n tripleo::profile::pacemaker::haproxy_bundle::haproxy_docker_image: 
192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest\n tripleo::profile::pacemaker::haproxy_bundle::internal_certs_directory: /etc/pki/tls/certs/haproxy\n tripleo::profile::pacemaker::haproxy_bundle::internal_keys_directory: /etc/pki/tls/private/haproxy\n tripleo::profile::pacemaker::haproxy_bundle::tls_mapping: [/etc/ipa/ca.crt,\n /etc/pki/tls/private/haproxy, /etc/pki/tls/certs/haproxy, /etc/pki/tls/private/overcloud_endpoint.pem]\n tripleo::profile::pacemaker::rabbitmq_bundle::control_port: 3122\n tripleo::profile::pacemaker::rabbitmq_bundle::rabbitmq_docker_image: 192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest\n tripleo::stunnel::foreground: \'yes\'\n tripleo::stunnel::manage_service: false\n tripleo::trusted_cas::ca_map: {}\n vswitch::dpdk::driver_type: vfio-pci\n vswitch::dpdk::host_core_list: \'\'\n vswitch::dpdk::memory_channels: \'4\'\n vswitch::dpdk::pmd_core_list: \'\'\n vswitch::dpdk::socket_mem: \'\'\n vswitch::ovs::enable_hw_offload: false\n role_data_monitoring_subscriptions: [overcloud-pacemaker]\n role_data_post_update_tasks:\n - block:\n - name: store update level to update_level variable\n set_fact: {odl_update_level: 1}\n - block:\n - {name: Disable Upgrade Flag via Rest, shell: \'curl -k -v --silent --fail -u\n ODL_USERNAME:redhat -X \\ PUT -d "{ "config": { "upgradeInProgress": false\n } }" \\ -H "Content-Type: application/json" \\ http://:8081/restconf/config/genius-mdsalutil:config\',\n when: step|int == 0}\n - copy: {content: "<config xmlns=\\"urn:opendaylight:params:xml:ns:yang:mdsalutil\\"\\\n >\\n <upgradeInProgress>false</upgradeInProgress>\\n</config>\\n", dest: /var/lib/config-data/puppet-generated/opendaylight/opt/opendaylight/etc/opendaylight/datastore/initial/config/genius-mdsalutil-config.xml,\n group: 42462, mode: 420, owner: 42462}\n name: Disable Upgrade in Config File\n when: step|int == 0\n when: odl_update_level == 2\n - block:\n - {command: systemctl is-active --quiet openvswitch, name: Check service 
openvswitch\n is running, register: openvswitch_running, tags: common}\n - {name: Delete OVS groups and ports, shell: sudo ovs-ofctl -O OpenFlow13 del-groups\n br-int; for tun_port in $(ovs-vsctl list-ports br-int | grep \'tun\'); do\n ovs-vsctl del-port br-int $tun_port; done, when: (step|int == 0) and\n (openvswitch_running.rc == 0)}\n - {name: Stop openvswitch service, service: name=openvswitch state=stopped,\n when: (step|int == 1) and (openvswitch_running.rc == 0)}\n - iptables: chain=OUTPUT action=insert protocol=tcp destination_port={{ item\n }} jump=DROP state=absent\n name: Unblock OVS port per compute node.\n when: step|int == 2\n with_items: [6640, 6653, 6633]\n - {name: start openvswitch service, service: name=openvswitch state=started,\n when: step|int == 3}\n when: odl_update_level == 2\n role_data_post_upgrade_tasks:\n - getent: {database: passwd, key: neutron}\n ignore_errors: true\n name: Check for neutron user\n - name: Set neutron_user_avail\n set_fact: {neutron_user_avail: \'{{ getent_passwd is defined }}\'}\n - block:\n - {become: true, name: Ensure read/write access for files created after upgrade,\n shell: \'umask 0002\n\n setfacl -d -R -m u:neutron:rwx /var/lib/neutron\n\n setfacl -R -m u:neutron:rw /var/lib/neutron\n\n find /var/lib/neutron -type d -exec setfacl -m u:neutron:rwx \'\'{}\'\' \\;\n\n \'}\n - become: true\n ignore_errors: true\n name: Provide access for domain sockets\n shell: \'umask 0002\n\n setfacl -m u:neutron:rwx "{{ item }}"\n\n \'\n with_items: [/var/lib/neutron/metadata_proxy, /var/lib/neutron]\n when: [step|int == 2, neutron_user_avail|bool]\n - {name: Disable Upgrade Flag via Rest, shell: \'curl -k -v --silent --fail -u\n ODL_USERNAME:redhat -X \\ PUT -d "{ "config": { "upgradeInProgress": false\n } }" \\ -H "Content-Type: application/json" \\ http://:8081/restconf/config/genius-mdsalutil:config\',\n when: step|int == 0}\n - copy: {content: "<config xmlns=\\"urn:opendaylight:params:xml:ns:yang:mdsalutil\\"\\\n 
>\\n <upgradeInProgress>false</upgradeInProgress>\\n</config>\\n", dest: /var/lib/config-data/puppet-generated/opendaylight/opt/opendaylight/etc/opendaylight/datastore/initial/config/genius-mdsalutil-config.xml,\n group: 42462, mode: 420, owner: 42462}\n name: Disable Upgrade in Config File\n when: step|int == 0\n - {command: systemctl is-active --quiet openvswitch, name: Check service openvswitch\n is running, register: openvswitch_running, tags: common}\n - {name: Delete OVS groups and ports, shell: sudo ovs-ofctl -O OpenFlow13 del-groups\n br-int; for tun_port in $(ovs-vsctl list-ports br-int | grep \'tun\'); do ovs-vsctl\n del-port br-int $tun_port; done, when: (step|int == 0) and (openvswitch_running.rc\n == 0)}\n - {name: Stop openvswitch service, service: name=openvswitch state=stopped, when: (step|int\n == 1) and (openvswitch_running.rc == 0)}\n - iptables: chain=OUTPUT action=insert protocol=tcp destination_port={{ item }}\n jump=DROP state=absent\n name: Unblock OVS port per compute node.\n when: step|int == 2\n with_items: [6640, 6653, 6633]\n - {name: start openvswitch service, service: name=openvswitch state=started, when: step|int\n == 3}\n role_data_pre_upgrade_rolling_tasks: []\n role_data_puppet_config:\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-07-13.1\',\n config_volume: aodh, puppet_tags: \'aodh_api_paste_ini,aodh_config\', step_config: \'include\n tripleo::profile::base::aodh::api\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-07-13.1\',\n config_volume: aodh, puppet_tags: aodh_config, step_config: \'include tripleo::profile::base::aodh::evaluator\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-07-13.1\',\n config_volume: aodh, puppet_tags: aodh_config, step_config: \'include tripleo::profile::base::aodh::listener\n\n\n include 
::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-07-13.1\',\n config_volume: aodh, puppet_tags: aodh_config, step_config: \'include tripleo::profile::base::aodh::notifier\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-07-13.1\',\n config_volume: ceilometer, puppet_tags: ceilometer_config, step_config: \'include\n ::tripleo::profile::base::ceilometer::agent::polling\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-07-13.1\',\n config_volume: ceilometer, puppet_tags: ceilometer_config, step_config: \'include\n ::tripleo::profile::base::ceilometer::agent::notification\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-07-13.1\',\n config_volume: cinder, puppet_tags: \'cinder_config,file,concat,file_line\', step_config: \'include\n ::tripleo::profile::base::cinder::api\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-07-13.1\',\n config_volume: cinder, puppet_tags: \'cinder_config,file,concat,file_line\', step_config: \'include\n ::tripleo::profile::base::cinder::scheduler\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-07-13.1\',\n config_volume: cinder, puppet_tags: \'cinder_config,file,concat,file_line\', step_config: \'include\n ::tripleo::profile::base::lvm\n\n include ::tripleo::profile::base::cinder::volume\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\', config_volume: clustercheck,\n puppet_tags: file, step_config: \'include ::tripleo::profile::pacemaker::clustercheck\'}\n - {config_image: 
\'192.168.24.1:8787/rhosp13/openstack-glance-api:2018-07-13.1\',\n config_volume: glance_api, puppet_tags: \'glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config\',\n step_config: \'include ::tripleo::profile::base::glance::api\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-07-13.1\',\n config_volume: gnocchi, puppet_tags: \'gnocchi_api_paste_ini,gnocchi_config\',\n step_config: \'include ::tripleo::profile::base::gnocchi::api\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-07-13.1\',\n config_volume: gnocchi, puppet_tags: gnocchi_config, step_config: \'include ::tripleo::profile::base::gnocchi::metricd\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-07-13.1\',\n config_volume: gnocchi, puppet_tags: gnocchi_config, step_config: \'include ::tripleo::profile::base::gnocchi::statsd\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - config_image: 192.168.24.1:8787/rhosp13/openstack-haproxy:2018-07-13.1\n config_volume: haproxy\n puppet_tags: haproxy_config\n step_config: \'exec {\'\'wait-for-settle\'\': command => \'\'/bin/true\'\' }\n\n class tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef,\n $dport = undef, $sport = undef, $proto = undef, $action = undef, $state =\n undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef,\n $extras = undef){}\n\n [\'\'pcmk_bundle\'\', \'\'pcmk_resource\'\', \'\'pcmk_property\'\', \'\'pcmk_constraint\'\',\n \'\'pcmk_resource_default\'\'].each |String $val| { noop_resource($val) }\n\n include ::tripleo::profile::pacemaker::haproxy_bundle\'\n volumes: [\'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro\', \'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro\',\n \'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro\', 
\'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro\']\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-heat-api:2018-07-13.1\',\n config_volume: heat_api, puppet_tags: \'heat_config,file,concat,file_line\', step_config: \'include\n ::tripleo::profile::base::heat::api\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-heat-api-cfn:2018-07-13.1\',\n config_volume: heat_api_cfn, puppet_tags: \'heat_config,file,concat,file_line\',\n step_config: \'include ::tripleo::profile::base::heat::api_cfn\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-heat-api:2018-07-13.1\',\n config_volume: heat, puppet_tags: \'heat_config,file,concat,file_line\', step_config: \'include\n ::tripleo::profile::base::heat::engine\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-horizon:2018-07-13.1\', config_volume: horizon,\n puppet_tags: horizon_config, step_config: \'include ::tripleo::profile::base::horizon\n\n \'}\n - config_image: 192.168.24.1:8787/rhosp13/openstack-iscsid:2018-07-13.1\n config_volume: iscsid\n puppet_tags: iscsid_config\n step_config: include ::tripleo::profile::base::iscsid\n volumes: [\'/etc/iscsi:/etc/iscsi\']\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-keystone:2018-07-13.1\',\n config_volume: keystone, puppet_tags: \'keystone_config,keystone_domain_config\',\n step_config: \'[\'\'Keystone_user\'\', \'\'Keystone_endpoint\'\', \'\'Keystone_domain\'\',\n \'\'Keystone_tenant\'\', \'\'Keystone_user_role\'\', \'\'Keystone_role\'\', \'\'Keystone_service\'\'].each\n |String $val| { noop_resource($val) }\n\n include ::tripleo::profile::base::keystone\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-memcached:2018-07-13.1\',\n config_volume: memcached, puppet_tags: file, step_config: \'include ::tripleo::profile::base::memcached\n\n \'}\n 
- {config_image: \'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\', config_volume: mysql,\n puppet_tags: file, step_config: \'[\'\'Mysql_datadir\'\', \'\'Mysql_user\'\', \'\'Mysql_database\'\',\n \'\'Mysql_grant\'\', \'\'Mysql_plugin\'\'].each |String $val| { noop_resource($val)\n }\n\n exec {\'\'wait-for-settle\'\': command => \'\'/bin/true\'\' }\n\n include ::tripleo::profile::pacemaker::database::mysql_bundle\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-neutron-server-opendaylight:2018-07-13.1\',\n config_volume: neutron, puppet_tags: \'neutron_config,neutron_api_config\', step_config: \'include\n tripleo::profile::base::neutron::server\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-neutron-server-opendaylight:2018-07-13.1\',\n config_volume: neutron, puppet_tags: neutron_plugin_ml2, step_config: \'include\n ::tripleo::profile::base::neutron::plugins::ml2\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-neutron-server-opendaylight:2018-07-13.1\',\n config_volume: neutron, puppet_tags: \'neutron_config,neutron_dhcp_agent_config\',\n step_config: \'include tripleo::profile::base::neutron::dhcp\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-neutron-server-opendaylight:2018-07-13.1\',\n config_volume: neutron, puppet_tags: \'neutron_config,neutron_metadata_agent_config\',\n step_config: \'include tripleo::profile::base::neutron::metadata\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\',\n config_volume: nova, puppet_tags: nova_config, step_config: \'[\'\'Nova_cell_v2\'\'].each\n |String $val| { noop_resource($val) }\n\n include tripleo::profile::base::nova::api\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\',\n config_volume: nova, puppet_tags: nova_config, step_config: \'include 
tripleo::profile::base::nova::conductor\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\',\n config_volume: nova, puppet_tags: nova_config, step_config: \'include tripleo::profile::base::nova::consoleauth\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\',\n config_volume: nova, puppet_tags: nova_config, step_config: \'\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-placement-api:2018-07-13.1\',\n config_volume: nova_placement, puppet_tags: nova_config, step_config: \'include\n tripleo::profile::base::nova::placement\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\',\n config_volume: nova, puppet_tags: nova_config, step_config: \'include tripleo::profile::base::nova::scheduler\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-07-13.1\',\n config_volume: nova, puppet_tags: nova_config, step_config: \'include tripleo::profile::base::nova::vncproxy\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-cron:2018-07-13.1\', config_volume: crond,\n step_config: \'include ::tripleo::profile::base::logging::logrotate\'}\n - config_image: 192.168.24.1:8787/rhosp13/openstack-opendaylight:2018-07-13.1\n config_volume: opendaylight\n puppet_tags: odl_user,odl_keystore\n step_config: \'include tripleo::profile::base::neutron::opendaylight\n\n \'\n volumes: []\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-panko-api:2018-07-13.1\',\n config_volume: panko, puppet_tags: \'panko_api_paste_ini,panko_config\', step_config: \'include\n tripleo::profile::base::panko::api\n\n\n include 
::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-07-13.1\',\n config_volume: rabbitmq, puppet_tags: file, step_config: \'[\'\'Rabbitmq_policy\'\',\n \'\'Rabbitmq_user\'\'].each |String $val| { noop_resource($val) }\n\n include ::tripleo::profile::base::rabbitmq\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-redis:2018-07-13.1\', config_volume: redis,\n puppet_tags: exec, step_config: \'include ::tripleo::profile::pacemaker::database::redis_bundle\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-07-13.1\',\n config_volume: swift, puppet_tags: \'swift_config,swift_proxy_config,swift_keymaster_config\',\n step_config: \'include ::tripleo::profile::base::swift::proxy\n\n \'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-07-13.1\',\n config_volume: swift_ringbuilder, puppet_tags: \'exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball\',\n step_config: \'include ::tripleo::profile::base::swift::ringbuilder\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-07-13.1\',\n config_volume: swift, puppet_tags: \'swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server\',\n step_config: \'include ::tripleo::profile::base::swift::storage\n\n\n class xinetd() {}\'}\n role_data_service_config_settings: {}\n role_data_service_metadata_settings: null\n role_data_service_names: [aodh_api, aodh_evaluator, aodh_listener, aodh_notifier,\n ca_certs, ceilometer_api_disabled, ceilometer_collector_disabled, ceilometer_expirer_disabled,\n ceilometer_agent_central, ceilometer_agent_notification, cinder_api, cinder_scheduler,\n 
cinder_volume, clustercheck, docker, glance_api, glance_registry_disabled, gnocchi_api,\n gnocchi_metricd, gnocchi_statsd, haproxy, heat_api, heat_api_cloudwatch_disabled,\n heat_api_cfn, heat_engine, horizon, iscsid, kernel, keystone, memcached, mongodb_disabled,\n mysql, mysql_client, neutron_api, neutron_plugin_ml2_odl, neutron_dhcp, neutron_metadata,\n nova_api, nova_conductor, nova_consoleauth, nova_metadata, nova_placement, nova_scheduler,\n nova_vnc_proxy, ntp, logrotate_crond, opendaylight_api, opendaylight_ovs, pacemaker,\n panko_api, rabbitmq, redis, snmp, sshd, swift_proxy, swift_ringbuilder, swift_storage,\n timezone, tripleo_firewall, tripleo_packages, tuned]\n role_data_step_config: "# Copyright 2014 Red Hat, Inc.\\n# All Rights Reserved.\\n\\\n #\\n# Licensed under the Apache License, Version 2.0 (the \\"License\\"); you may\\n\\\n # not use this file except in compliance with the License. You may obtain\\n\\\n # a copy of the License at\\n#\\n# http://www.apache.org/licenses/LICENSE-2.0\\n\\\n #\\n# Unless required by applicable law or agreed to in writing, software\\n#\\\n \\ distributed under the License is distributed on an \\"AS IS\\" BASIS, WITHOUT\\n\\\n # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the\\n\\\n # License for the specific language governing permissions and limitations\\n\\\n # under the License.\\n\\n# Common config, from tripleo-heat-templates/puppet/manifests/overcloud_common.pp\\n\\\n # The content of this file will be used to generate\\n# the puppet manifests\\\n \\ for all roles, the placeholder\\n# Controller will be replaced by \'controller\',\\\n \\ \'blockstorage\',\\n# \'cephstorage\' and all the deployed roles.\\n\\nif hiera(\'step\')\\\n \\ >= 4 {\\n hiera_include(\'Controller_classes\', [])\\n}\\n\\n$package_manifest_name\\\n \\ = join([\'/var/lib/tripleo/installed-packages/overcloud_Controller\', hiera(\'step\')])\\n\\\n package_manifest{$package_manifest_name: ensure => present}\\n\\n# End of overcloud_common.pp\\n\\\n \\ninclude ::tripleo::trusted_cas\\ninclude ::tripleo::profile::base::docker\\n\\\n \\ninclude ::tripleo::profile::base::kernel\\ninclude ::tripleo::profile::base::database::mysql::client\\n\\\n include ::tripleo::profile::base::time::ntp\\ninclude tripleo::profile::base::neutron::plugins::ovs::opendaylight\\n\\\n \\ninclude ::tripleo::profile::base::pacemaker\\n\\ninclude ::tripleo::profile::base::snmp\\n\\\n \\ninclude ::tripleo::profile::base::sshd\\n\\ninclude ::timezone\\ninclude ::tripleo::firewall\\n\\\n \\ninclude ::tripleo::packages\\n\\ninclude ::tripleo::profile::base::tuned"\n role_data_update_tasks:\n - block:\n - name: Get docker Cinder-Volume image\n set_fact: {docker_image: \'192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-07-13.1\',\n docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest\'}\n - {name: Get previous Cinder-Volume image id, register: cinder_volume_image_id,\n shell: \'docker images | awk \'\'/cinder-volume.* pcmklatest/{print $3}\'\' | uniq\'}\n - block:\n - {name: Get a list of container using Cinder-Volume image, register: cinder_volume_containers_to_destroy,\n shell: \'docker ps -a -q -f \'\'ancestor={{cinder_volume_image_id.stdout}}\'\'\'}\n 
- {name: Remove any container using the same Cinder-Volume image, shell: \'docker\n rm -fv {{item}}\', with_items: \'{{ cinder_volume_containers_to_destroy.stdout_lines\n }}\'}\n - {name: Remove previous Cinder-Volume images, shell: \'docker rmi -f {{cinder_volume_image_id.stdout}}\'}\n when: [cinder_volume_image_id.stdout != \'\']\n - {command: \'docker pull {{docker_image}}\', name: Pull latest Cinder-Volume\n images}\n - {name: Retag pcmklatest to latest Cinder-Volume image, shell: \'docker tag\n {{docker_image}} {{docker_image_latest}}\'}\n name: Cinder-Volume fetch and retag container image for pacemaker\n when: step|int == 2\n - block:\n - {failed_when: false, name: Detect if puppet on the docker profile would restart\n the service, register: puppet_docker_noop_output, shell: "puppet apply --noop\\\n \\ --summarize --detailed-exitcodes --verbose \\\\\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules\\\n \\ \\\\\\n --color=false -e \\"class { \'tripleo::profile::base::docker\': step\\\n \\ => 1, }\\" 2>&1 | \\\\\\nawk -F \\":\\" \'/Out of sync:/ { print $2}\'\\n"}\n - {changed_when: docker_check_update.rc == 100, failed_when: \'docker_check_update.rc\n not in [0, 100]\', name: Is docker going to be updated, register: docker_check_update,\n shell: yum check-update docker}\n - {name: Set docker_rpm_needs_update fact, set_fact: \'docker_rpm_needs_update={{\n docker_check_update.rc == 100 }}\'}\n - {name: Set puppet_docker_is_outofsync fact, set_fact: \'puppet_docker_is_outofsync={{\n puppet_docker_noop_output.stdout|trim|int >= 1 }}\'}\n - {name: Stop all containers, shell: docker ps -q | xargs --no-run-if-empty\n -n1 docker stop, when: puppet_docker_is_outofsync or docker_rpm_needs_update}\n - name: Stop docker\n service: {name: docker, state: stopped}\n when: puppet_docker_is_outofsync or docker_rpm_needs_update\n - {name: Update the docker package, when: docker_rpm_needs_update, yum: name=docker\n state=latest 
update_cache=yes}\n - {changed_when: puppet_docker_apply.rc == 2, failed_when: \'puppet_docker_apply.rc\n not in [0, 2]\', name: Apply puppet which will start the service again, register: puppet_docker_apply,\n shell: "puppet apply --detailed-exitcodes --verbose \\\\\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules\\\n \\ \\\\\\n -e \\"class { \'tripleo::profile::base::docker\': step => 1, }\\"\\n"}\n when: step|int == 2\n - block:\n - name: Check for haproxy Kolla configuration\n register: haproxy_kolla_config\n stat: {path: /var/lib/config-data/puppet-generated/haproxy}\n - name: Check if haproxy is already containerized\n set_fact: {haproxy_containerized: \'{{haproxy_kolla_config.stat.isdir | default(false)}}\'}\n - {command: hiera -c /etc/puppet/hiera.yaml bootstrap_nodeid, name: get bootstrap\n nodeid, register: bootstrap_node, tags: common}\n - {name: set is_bootstrap_node fact, set_fact: \'is_bootstrap_node={{bootstrap_node.stdout|lower\n == ansible_hostname|lower}}\', tags: common}\n name: Set HAProxy upgrade facts\n - block:\n - {command: \'cibadmin --query --xpath "//storage-mapping[@id=\'\'haproxy-cert\'\']"\',\n ignore_errors: true, name: Check haproxy public certificate configuration\n in pacemaker, register: haproxy_cert_mounted}\n - name: Disable the haproxy cluster resource\n pacemaker_resource: {resource: haproxy-bundle, state: disable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n when: haproxy_cert_mounted.rc == 6\n - name: Set HAProxy public cert volume mount fact\n set_fact: {haproxy_public_cert_path: /etc/pki/tls/private/overcloud_endpoint.pem,\n haproxy_public_tls_enabled: false}\n - {command: \'pcs resource bundle update haproxy-bundle storage-map add id=haproxy-cert\n source-dir={{ haproxy_public_cert_path }} target-dir=/var/lib/kolla/config_files/src-tls/{{\n haproxy_public_cert_path }} options=ro\', name: Add a bind mount for public\n certificate in the 
haproxy bundle, when: haproxy_cert_mounted.rc == 6 and\n haproxy_public_tls_enabled|bool}\n - name: Enable the haproxy cluster resource\n pacemaker_resource: {resource: haproxy-bundle, state: enable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n when: haproxy_cert_mounted.rc == 6\n name: Mount TLS cert if needed\n when: [step|int == 1, haproxy_containerized|bool, is_bootstrap_node]\n - block:\n - name: Get docker Haproxy image\n set_fact: {docker_image: \'192.168.24.1:8787/rhosp13/openstack-haproxy:2018-07-13.1\',\n docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest\'}\n - {name: Get previous Haproxy image id, register: haproxy_image_id, shell: \'docker\n images | awk \'\'/haproxy.* pcmklatest/{print $3}\'\' | uniq\'}\n - block:\n - {name: Get a list of container using Haproxy image, register: haproxy_containers_to_destroy,\n shell: \'docker ps -a -q -f \'\'ancestor={{haproxy_image_id.stdout}}\'\'\'}\n - {name: Remove any container using the same Haproxy image, shell: \'docker\n rm -fv {{item}}\', with_items: \'{{ haproxy_containers_to_destroy.stdout_lines\n }}\'}\n - {name: Remove previous Haproxy images, shell: \'docker rmi -f {{haproxy_image_id.stdout}}\'}\n when: [haproxy_image_id.stdout != \'\']\n - {command: \'docker pull {{docker_image}}\', name: Pull latest Haproxy images}\n - {name: Retag pcmklatest to latest Haproxy image, shell: \'docker tag {{docker_image}}\n {{docker_image_latest}}\'}\n name: Haproxy fetch and retag container image for pacemaker\n when: step|int == 2\n - block:\n - name: Get docker Mariadb image\n set_fact: {docker_image: \'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1\',\n docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest\'}\n - {name: Get previous Mariadb image id, register: mariadb_image_id, shell: \'docker\n images | awk \'\'/mariadb.* pcmklatest/{print $3}\'\' | uniq\'}\n - block:\n - {name: Get a list of container using Mariadb 
image, register: mariadb_containers_to_destroy,\n shell: \'docker ps -a -q -f \'\'ancestor={{mariadb_image_id.stdout}}\'\'\'}\n - {name: Remove any container using the same Mariadb image, shell: \'docker\n rm -fv {{item}}\', with_items: \'{{ mariadb_containers_to_destroy.stdout_lines\n }}\'}\n - {name: Remove previous Mariadb images, shell: \'docker rmi -f {{mariadb_image_id.stdout}}\'}\n when: [mariadb_image_id.stdout != \'\']\n - {command: \'docker pull {{docker_image}}\', name: Pull latest Mariadb images}\n - {name: Retag pcmklatest to latest Mariadb image, shell: \'docker tag {{docker_image}}\n {{docker_image_latest}}\'}\n name: Mariadb fetch and retag container image for pacemaker\n when: step|int == 2\n - block:\n - name: store update level to update_level variable\n set_fact: {odl_update_level: 1}\n name: Get ODL update level\n - block:\n - {failed_when: false, name: Check if ODL container is present, register: opendaylight_api_container_present,\n shell: \'docker ps -a --format \'\'{{ \'\'{{\'\' }}.Names{{ \'\'}}\'\' }}\'\' | grep \'\'^opendaylight_api$\'\'\'}\n - {name: Update ODL container restart policy to unless-stopped, shell: docker\n update --restart=unless-stopped opendaylight_api, when: opendaylight_api_container_present.rc\n == 0}\n - docker_container: {name: opendaylight_api, state: stopped}\n name: Stop previous ODL container\n - file: {path: /var/lib/opendaylight/data/cache, state: absent}\n name: Delete cache folder\n name: Stop ODL container and remove cache\n when: [step|int == 0, odl_update_level == 1]\n - block:\n - {failed_when: false, name: Check if ODL container is present, register: opendaylight_api_container_present,\n shell: \'docker ps -a --format \'\'{{ \'\'{{\'\' }}.Names{{ \'\'}}\'\' }}\'\' | grep \'\'^opendaylight_api$\'\'\'}\n - {name: Update ODL container restart policy to unless-stopped, shell: docker\n update --restart=unless-stopped opendaylight_api, when: opendaylight_api_container_present.rc\n == 0}\n - docker_container: 
      {name: opendaylight_api, state: stopped}
    name: stop previous ODL container
    when: step|int == 0
  - file: {path: '/var/lib/opendaylight/{{item}}', state: absent}
    name: remove data, journal and snapshots
    when: step|int == 0
    with_items: [snapshots, journal, data]
  - copy: {content: "<config xmlns=\"urn:opendaylight:params:xml:ns:yang:mdsalutil\">\n <upgradeInProgress>true</upgradeInProgress>\n</config>\n",
      dest: /var/lib/config-data/puppet-generated/opendaylight/opt/opendaylight/etc/opendaylight/datastore/initial/config/genius-mdsalutil-config.xml,
      group: 42462, mode: 420, owner: 42462}
    name: Set ODL upgrade flag to True
    when: step|int == 1
  name: Run L2 update tasks that are similar to upgrade_tasks when update level is 2
  when: odl_update_level == 2
- block:
  - iptables: chain=OUTPUT action=insert protocol=tcp destination_port={{ item }} jump=DROP
    name: Block connections to ODL.
    when: step|int == 0
    with_items: [6640, 6653, 6633]
  name: Run L2 update tasks that are similar to upgrade_tasks when update level is 2
  when: odl_update_level == 2
- {async: 30, name: Check pacemaker cluster running before the minor update, pacemaker_cluster: state=online check_and_fail=true, poll: 4, when: step|int == 0}
- {name: Stop pacemaker cluster, pacemaker_cluster: state=offline, when: step|int == 1}
- {name: Start pacemaker cluster, pacemaker_cluster: state=online, when: step|int == 4}
- block:
  - name: Get docker Rabbitmq image
    set_fact: {docker_image: '192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-07-13.1', docker_image_latest: '192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest'}
  - {name: Get previous Rabbitmq image id, register: rabbitmq_image_id, shell: 'docker images | awk ''/rabbitmq.* pcmklatest/{print $3}'' | uniq'}
  - block:
    - {name: Get a list of containers using Rabbitmq image, register: rabbitmq_containers_to_destroy, shell: 'docker ps -a -q -f ''ancestor={{rabbitmq_image_id.stdout}}'''}
    - {name: Remove any container using the same Rabbitmq image, shell: 'docker rm -fv {{item}}', with_items: '{{ rabbitmq_containers_to_destroy.stdout_lines }}'}
    - {name: Remove previous Rabbitmq images, shell: 'docker rmi -f {{rabbitmq_image_id.stdout}}'}
    when: [rabbitmq_image_id.stdout != '']
  - {command: 'docker pull {{docker_image}}', name: Pull latest Rabbitmq images}
  - {name: Retag pcmklatest to latest Rabbitmq image, shell: 'docker tag {{docker_image}} {{docker_image_latest}}'}
  name: Rabbit fetch and retag container image for pacemaker
  when: step|int == 2
- block:
  - name: Get docker Redis image
    set_fact: {docker_image: '192.168.24.1:8787/rhosp13/openstack-redis:2018-07-13.1', docker_image_latest: '192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest'}
  - {name: Get previous Redis image id, register: redis_image_id, shell: 'docker images | awk ''/redis.* pcmklatest/{print $3}'' | uniq'}
  - block:
    - {name: Get a list of containers using Redis image, register: redis_containers_to_destroy, shell: 'docker ps -a -q -f ''ancestor={{redis_image_id.stdout}}'''}
    - {name: Remove any container using the same Redis image, shell: 'docker rm -fv {{item}}', with_items: '{{ redis_containers_to_destroy.stdout_lines }}'}
    - {name: Remove previous Redis images, shell: 'docker rmi -f {{redis_image_id.stdout}}'}
    when: [redis_image_id.stdout != '']
  - {command: 'docker pull {{docker_image}}', name: Pull latest Redis images}
  - {name: Retag pcmklatest to latest Redis image, shell: 'docker tag {{docker_image}} {{docker_image_latest}}'}
  name: Redis fetch and retag container image for pacemaker
  when: step|int == 2
- file: {path: /var/run/rsyncd.pid, state: absent}
  name: Ensure rsyncd pid file is absent
- {name: Check for existing yum.pid, register: yum_pid_file, stat: path=/var/run/yum.pid, when: step|int == 0 or step|int == 3}
- {fail: msg="ERROR existing yum.pid detected - can't continue! Please ensure there is no other package update process for the duration of the minor update workflow. Exiting.",
  name: Exit if existing yum process, when: (step|int == 0 or step|int == 3) and yum_pid_file.stat.exists}
- {name: Update all packages, when: step == "3", yum: name=* state=latest update_cache=yes}
role_data_upgrade_batch_tasks: []
role_data_upgrade_tasks:
- {ignore_errors: true, name: Check for aodh api service running under apache, register: httpd_enabled, shell: httpd -t -D DUMP_VHOSTS | grep -q aodh, tags: common}
- {command: systemctl is-active --quiet httpd, ignore_errors: true, name: Check if httpd is running, register: httpd_running, tags: common}
- name: 'PreUpgrade step0,validation: Check if aodh api is running'
  shell: systemctl status 'httpd' | grep -q aodh
  tags: validation
  when: [step|int == 0, httpd_enabled.rc == 0, httpd_running.rc == 0]
- name: Stop and disable aodh service (running under httpd)
  service: name=httpd state=stopped enabled=no
  when: [step|int == 2, httpd_enabled.rc == 0, httpd_running.rc == 0]
- name: Set fact for removal of openstack-aodh-api package
  set_fact: {remove_aodh_api_package: false}
  when: step|int == 2
- ignore_errors: true
  name: Remove openstack-aodh-api package if operator requests it
  when: [step|int == 2, remove_aodh_api_package|bool]
  yum: name=openstack-aodh-api state=removed
- {command: systemctl is-enabled --quiet openstack-aodh-evaluator, ignore_errors: true, name: Check if aodh_evaluator is deployed, register: aodh_evaluator_enabled, tags: common}
- command: systemctl is-active --quiet openstack-aodh-evaluator
  name: 'PreUpgrade step0,validation: Check service openstack-aodh-evaluator is running'
  tags: validation
  when: [step|int == 0, aodh_evaluator_enabled.rc == 0]
- name: Stop and disable openstack-aodh-evaluator service
  service: name=openstack-aodh-evaluator.service state=stopped enabled=no
  when: [step|int == 2, aodh_evaluator_enabled.rc == 0]
- name: Set fact for removal of openstack-aodh-evaluator package
  set_fact: {remove_aodh_evaluator_package: false}
  when: step|int == 2
- ignore_errors: true
  name: Remove openstack-aodh-evaluator package if operator requests it
  when: [step|int == 2, remove_aodh_evaluator_package|bool]
  yum: name=openstack-aodh-evaluator state=removed
- {command: systemctl is-enabled --quiet openstack-aodh-listener, ignore_errors: true, name: Check if aodh_listener is deployed, register: aodh_listener_enabled, tags: common}
- command: systemctl is-active --quiet openstack-aodh-listener
  name: 'PreUpgrade step0,validation: Check service openstack-aodh-listener is running'
  tags: validation
  when: [step|int == 0, aodh_listener_enabled.rc == 0]
- name: Stop and disable openstack-aodh-listener service
  service: name=openstack-aodh-listener.service state=stopped enabled=no
  when: [step|int == 2, aodh_listener_enabled.rc == 0]
- name: Set fact for removal of openstack-aodh-listener package
  set_fact: {remove_aodh_listener_package: false}
  when: step|int == 2
- ignore_errors: true
  name: Remove openstack-aodh-listener package if operator requests it
  when: [step|int == 2, remove_aodh_listener_package|bool]
  yum: name=openstack-aodh-listener state=removed
- {command: systemctl is-enabled --quiet openstack-aodh-notifier, ignore_errors: true, name: Check if aodh_notifier is deployed, register: aodh_notifier_enabled, tags: common}
- command: systemctl is-active --quiet openstack-aodh-notifier
  name: 'PreUpgrade step0,validation: Check service openstack-aodh-notifier is running'
  tags: validation
  when: [step|int == 0, aodh_notifier_enabled.rc == 0]
- name: Stop and disable openstack-aodh-notifier service
  service: name=openstack-aodh-notifier.service state=stopped enabled=no
  when: [step|int == 2, aodh_notifier_enabled.rc == 0]
- name: Set fact for removal of openstack-aodh-notifier package
  set_fact: {remove_aodh_notifier_package: false}
  when: step|int == 2
- ignore_errors: true
  name: Remove openstack-aodh-notifier package if operator requests it
  when: [step|int == 2, remove_aodh_notifier_package|bool]
  yum: name=openstack-aodh-notifier state=removed
- {command: systemctl is-enabled --quiet openstack-ceilometer-central, ignore_errors: true, name: Check if ceilometer_agent_central is deployed, register: ceilometer_agent_central_enabled, tags: common}
- command: systemctl is-active --quiet openstack-ceilometer-central
  name: 'PreUpgrade step0,validation: Check service openstack-ceilometer-central is running'
  tags: validation
  when: [step|int == 0, ceilometer_agent_central_enabled.rc == 0]
- name: Stop and disable ceilometer agent central service
  service: name=openstack-ceilometer-central state=stopped enabled=no
  when: [step|int == 2, ceilometer_agent_central_enabled.rc == 0]
- name: Set fact for removal of openstack-ceilometer-central package
  set_fact: {remove_ceilometer_central_package: false}
  when: step|int == 2
- ignore_errors: true
  name: Remove openstack-ceilometer-central package if operator requests it
  when: [step|int == 2, remove_ceilometer_central_package|bool]
  yum: name=openstack-ceilometer-central state=removed
- {command: systemctl is-enabled --quiet openstack-ceilometer-notification, ignore_errors: true, name: Check if ceilometer_agent_notification is deployed, register: ceilometer_agent_notification_enabled, tags: common}
- command: systemctl is-active --quiet openstack-ceilometer-notification
  name: 'PreUpgrade step0,validation: Check service openstack-ceilometer-notification is running'
  tags: validation
  when: [step|int == 0, ceilometer_agent_notification_enabled.rc == 0]
- name: Stop and disable ceilometer agent notification service
  service: name=openstack-ceilometer-notification state=stopped enabled=no
  when: [step|int == 2, ceilometer_agent_notification_enabled.rc == 0]
- name: Set fact for removal of openstack-ceilometer-notification package
  set_fact: {remove_ceilometer_notification_package: false}
  when: step|int == 2
- ignore_errors: true
  name: Remove openstack-ceilometer-notification package if operator requests it
  when: [step|int == 2, remove_ceilometer_notification_package|bool]
  yum: name=openstack-ceilometer-notification state=removed
- {command: systemctl is-enabled openstack-cinder-api, ignore_errors: true, name: Check if cinder_api is deployed, register: cinder_api_enabled, tags: common}
- name: 'PreUpgrade step0,validation: Check service openstack-cinder-api is running'
  shell: systemctl is-active --quiet openstack-cinder-api
  tags: validation
  when: [step|int == 0, cinder_api_enabled.rc == 0]
- name: Stop and disable cinder_api service (pre-upgrade not under httpd)
  service: name=openstack-cinder-api state=stopped enabled=no
  when: [step|int == 2, cinder_api_enabled.rc == 0]
- {ignore_errors: true, name: check for cinder_api running under apache (post upgrade), register: cinder_api_apache, shell: httpd -t -D DUMP_VHOSTS | grep -q cinder, when: step|int == 2}
- name: Stop and disable cinder_api service
  service: name=httpd state=stopped enabled=no
  when: [step|int == 2, cinder_api_apache.rc == 0]
- file: {path: /var/spool/cron/cinder, state: absent}
  name: remove old cinder cron jobs
  when: step|int == 2
- name: Set fact for removal of httpd package
  set_fact: {remove_httpd_package: false}
  when: step|int == 2
- ignore_errors: true
  name: Remove httpd package if operator requests it
  when: [step|int == 2, remove_httpd_package|bool]
  yum: name=httpd state=removed
- {command: systemctl is-enabled openstack-cinder-scheduler, ignore_errors: true, name: Check if cinder_scheduler is deployed, register: cinder_scheduler_enabled, tags: common}
- name: 'PreUpgrade step0,validation: Check service openstack-cinder-scheduler is running'
  shell: systemctl is-active --quiet openstack-cinder-scheduler
  tags: validation
  when: [step|int == 0, cinder_scheduler_enabled.rc == 0]
- name: Stop and disable cinder_scheduler service
  service: name=openstack-cinder-scheduler state=stopped enabled=no
  when: [step|int == 2, cinder_scheduler_enabled.rc == 0]
- name: Set fact for removal of openstack-cinder package
  set_fact: {remove_cinder_package: false}
  when: step|int == 2
- ignore_errors: true
  name: Remove openstack-cinder package if operator requests it
  when: [step|int == 2, remove_cinder_package|bool]
  yum: name=openstack-cinder state=removed
- name: Get docker Cinder-Volume image
  set_fact: {cinder_volume_docker_image_latest: '192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest'}
- {changed_when: false, command: 'grep ''^volume_driver[ \t]*='' /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf', ignore_errors: true, name: Check for Cinder-Volume Kolla configuration, register: cinder_volume_kolla_config}
- name: Check if Cinder-Volume is already containerized
  set_fact: {cinder_volume_containerized: '{{cinder_volume_kolla_config|succeeded}}'}
- block:
  - {command: hiera -c /etc/puppet/hiera.yaml bootstrap_nodeid, name: get bootstrap nodeid, register: bootstrap_node, tags: common}
  - {name: set is_bootstrap_node fact, set_fact: 'is_bootstrap_node={{bootstrap_node.stdout|lower == ansible_hostname|lower}}', tags: common}
  - ignore_errors: true
    name: Check cluster resource status
    pacemaker_resource: {check_mode: false, resource: openstack-cinder-volume, state: show}
    register: cinder_volume_res
  - block:
    - name: Disable the openstack-cinder-volume cluster resource
      pacemaker_resource: {resource: openstack-cinder-volume, state: disable, wait_for_resource: true}
      register: output
      retries: 5
      until: output.rc == 0
    - name: Delete the stopped openstack-cinder-volume cluster resource.
      pacemaker_resource: {resource: openstack-cinder-volume, state: delete, wait_for_resource: true}
      register: output
      retries: 5
      until: output.rc == 0
    when: (is_bootstrap_node) and (cinder_volume_res|succeeded)
  - {name: Disable cinder_volume service from boot, service: name=openstack-cinder-volume enabled=no}
  name: Cinder-Volume baremetal to container upgrade tasks
  when: [step|int == 1, not cinder_volume_containerized|bool]
- block:
  - {name: Get cinder_volume image id currently used by pacemaker, register: cinder_volume_current_pcmklatest_id, shell: 'docker images | awk ''/cinder-volume.* pcmklatest/{print $3}'' | uniq'}
  - {name: Temporarily tag the current cinder_volume image id with the upgraded image name, shell: 'docker tag {{cinder_volume_current_pcmklatest_id.stdout}} {{cinder_volume_docker_image_latest}}'}
  name: Prepare the switch to new cinder_volume container image name in pacemaker
  when: [step|int == 0, cinder_volume_containerized|bool]
- ignore_errors: true
  name: Check openstack-cinder-volume cluster resource status
  pacemaker_resource: {check_mode: false, resource: openstack-cinder-volume, state: show}
  register: cinder_volume_pcs_res
- block:
  - name: Disable the cinder_volume cluster resource before container upgrade
    pacemaker_resource: {resource: openstack-cinder-volume, state: disable, wait_for_resource: true}
    register: output
    retries: 5
    until: output.rc == 0
  - {command: 'pcs resource bundle update openstack-cinder-volume container image={{cinder_volume_docker_image_latest}}', name: pcs resource bundle update cinder_volume for new container image name}
  - name: Enable the cinder_volume cluster resource
    pacemaker_resource: {resource: openstack-cinder-volume, state: enable, wait_for_resource: true}
    register: output
    retries: 5
    until: output.rc == 0
    when: null
  name: Update cinder_volume pcs resource bundle for new container image
  when: [step|int == 1, cinder_volume_containerized|bool, is_bootstrap_node, cinder_volume_pcs_res|succeeded]
- block:
  - name: Get docker Cinder-Volume image
    set_fact: {docker_image: '192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-07-13.1', docker_image_latest: '192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest'}
  - {name: Get previous Cinder-Volume image id, register: cinder_volume_image_id, shell: 'docker images | awk ''/cinder-volume.* pcmklatest/{print $3}'' | uniq'}
  - block:
    - {name: Get a list of containers using Cinder-Volume image, register: cinder_volume_containers_to_destroy, shell: 'docker ps -a -q -f ''ancestor={{cinder_volume_image_id.stdout}}'''}
    - {name: Remove any container using the same Cinder-Volume image, shell: 'docker rm -fv {{item}}', with_items: '{{ cinder_volume_containers_to_destroy.stdout_lines }}'}
    - {name: Remove previous Cinder-Volume images, shell: 'docker rmi -f {{cinder_volume_image_id.stdout}}'}
    when: [cinder_volume_image_id.stdout != '']
  - {command: 'docker pull {{docker_image}}', name: Pull latest Cinder-Volume images}
  - {name: Retag pcmklatest to latest Cinder-Volume image, shell: 'docker tag {{docker_image}} {{docker_image_latest}}'}
  name: Retag the pacemaker image if containerized
  when: [step|int == 3, cinder_volume_containerized|bool]
- {name: Install docker packages on upgrade if missing, when: step|int == 3, yum: name=docker state=latest}
- {command: systemctl is-enabled --quiet openstack-glance-api, ignore_errors: true, name: Check if glance_api is deployed, register: glance_api_enabled, tags: common}
- command: systemctl is-active --quiet openstack-glance-api
  name: 'PreUpgrade step0,validation: Check service openstack-glance-api is running'
  tags: validation
  when: [step|int == 0, glance_api_enabled.rc == 0]
- name: Stop and disable glance_api service
  service: name=openstack-glance-api state=stopped enabled=no
  when: [step|int == 2, glance_api_enabled.rc == 0]
- name: Set fact for removal of openstack-glance package
  set_fact: {remove_glance_package: false}
  when: step|int == 2
- ignore_errors: true
  name: Remove openstack-glance package if operator requests it
  when: [step|int == 2, remove_glance_package|bool]
  yum: name=openstack-glance state=removed
- {name: Stop and disable glance_registry service on upgrade, service: name=openstack-glance-registry state=stopped enabled=no, when: step|int == 1}
- {command: systemctl is-enabled --quiet openstack-gnocchi-api, ignore_errors: true, name: Check if gnocchi_api is deployed, register: gnocchi_api_enabled, tags: common}
- {ignore_errors: true, name: Check for gnocchi_api running under apache, register: httpd_enabled, shell: httpd -t -D DUMP_VHOSTS | grep -q gnocchi, tags: common}
- command: systemctl is-active --quiet openstack-gnocchi-api
  name: 'PreUpgrade step0,validation: Check service openstack-gnocchi-api is running'
  tags: validation
  when: [step|int == 0, gnocchi_api_enabled.rc == 0, httpd_enabled.rc != 0]
- name: Stop and disable gnocchi_api service
  service: name=openstack-gnocchi-api state=stopped enabled=no
  when: [step|int == 2, gnocchi_api_enabled.rc == 0, httpd_enabled.rc != 0]
- {command: systemctl is-active --quiet httpd, ignore_errors: true, name: Check if httpd service is running, register: httpd_running, tags: common}
- name: 'PreUpgrade step0,validation: Check if gnocchi_api_wsgi is running'
  shell: systemctl status 'httpd' | grep -q gnocchi
  tags: validation
  when: [step|int == 0, httpd_enabled.rc == 0, httpd_running.rc == 0]
- name: Stop and disable httpd service
  service: name=httpd state=stopped enabled=no
  when: [step|int == 2, httpd_enabled.rc == 0, httpd_running.rc == 0]
- {command: systemctl is-enabled --quiet openstack-gnocchi-metricd, ignore_errors: true, name: Check if gnocchi_metricd is deployed, register: gnocchi_metricd_enabled, tags: common}
- command: systemctl is-active --quiet openstack-gnocchi-metricd
  name: 'PreUpgrade step0,validation: Check service openstack-gnocchi-metricd is running'
  tags: validation
  when: [step|int == 0, gnocchi_metricd_enabled.rc == 0]
- name: Stop and disable openstack-gnocchi-metricd service
  service: name=openstack-gnocchi-metricd.service state=stopped enabled=no
  when: [step|int == 2, gnocchi_metricd_enabled.rc == 0]
- {command: systemctl is-enabled --quiet openstack-gnocchi-statsd, ignore_errors: true, name: Check if gnocchi_statsd is deployed, register: gnocchi_statsd_enabled, tags: common}
- command: systemctl is-active --quiet openstack-gnocchi-statsd
  name: 'PreUpgrade step0,validation: Check service openstack-gnocchi-statsd is running'
  tags: validation
  when: [step|int == 0, gnocchi_statsd_enabled.rc == 0]
- name: Stop and disable openstack-gnocchi-statsd service
  service: name=openstack-gnocchi-statsd.service state=stopped enabled=no
  when: [step|int == 2, gnocchi_statsd_enabled.rc == 0]
- name: Get docker haproxy image
  set_fact: {haproxy_docker_image_latest: '192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest'}
- block:
  - name: Check for haproxy Kolla configuration
    register: haproxy_kolla_config
    stat: {path: /var/lib/config-data/puppet-generated/haproxy}
  - name: Check if haproxy is already containerized
    set_fact: {haproxy_containerized: '{{haproxy_kolla_config.stat.isdir | default(false)}}'}
  - {command: hiera -c /etc/puppet/hiera.yaml bootstrap_nodeid, name: get bootstrap nodeid, register: bootstrap_node, tags: common}
  - {name: set is_bootstrap_node fact, set_fact: 'is_bootstrap_node={{bootstrap_node.stdout|lower == ansible_hostname|lower}}', tags: common}
  name: Set HAProxy upgrade facts
- block:
  - ignore_errors: true
    name: Check cluster resource status
    pacemaker_resource: {check_mode: true, resource: haproxy, state: started}
    register: haproxy_res
  - block:
    - name: Disable the haproxy cluster resource.
      pacemaker_resource: {resource: haproxy, state: disable, wait_for_resource: true}
      register: output
      retries: 5
      until: output.rc == 0
    - name: Delete the stopped haproxy cluster resource.
      pacemaker_resource: {resource: haproxy, state: delete, wait_for_resource: true}
      register: output
      retries: 5
      until: output.rc == 0
    when: (is_bootstrap_node) and (haproxy_res|succeeded)
  name: haproxy baremetal to container upgrade tasks
  when: [step|int == 1, not haproxy_containerized|bool]
- block:
  - {name: Get haproxy image id currently used by pacemaker, register: haproxy_current_pcmklatest_id, shell: 'docker images | awk ''/haproxy.* pcmklatest/{print $3}'' | uniq'}
  - {name: Temporarily tag the current haproxy image id with the upgraded image name, shell: 'docker tag {{haproxy_current_pcmklatest_id.stdout}} {{haproxy_docker_image_latest}}'}
  name: Prepare the switch to new haproxy container image name in pacemaker
  when: [step|int == 0, haproxy_containerized|bool]
- ignore_errors: true
  name: Check haproxy-bundle cluster resource status
  pacemaker_resource: {check_mode: false, resource: haproxy-bundle, state: show}
  register: haproxy_pcs_res
- block:
  - name: Disable the haproxy cluster resource before container upgrade
    pacemaker_resource: {resource: haproxy-bundle, state: disable, wait_for_resource: true}
    register: output
    retries: 5
    until: output.rc == 0
  - block:
    - {command: 'cibadmin --query --xpath "//storage-mapping[@id=''haproxy-var-lib'']"', ignore_errors: true, name: Check haproxy stats socket configuration in pacemaker, register: haproxy_stats_exposed}
    - {command: 'cibadmin --query --xpath "//storage-mapping[@id=''haproxy-cert'']"', ignore_errors: true, name: Check haproxy public certificate configuration in pacemaker, register: haproxy_cert_mounted}
    - {command: pcs resource bundle update haproxy-bundle storage-map add id=haproxy-var-lib source-dir=/var/lib/haproxy target-dir=/var/lib/haproxy options=rw, name: Add a bind mount for stats socket in the haproxy bundle, when: haproxy_stats_exposed.rc == 6}
    - name: Set HAProxy public cert volume mount fact
      set_fact: {haproxy_public_cert_path: /etc/pki/tls/private/overcloud_endpoint.pem, haproxy_public_tls_enabled: false}
    - command: pcs resource bundle update haproxy-bundle storage-map add id=haproxy-cert source-dir={{ haproxy_public_cert_path }} target-dir=/var/lib/kolla/config_files/src-tls/{{ haproxy_public_cert_path }} options=ro
      name: Add a bind mount for public certificate in the haproxy bundle
      when: [haproxy_cert_mounted.rc == 6, haproxy_public_tls_enabled|bool]
    name: Expose HAProxy stats socket on the host and mount TLS cert if needed
  - {command: 'pcs resource bundle update haproxy-bundle container image={{haproxy_docker_image_latest}}', name: Update the haproxy bundle to use the new container image name}
  - name: Enable the haproxy cluster resource
    pacemaker_resource: {resource: haproxy-bundle, state: enable, wait_for_resource: true}
    register: output
    retries: 5
    until: output.rc == 0
  name: Update haproxy pcs resource bundle for new container image
  when: [step|int == 1, haproxy_containerized|bool, is_bootstrap_node, haproxy_pcs_res|succeeded]
- block:
  - name: Get docker Haproxy image
    set_fact: {docker_image: '192.168.24.1:8787/rhosp13/openstack-haproxy:2018-07-13.1', docker_image_latest: '192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest'}
  - {name: Get previous Haproxy image id, register: haproxy_image_id, shell: 'docker images | awk ''/haproxy.* pcmklatest/{print $3}'' | uniq'}
  - block:
    - {name: Get a list of containers using Haproxy image, register: haproxy_containers_to_destroy, shell: 'docker ps -a -q -f ''ancestor={{haproxy_image_id.stdout}}'''}
    - {name: Remove any container using the same Haproxy image, shell: 'docker rm -fv {{item}}', with_items: '{{ haproxy_containers_to_destroy.stdout_lines }}'}
    - {name: Remove previous Haproxy images, shell: 'docker rmi -f {{haproxy_image_id.stdout}}'}
    when: [haproxy_image_id.stdout != '']
  - {command: 'docker pull {{docker_image}}', name: Pull latest Haproxy images}
  - {name: Retag pcmklatest to latest Haproxy image, shell: 'docker tag {{docker_image}} {{docker_image_latest}}'}
  name: Retag the pacemaker image if containerized
  when: [step|int == 3, haproxy_containerized|bool]
- {command: systemctl is-enabled --quiet openstack-heat-api, ignore_errors: true, name: Check if heat_api is deployed, register: heat_api_enabled, tags: common}
- {ignore_errors: true, name: Check for heat_api running under apache, register: httpd_enabled, shell: httpd -t -D DUMP_VHOSTS | grep -q heat_api_wsgi, tags: common}
- command: systemctl is-active --quiet openstack-heat-api
  name: 'PreUpgrade step0,validation: Check service openstack-heat-api is running'
  tags: validation
  when: [step|int == 0, heat_api_enabled.rc == 0, httpd_enabled.rc != 0]
- name: Stop and disable heat_api service (pre-upgrade not under httpd)
  service: name=openstack-heat-api state=stopped enabled=no
  when: [step|int == 2, heat_api_enabled.rc == 0, httpd_enabled.rc != 0]
- name: 'PreUpgrade step0,validation: Check if heat_api_wsgi is running'
  shell: systemctl status 'httpd' | grep -q heat_api_wsgi
  tags: validation
  when: [step|int == 0, httpd_enabled.rc == 0, httpd_running.rc == 0]
- name: Stop heat_api service (running under httpd)
  service: name=httpd state=stopped
  when: [step|int == 2, httpd_enabled.rc == 0, httpd_running.rc == 0]
- file: {path: /var/spool/cron/heat, state: absent}
  name: remove old heat cron jobs
  when: step|int == 2
- {command: systemctl is-enabled openstack-heat-api-cloudwatch, ignore_errors: true, name: Check if heat_api_cloudwatch is deployed, register: heat_api_cloudwatch_enabled, when: step|int == 1}
- name: Stop and disable heat_api_cloudwatch service (pre-upgrade not under httpd)
  service: name=openstack-heat-api-cloudwatch state=stopped enabled=no
  when: [step|int == 1, heat_api_cloudwatch_enabled.rc == 0]
- {command: systemctl is-enabled --quiet openstack-heat-api-cfn, ignore_errors: true, name: Check if heat_api_cfn is deployed, register: heat_api_cfn_enabled, tags: common}
- {ignore_errors: true, name: Check for heat_api_cfn running under apache, register: httpd_enabled, shell: httpd -t -D DUMP_VHOSTS | grep -q heat_api_cfn_wsgi, tags: common}
- command: systemctl is-active --quiet openstack-heat-api-cfn
  name: 'PreUpgrade step0,validation: Check service openstack-heat-api-cfn is running'
  tags: validation
  when: [step|int == 0, heat_api_cfn_enabled.rc == 0, httpd_enabled.rc != 0]
- name: Stop and disable heat_api_cfn service (pre-upgrade not under httpd)
  service: name=openstack-heat-api-cfn state=stopped enabled=no
  when: [step|int == 2, heat_api_cfn_enabled.rc == 0, httpd_enabled.rc != 0]
- name: 'PreUpgrade step0,validation: Check if heat_api_cfn_wsgi is running'
  shell: systemctl status 'httpd' | grep -q heat_api_cfn_wsgi
  tags: validation
  when: [step|int == 0, httpd_enabled.rc == 0, httpd_running.rc == 0]
- name: Stop heat_api_cfn service (running under httpd)
  service: name=httpd state=stopped
  when: [step|int == 2, httpd_enabled.rc == 0, httpd_running.rc == 0]
- {command: systemctl is-enabled --quiet openstack-heat-engine, ignore_errors: true, name: Check if heat_engine is deployed, register: heat_engine_enabled, tags: common}
- command: systemctl is-active --quiet openstack-heat-engine
  name: 'PreUpgrade step0,validation: Check service openstack-heat-engine is running'
  tags: validation
  when: [step|int == 0, heat_engine_enabled.rc == 0]
- name: Stop and disable heat_engine service
  service: name=openstack-heat-engine state=stopped enabled=no
  when: [step|int == 2, heat_engine_enabled.rc == 0]
- {ignore_errors: true, name: Check for horizon running under apache, register: httpd_enabled, shell: httpd -t -D DUMP_VHOSTS | grep -q horizon_vhost, tags: common}
- name: 'PreUpgrade step0,validation: Check if horizon is running'
  shell: systemctl is-active --quiet httpd
  tags: validation
  when: [step|int == 0, httpd_enabled.rc == 0]
- name: Stop and disable horizon service (running under httpd)
  service: name=httpd state=stopped enabled=no
  when: [step|int == 2, httpd_enabled.rc == 0]
- {command: systemctl is-enabled --quiet iscsid, ignore_errors: true, name: Check if iscsid service is deployed, register: iscsid_enabled, tags: common}
- command: systemctl is-active --quiet iscsid
  name: 'PreUpgrade step0,validation: Check if iscsid is running'
  tags: validation
  when: [step|int == 0, iscsid_enabled.rc == 0]
- name: Stop and disable iscsid service
  service: name=iscsid state=stopped enabled=no
  when: [step|int == 2, iscsid_enabled.rc == 0]
- {command: systemctl is-enabled --quiet iscsid.socket, ignore_errors: true, name: Check if iscsid.socket service is deployed, register: iscsid_socket_enabled, tags: common}
- command: systemctl is-active --quiet iscsid.socket
  name: 'PreUpgrade step0,validation: Check if iscsid.socket is running'
  tags: validation
  when: [step|int == 0, iscsid_socket_enabled.rc == 0]
- name: Stop and disable iscsid.socket service
  service: name=iscsid.socket state=stopped enabled=no
  when: [step|int == 2, iscsid_socket_enabled.rc == 0]
- {ignore_errors: true, name: Check for keystone running under apache, register: httpd_enabled, shell: httpd -t -D DUMP_VHOSTS | grep -q keystone_wsgi, tags: common}
- name: 'PreUpgrade step0,validation: Check if keystone_wsgi is running under httpd'
  shell: systemctl status 'httpd' | grep -q keystone
  tags: validation
  when: [step|int == 0, httpd_enabled.rc == 0, httpd_running.rc == 0]
- name: Stop and disable keystone service (running under httpd)
  service: name=httpd state=stopped enabled=no
  when: [step|int == 2, httpd_enabled.rc == 0, httpd_running.rc == 0]
- file: {path: /var/spool/cron/keystone, state: absent}
  name: remove old keystone cron jobs
  when: step|int == 2
- {command: systemctl is-enabled --quiet memcached, ignore_errors: true, name: Check if memcached is deployed, register: memcached_enabled, tags: common}
- command: systemctl is-active --quiet memcached
  name: 'PreUpgrade step0,validation: Check service memcached is running'
  tags: validation
  when: [step|int == 0, memcached_enabled.rc == 0]
- name: Stop and disable memcached service
  service: name=memcached state=stopped enabled=no
  when: [step|int == 2, memcached_enabled.rc == 0]
- {name: Check for mongodb service, register: mongod_service, stat: path=/usr/lib/systemd/system/mongod.service, tags: common}
- name: Stop and disable mongodb service on upgrade
  service: name=mongod state=stopped enabled=no
  when: [step|int == 1, mongod_service.stat.exists]
- name: Get docker Mysql image
  set_fact: {mysql_docker_image_latest: '192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest'}
- name: Check for Mysql Kolla configuration
  register: mysql_kolla_config
  stat: {path: /var/lib/config-data/puppet-generated/mysql}
- name: Check if Mysql is already containerized
  set_fact: {mysql_containerized: '{{mysql_kolla_config.stat.isdir | default(false)}}'}
- {command: hiera -c /etc/puppet/hiera.yaml bootstrap_nodeid, name: get bootstrap nodeid, register: bootstrap_node, tags: common}
- {name: set is_bootstrap_node fact, set_fact: 'is_bootstrap_node={{bootstrap_node.stdout|lower == ansible_hostname|lower}}', tags: common}
- block:
  - ignore_errors: true
    name: Check cluster resource status
    pacemaker_resource: {check_mode: true, resource: galera, state: master}
    register: galera_res
  - block:
    - name: Disable the galera cluster resource
      pacemaker_resource: {resource: galera, state: disable, wait_for_resource: true}
      register: output
      retries: 5
      until: output.rc == 0
    - name: Delete the stopped galera cluster resource.
      pacemaker_resource: {resource: galera, state: delete, wait_for_resource: true}
      register: output
      retries: 5
      until: output.rc == 0
    when: (is_bootstrap_node) and (galera_res|succeeded)
  - {name: Disable mysql service, service: name=mariadb enabled=no}
  - {file: state=absent path=/etc/xinetd.d/galera-monitor, name: Remove clustercheck service from xinetd}
  - {name: Restart xinetd service after clustercheck removal, service: name=xinetd state=restarted}
  name: Mysql baremetal to container upgrade tasks
  when: [step|int == 1, not mysql_containerized|bool]
- block:
  - {name: Get galera image id currently used by pacemaker, register: galera_current_pcmklatest_id, shell: 'docker images | awk ''/mariadb.* pcmklatest/{print $3}'' | uniq'}
  - {name: Temporarily tag the current galera image id with the upgraded image name, shell: 'docker tag {{galera_current_pcmklatest_id.stdout}} {{mysql_docker_image_latest}}'}
  name: Prepare the switch to new galera container image name in pacemaker
  when: [step|int == 0, mysql_containerized|bool]
- ignore_errors: true
  name: Check galera cluster resource status
  pacemaker_resource: {check_mode: false, resource: galera, state: show}
  register: galera_pcs_res
- block:
  - name: Disable the galera cluster resource before container upgrade
    pacemaker_resource: {resource: galera, state: disable, wait_for_resource: true}
    register: output
    retries: 5
    until: output.rc == 0
  - block:
    - {command: 'cibadmin --query --xpath "//storage-mapping[@id=''mysql-log'']"', ignore_errors: true, name: Check Mysql logging configuration in pacemaker, register: mysql_logs_moved}
    - block:
      - {command: pcs resource bundle update galera-bundle storage-map add id=mysql-log source-dir=/var/log/containers/mysql target-dir=/var/log/mysql options=rw, name: Add a bind mount for logging in the galera bundle}
      - {command: pcs resource update galera log=/var/log/mysql/mysqld.log, name: Reconfigure Mysql log file in the galera resource agent}
      name: Change Mysql logging configuration in pacemaker
      when: mysql_logs_moved.rc == 6
    name: Move Mysql logging to /var/log/containers
  - {command: 'pcs resource bundle update galera-bundle container image={{mysql_docker_image_latest}}', name: Update the galera bundle to use the new container image name}
  - name: Enable the galera cluster resource
    pacemaker_resource: {resource: galera, state: enable, wait_for_resource: true}
    register: output
    retries: 5
    until: output.rc == 0
  name: Update galera pcs resource bundle for new container image
  when: [step|int == 1, mysql_containerized|bool, is_bootstrap_node, galera_pcs_res|succeeded]
- block:
  - name: Get docker Mariadb image
    set_fact: {docker_image: '192.168.24.1:8787/rhosp13/openstack-mariadb:2018-07-13.1', docker_image_latest: '192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest'}
  - {name: Get previous Mariadb image id, register: mariadb_image_id, shell: 'docker images | awk ''/mariadb.* pcmklatest/{print $3}'' | uniq'}
  - block:
    - {name: Get a list of containers using Mariadb image, register: mariadb_containers_to_destroy, shell: 'docker ps -a -q -f ''ancestor={{mariadb_image_id.stdout}}'''}
    - {name: Remove any container using the same Mariadb image, shell: 'docker rm -fv {{item}}', with_items: '{{ mariadb_containers_to_destroy.stdout_lines }}'}
    - {name: Remove previous Mariadb images, shell: 'docker rmi -f {{mariadb_image_id.stdout}}'}
    when: [mariadb_image_id.stdout != '']
  - {command: 'docker pull {{docker_image}}', name: Pull latest Mariadb images}
  - {name: Retag pcmklatest to latest Mariadb image, shell: 'docker tag {{docker_image}} {{docker_image_latest}}'}
  name: Retag the pacemaker image if containerized
  when: [step|int == 3, mysql_containerized|bool]
- block:
  - {name: Update host mariadb packages, when: step|int == 3, yum: name=mariadb-server-galera state=latest}
  - name: Mysql upgrade script
    set_fact: {mysql_upgrade_script: '{% if mysql_containerized %}kolla_set_configs; {% endif %} chown -R mysql:mysql /var/lib/mysql; mysqld_safe --user=mysql --wsrep-provider=none --skip-networking --wsrep-on=off & timeout 60 sh -c ''while ! mysqladmin ping --silent; do sleep 1; done''; mysql_upgrade; mysqladmin shutdown'}
  - name: Bind mounts for temporary container
    set_fact:
      mysql_upgrade_db_bind_mounts: ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro',
        '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro',
        '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro',
        '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log',
        '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/etc/puppet:/etc/puppet:ro',
        '/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json',
        '/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro',
        '/var/lib/mysql:/var/lib/mysql']
  - {name: Upgrade Mysql database from a temporary container, shell: '/usr/bin/docker run --rm --log-driver=syslog -u root --net=host -e "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS" -v {{ mysql_upgrade_db_bind_mounts | union([''/tmp/mariadb-upgrade:/var/log/mariadb:rw'']) | join('' -v '')}} "192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest" /bin/bash -ecx "{{mysql_upgrade_script}}"', when: mysql_containerized|bool}
  - {name: Upgrade Mysql database from the host, shell: '/bin/bash -ecx "{{mysql_upgrade_script}}"', when: not mysql_containerized|bool}
  name: Check and upgrade Mysql database after major version upgrade
  when: step|int == 3
- {command: systemctl is-enabled --quiet neutron-server, ignore_errors: true, name: Check if neutron_server is deployed, register: neutron_server_enabled, tags: common}
- command: systemctl is-active --quiet neutron-server
  name: 'PreUpgrade step0,validation: Check service neutron-server is running'
  tags: validation
  when: [step|int == 0, neutron_server_enabled.rc == 0]
- name: Stop and disable neutron_api service
  service: name=neutron-server state=stopped enabled=no
  when: [step|int == 2, neutron_server_enabled.rc == 0]
- name: Set fact for removal of openstack-neutron package
  set_fact: {remove_neutron_package: false}
  when: step|int == 2
- ignore_errors: true
  name: Remove openstack-neutron package if operator requests it
  when: [step|int == 2, remove_neutron_package|bool]
  yum: name=openstack-neutron state=removed
- {command: systemctl is-enabled --quiet neutron-dhcp-agent, ignore_errors: true, name: Check if neutron_dhcp_agent is deployed, register: neutron_dhcp_agent_enabled, tags: common}
- command: systemctl is-active --quiet neutron-dhcp-agent
  name: 'PreUpgrade step0,validation: Check service neutron-dhcp-agent is running'
  tags: validation
  when: [step|int == 0, neutron_dhcp_agent_enabled.rc == 0]
- name: Stop and disable neutron_dhcp service
  service: name=neutron-dhcp-agent state=stopped enabled=no
  when: [step|int == 2, neutron_dhcp_agent_enabled.rc == 0]
- {command: systemctl is-enabled --quiet neutron-metadata-agent, ignore_errors: true, name: Check if neutron_metadata_agent is deployed, register: neutron_metadata_agent_enabled, tags: common}
- command: systemctl is-active --quiet neutron-metadata-agent
  name: 'PreUpgrade step0,validation: Check service neutron-metadata-agent is running'
  tags: validation
  when: [step|int == 0, neutron_metadata_agent_enabled.rc == 0]
- name: Stop and disable neutron_metadata service
  service: name=neutron-metadata-agent state=stopped enabled=no
  when: [step|int == 2, neutron_metadata_agent_enabled.rc == 0]
- {command: systemctl is-enabled --quiet 
openstack-nova-api, ignore_errors: true,\n name: Check if nova_api is deployed, register: nova_api_enabled, tags: common}\n - {ignore_errors: true, name: Check for nova-api running under apache, register: httpd_enabled,\n shell: httpd -t -D DUMP_VHOSTS | grep -q \'nova\', tags: common}\n - command: systemctl is-active --quiet openstack-nova-api\n name: \'PreUpgrade step0,validation: Check service openstack-nova-api is running\'\n tags: validation\n when: [step|int == 0, nova_api_enabled.rc == 0, httpd_enabled.rc != 0]\n - name: Stop and disable nova_api service\n service: name=openstack-nova-api state=stopped enabled=no\n when: [step|int == 2, nova_api_enabled.rc == 0, httpd_enabled.rc != 0]\n - name: \'PreUpgrade step0,validation: Check if nova_wsgi is running\'\n shell: systemctl status \'httpd\' | grep -q \'nova\'\n tags: validation\n when: [step|int == 0, httpd_enabled.rc == 0, httpd_running.rc == 0]\n - name: Stop nova_api service (running under httpd)\n service: name=httpd state=stopped\n when: [step|int == 2, httpd_enabled.rc == 0, httpd_running.rc == 0]\n - name: Set fact for removal of openstack-nova-api package\n set_fact: {remove_nova_api_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-nova-api package if operator requests it\n when: [step|int == 2, remove_nova_api_package|bool]\n yum: name=openstack-nova-api state=removed\n - file: {path: /var/spool/cron/nova, state: absent}\n name: remove old nova cron jobs\n when: step|int == 2\n - {command: systemctl is-enabled --quiet openstack-nova-conductor, ignore_errors: true,\n name: Check if nova_conductor is deployed, register: nova_conductor_enabled,\n tags: common}\n - {ini_file: dest=/etc/nova/nova.conf section=upgrade_levels option=compute value=,\n name: Set compute upgrade level to auto, when: step|int == 1}\n - command: systemctl is-active --quiet openstack-nova-conductor\n name: \'PreUpgrade step0,validation: Check service openstack-nova-conductor is\n 
running\'\n tags: validation\n when: [step|int == 0, nova_conductor_enabled.rc == 0]\n - name: Stop and disable nova_conductor service\n service: name=openstack-nova-conductor state=stopped enabled=no\n when: [step|int == 2, nova_conductor_enabled.rc == 0]\n - name: Set fact for removal of openstack-nova-conductor package\n set_fact: {remove_nova_conductor_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-nova-conductor package if operator requests it\n when: [step|int == 2, remove_nova_conductor_package|bool]\n yum: name=openstack-nova-conductor state=removed\n - {command: systemctl is-enabled --quiet openstack-nova-consoleauth, ignore_errors: true,\n name: Check if nova_consoleauth is deployed, register: nova_consoleauth_enabled,\n tags: common}\n - command: systemctl is-active --quiet openstack-nova-consoleauth\n name: \'PreUpgrade step0,validation: Check service openstack-nova-consoleauth\n is running\'\n tags: validation\n when: [step|int == 0, nova_consoleauth_enabled.rc == 0]\n - name: Stop and disable nova_consoleauth service\n service: name=openstack-nova-consoleauth state=stopped enabled=no\n when: [step|int == 2, nova_consoleauth_enabled.rc == 0]\n - name: Set fact for removal of openstack-nova-console package\n set_fact: {remove_nova_console_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-nova-console package if operator requests it\n when: [step|int == 2, remove_nova_console_package|bool]\n yum: name=openstack-nova-console state=removed\n - {command: systemctl is-enabled --quiet openstack-nova-api, ignore_errors: true,\n name: Check if nova_api_metadata is deployed, register: nova_metadata_enabled,\n tags: common}\n - command: systemctl is-active --quiet openstack-nova-api\n name: \'PreUpgrade step0,validation: Check service openstack-nova-api is running\'\n tags: validation\n when: [step|int == 0, nova_metadata_enabled.rc == 0]\n - name: Stop and disable nova_api service\n 
service: name=openstack-nova-api state=stopped enabled=no\n when: [step|int == 2, nova_metadata_enabled.rc == 0]\n - {ignore_errors: true, name: Check for nova placement running under apache, register: httpd_enabled,\n shell: httpd -t -D DUMP_VHOSTS | grep -q placement_wsgi, tags: common}\n - name: \'PreUpgrade step0,validation: Check if placement_wsgi is running\'\n shell: systemctl status \'httpd\' | grep -q placement_wsgi\n tags: validation\n when: [step|int == 0, httpd_enabled.rc == 0, httpd_running.rc == 0]\n - name: Stop and disable nova_placement service (running under httpd)\n service: name=httpd state=stopped enabled=no\n when: [step|int == 2, httpd_enabled.rc == 0, httpd_running.rc == 0]\n - {command: systemctl is-enabled --quiet openstack-nova-scheduler, ignore_errors: true,\n name: Check if nova_scheduler is deployed, register: nova_scheduler_enabled,\n tags: common}\n - command: systemctl is-active --quiet openstack-nova-scheduler\n name: \'PreUpgrade step0,validation: Check service openstack-nova-scheduler is\n running\'\n tags: validation\n when: [step|int == 0, nova_scheduler_enabled.rc == 0]\n - name: Stop and disable nova_scheduler service\n service: name=openstack-nova-scheduler state=stopped enabled=no\n when: [step|int == 2, nova_scheduler_enabled.rc == 0]\n - name: Set fact for removal of openstack-nova-scheduler package\n set_fact: {remove_nova_scheduler_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-nova-scheduler package if operator requests it\n when: [step|int == 2, remove_nova_scheduler_package|bool]\n yum: name=openstack-nova-scheduler state=removed\n - {command: systemctl is-enabled --quiet openstack-nova-novncproxy, ignore_errors: true,\n name: Check if nova vncproxy is deployed, register: nova_vncproxy_enabled, tags: common}\n - command: systemctl is-active --quiet openstack-nova-novncproxy\n name: \'PreUpgrade step0,validation: Check service openstack-nova-novncproxy\n is running\'\n tags: 
validation\n when: [step|int == 0, nova_vncproxy_enabled.rc == 0]\n - name: Stop and disable nova_vnc_proxy service\n service: name=openstack-nova-novncproxy state=stopped enabled=no\n when: [step|int == 2, nova_vncproxy_enabled.rc == 0]\n - name: Set fact for removal of openstack-nova-novncproxy package\n set_fact: {remove_nova_novncproxy_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-nova-novncproxy package if operator requests it\n when: [step|int == 2, remove_nova_novncproxy_package|bool]\n yum: name=openstack-nova-novncproxy state=removed\n - {command: systemctl is-enabled --quiet opendaylight, ignore_errors: true, name: Check\n if opendaylight is deployed, register: opendaylight_enabled, tags: common}\n - command: systemctl is-active --quiet opendaylight\n name: \'PreUpgrade step0,validation: Check service opendaylight is running\'\n tags: validation\n when: [step|int == 0, opendaylight_enabled.rc == 0]\n - name: Stop and disable opendaylight_api service\n service: name=opendaylight state=stopped enabled=no\n when: [step|int == 2, opendaylight_enabled.rc == 0]\n - block:\n - {failed_when: false, name: Check if ODL container is present, register: opendaylight_api_container_present,\n shell: \'docker ps -a --format \'\'{{ \'\'{{\'\' }}.Names{{ \'\'}}\'\' }}\'\' | grep \'\'^opendaylight_api$\'\'\'}\n - {name: Update ODL container restart policy to unless-stopped, shell: docker\n update --restart=unless-stopped opendaylight_api, when: opendaylight_api_container_present.rc\n == 0}\n - docker_container: {name: opendaylight_api, state: stopped}\n name: stop previous ODL container\n when: step|int == 0\n - file: {path: \'/var/lib/opendaylight/{{item}}\', state: absent}\n name: remove data, journal and snapshots\n when: step|int == 0\n with_items: [snapshots, journal, data]\n - copy: {content: "<config xmlns=\\"urn:opendaylight:params:xml:ns:yang:mdsalutil\\"\\\n >\\n <upgradeInProgress>true</upgradeInProgress>\\n</config>\\n", 
dest: /var/lib/config-data/puppet-generated/opendaylight/opt/opendaylight/etc/opendaylight/datastore/initial/config/genius-mdsalutil-config.xml,\n group: 42462, mode: 420, owner: 42462}\n name: Set ODL upgrade flag to True\n when: step|int == 1\n name: ODL container L2 update and upgrade tasks\n - {ignore_errors: true, name: Check openvswitch version., register: ovs_version,\n shell: \'rpm -qa | awk -F- \'\'/^openvswitch-2/{print $2 "-" $3}\'\'\', when: step|int\n == 2}\n - {ignore_errors: true, name: Check openvswitch packaging., register: ovs_packaging_issue,\n shell: \'rpm -q --scripts openvswitch | awk \'\'/postuninstall/,/*/\'\' | grep -q\n "systemctl.*try-restart"\', when: step|int == 2}\n - block:\n - file: {path: /root/OVS_UPGRADE, state: absent}\n name: \'Ensure empty directory: emptying.\'\n - file: {group: root, mode: 488, owner: root, path: /root/OVS_UPGRADE, state: directory}\n name: \'Ensure empty directory: creating.\'\n - {command: yum makecache, name: Make yum cache.}\n - {command: yumdownloader --destdir /root/OVS_UPGRADE --resolve openvswitch,\n name: Download OVS packages.}\n - {name: Get rpm list for manual upgrade of OVS., register: ovs_list_of_rpms,\n shell: ls -1 /root/OVS_UPGRADE/*.rpm}\n - args: {chdir: /root/OVS_UPGRADE}\n name: Manual upgrade of OVS\n shell: \'rpm -U --test {{item}} 2>&1 | grep "already installed" || \\\n\n rpm -U --replacepkgs --notriggerun --nopostun {{item}};\n\n \'\n with_items: [\'{{ovs_list_of_rpms.stdout_lines}}\']\n when: [step|int == 2, \'\'\'2.5.0-14\'\' in ovs_version.stdout|default(\'\'\'\') or ovs_packaging_issue|default(false)|succeeded\']\n - {command: systemctl is-enabled openvswitch, ignore_errors: true, name: Check\n if openvswitch is deployed, register: openvswitch_enabled, tags: common}\n - command: systemctl is-active --quiet openvswitch\n name: \'PreUpgrade step0,validation: Check service openvswitch is running\'\n tags: validation\n when: [step|int == 0, openvswitch_enabled.rc == 0]\n - name: Stop 
openvswitch service\n service: name=openvswitch state=stopped\n when: [step|int == 1, openvswitch_enabled.rc == 0]\n - block:\n - iptables: chain=OUTPUT action=insert protocol=tcp destination_port={{ item\n }} jump=DROP\n name: Block connections to ODL.\n when: step|int == 0\n with_items: [6640, 6653, 6633]\n name: ODL container L2 update and upgrade tasks\n - {async: 30, name: Check pacemaker cluster running before upgrade, pacemaker_cluster: state=online\n check_and_fail=true, poll: 4, tags: validation, when: step|int == 0}\n - {name: Stop pacemaker cluster, pacemaker_cluster: state=offline, when: step|int\n == 2}\n - {name: Start pacemaker cluster, pacemaker_cluster: state=online, when: step|int\n == 4}\n - name: Get docker Rabbitmq image\n set_fact: {rabbitmq_docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest\'}\n - name: Check for Rabbitmq Kolla configuration\n register: rabbit_kolla_config\n stat: {path: /var/lib/config-data/puppet-generated/rabbitmq}\n - name: Check if Rabbitmq is already containerized\n set_fact: {rabbit_containerized: \'{{rabbit_kolla_config.stat.isdir | default(false)}}\'}\n - {command: hiera -c /etc/puppet/hiera.yaml bootstrap_nodeid, name: get bootstrap\n nodeid, register: bootstrap_node}\n - {name: set is_bootstrap_node fact, set_fact: \'is_bootstrap_node={{bootstrap_node.stdout|lower\n == ansible_hostname|lower}}\'}\n - block:\n - ignore_errors: true\n name: Check cluster resource status of rabbitmq\n pacemaker_resource: {check_mode: false, resource: rabbitmq, state: show}\n register: rabbitmq_res\n - block:\n - name: Disable the rabbitmq cluster resource.\n pacemaker_resource: {resource: rabbitmq, state: disable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n - name: Delete the stopped rabbitmq cluster resource.\n pacemaker_resource: {resource: rabbitmq, state: delete, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n when: 
(is_bootstrap_node) and (rabbitmq_res|succeeded)\n - {name: Disable rabbitmq service, service: name=rabbitmq-server enabled=no}\n name: Rabbitmq baremetal to container upgrade tasks\n when: [step|int == 1, not rabbit_containerized|bool]\n - block:\n - {name: Get rabbitmq image id currently used by pacemaker, register: rabbitmq_current_pcmklatest_id,\n shell: \'docker images | awk \'\'/rabbitmq.* pcmklatest/{print $3}\'\' | uniq\'}\n - {name: Temporarily tag the current rabbitmq image id with the upgraded image\n name, shell: \'docker tag {{rabbitmq_current_pcmklatest_id.stdout}} {{rabbitmq_docker_image_latest}}\'}\n name: Prepare the switch to new rabbitmq container image name in pacemaker\n when: [step|int == 0, rabbit_containerized|bool]\n - ignore_errors: true\n name: Check rabbitmq-bundle cluster resource status\n pacemaker_resource: {check_mode: false, resource: rabbitmq-bundle, state: show}\n register: rabbit_pcs_res\n - block:\n - name: Disable the rabbitmq cluster resource before container upgrade\n pacemaker_resource: {resource: rabbitmq-bundle, state: disable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n - block:\n - {command: \'cibadmin --query --xpath "//storage-mapping[@id=\'\'rabbitmq-log\'\']"\',\n ignore_errors: true, name: Check rabbitmq logging configuration in pacemaker,\n register: rabbitmq_logs_moved}\n - {command: pcs resource bundle update rabbitmq-bundle storage-map add id=rabbitmq-log\n source-dir=/var/log/containers/rabbitmq target-dir=/var/log/rabbitmq options=rw,\n name: Add a bind mount for logging in the rabbitmq bundle, when: rabbitmq_logs_moved.rc\n == 6}\n name: Move rabbitmq logging to /var/log/containers\n - {command: \'pcs resource bundle update rabbitmq-bundle container image={{rabbitmq_docker_image_latest}}\',\n name: Update the rabbitmq bundle to use the new container image name}\n - name: Enable the rabbitmq cluster resource\n pacemaker_resource: {resource: rabbitmq-bundle, state: enable, 
wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n name: Update rabbitmq-bundle pcs resource bundle for new container image\n when: [step|int == 1, rabbit_containerized|bool, is_bootstrap_node, rabbit_pcs_res|succeeded]\n - block:\n - name: Get docker Rabbitmq image\n set_fact: {docker_image: \'192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-07-13.1\',\n docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest\'}\n - {name: Get previous Rabbitmq image id, register: rabbitmq_image_id, shell: \'docker\n images | awk \'\'/rabbitmq.* pcmklatest/{print $3}\'\' | uniq\'}\n - block:\n - {name: Get a list of container using Rabbitmq image, register: rabbitmq_containers_to_destroy,\n shell: \'docker ps -a -q -f \'\'ancestor={{rabbitmq_image_id.stdout}}\'\'\'}\n - {name: Remove any container using the same Rabbitmq image, shell: \'docker\n rm -fv {{item}}\', with_items: \'{{ rabbitmq_containers_to_destroy.stdout_lines\n }}\'}\n - {name: Remove previous Rabbitmq images, shell: \'docker rmi -f {{rabbitmq_image_id.stdout}}\'}\n when: [rabbitmq_image_id.stdout != \'\']\n - {command: \'docker pull {{docker_image}}\', name: Pull latest Rabbitmq images}\n - {name: Retag pcmklatest to latest Rabbitmq image, shell: \'docker tag {{docker_image}}\n {{docker_image_latest}}\'}\n name: Retag the pacemaker image if containerized\n when: [step|int == 3, rabbit_containerized|bool]\n - name: Get docker redis image\n set_fact: {redis_docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest\'}\n - name: Check for redis Kolla configuration\n register: redis_kolla_config\n stat: {path: /var/lib/config-data/puppet-generated/redis}\n - name: Check if redis is already containerized\n set_fact: {redis_containerized: \'{{redis_kolla_config.stat.isdir | default(false)}}\'}\n - block:\n - ignore_errors: true\n name: Check cluster resource status of redis\n pacemaker_resource: {check_mode: false, resource: redis, state: show}\n 
register: redis_res\n - block:\n - name: Disable the redis cluster resource\n pacemaker_resource: {resource: redis, state: disable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n - name: Delete the stopped redis cluster resource.\n pacemaker_resource: {resource: redis, state: delete, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n when: (is_bootstrap_node) and (redis_res|succeeded)\n - {name: Disable redis service, service: name=redis enabled=no}\n name: redis baremetal to container upgrade tasks\n when: [step|int == 1, not redis_containerized|bool]\n - block:\n - {name: Get redis image id currently used by pacemaker, register: redis_current_pcmklatest_id,\n shell: \'docker images | awk \'\'/redis.* pcmklatest/{print $3}\'\' | uniq\'}\n - {name: Temporarily tag the current redis image id with the upgraded image\n name, shell: \'docker tag {{redis_current_pcmklatest_id.stdout}} {{redis_docker_image_latest}}\'}\n name: Prepare the switch to new redis container image name in pacemaker\n when: [step|int == 0, redis_containerized|bool]\n - ignore_errors: true\n name: Check redis-bundle cluster resource status\n pacemaker_resource: {check_mode: false, resource: redis-bundle, state: show}\n register: redis_pcs_res\n - block:\n - name: Disable the redis cluster resource before container upgrade\n pacemaker_resource: {resource: redis-bundle, state: disable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n - block:\n - {command: \'cibadmin --query --xpath "//storage-mapping[@id=\'\'redis-log\'\'\n and @source-dir=\'\'/var/log/containers/redis\'\']"\', ignore_errors: true,\n name: Check redis logging configuration in pacemaker, register: redis_logs_moved}\n - block:\n - {command: pcs resource bundle update redis-bundle storage-map remove redis-log,\n name: Remove old bind mount for logging in the redis bundle}\n - {command: pcs resource bundle update redis-bundle 
storage-map add id=redis-log\n source-dir=/var/log/containers/redis target-dir=/var/log/redis options=rw,\n name: Add a bind mount for logging in the redis bundle}\n name: Change redis logging configuration in pacemaker\n when: redis_logs_moved.rc == 6\n name: Move redis logging to /var/log/containers\n - {command: \'pcs resource bundle update redis-bundle container image={{redis_docker_image_latest}}\',\n name: Update the redis bundle to use the new container image name}\n - name: Enable the redis cluster resource\n pacemaker_resource: {resource: redis-bundle, state: enable, wait_for_resource: true}\n register: output\n retries: 5\n until: output.rc == 0\n name: Update redis-bundle pcs resource bundle for new container image\n when: [step|int == 1, redis_containerized|bool, is_bootstrap_node, redis_pcs_res|succeeded]\n - block:\n - name: Get docker Redis image\n set_fact: {docker_image: \'192.168.24.1:8787/rhosp13/openstack-redis:2018-07-13.1\',\n docker_image_latest: \'192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest\'}\n - {name: Get previous Redis image id, register: redis_image_id, shell: \'docker\n images | awk \'\'/redis.* pcmklatest/{print $3}\'\' | uniq\'}\n - block:\n - {name: Get a list of container using Redis image, register: redis_containers_to_destroy,\n shell: \'docker ps -a -q -f \'\'ancestor={{redis_image_id.stdout}}\'\'\'}\n - {name: Remove any container using the same Redis image, shell: \'docker rm\n -fv {{item}}\', with_items: \'{{ redis_containers_to_destroy.stdout_lines\n }}\'}\n - {name: Remove previous Redis images, shell: \'docker rmi -f {{redis_image_id.stdout}}\'}\n when: [redis_image_id.stdout != \'\']\n - {command: \'docker pull {{docker_image}}\', name: Pull latest Redis images}\n - {name: Retag pcmklatest to latest Redis image, shell: \'docker tag {{docker_image}}\n {{docker_image_latest}}\'}\n name: Retag the pacemaker image if containerized\n when: [step|int == 3, redis_containerized|bool]\n - {name: Stop snmp service, 
service: name=snmpd state=stopped, when: step|int\n == 1}\n - command: systemctl is-enabled --quiet "{{ item }}"\n ignore_errors: true\n name: Check if swift-proxy or swift-object-expirer are deployed\n register: swift_proxy_services_enabled\n tags: common\n with_items: [openstack-swift-proxy, openstack-swift-object-expirer]\n - command: systemctl is-active --quiet "{{ item.item }}"\n name: \'PreUpgrade step0,validation: Check service openstack-swift-proxy and\n openstack-swift-object-expirer are running\'\n tags: validation\n when: [step|int == 0, item.rc == 0]\n with_items: \'{{ swift_proxy_services_enabled.results }}\'\n - name: Stop and disable swift-proxy and swift-object-expirer services\n service: name={{ item.item }} state=stopped enabled=no\n when: [step|int == 2, item.rc == 0]\n with_items: \'{{ swift_proxy_services_enabled.results }}\'\n - name: Set fact for removal of openstack-swift-proxy package\n set_fact: {remove_swift_proxy_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-swift-proxy package if operator requests it\n when: [step|int == 2, remove_swift_proxy_package|bool]\n yum: name=openstack-swift-proxy state=removed\n - command: systemctl is-enabled --quiet "{{ item }}"\n ignore_errors: true\n name: Check if swift storage services are deployed\n register: swift_services_enabled\n tags: common\n with_items: [openstack-swift-account-auditor, openstack-swift-account-reaper,\n openstack-swift-account-replicator, openstack-swift-account, openstack-swift-container-auditor,\n openstack-swift-container-replicator, openstack-swift-container-updater, openstack-swift-container,\n openstack-swift-object-auditor, openstack-swift-object-replicator, openstack-swift-object-updater,\n openstack-swift-object]\n - command: systemctl is-active --quiet "{{ item.item }}"\n name: \'PreUpgrade step0,validation: Check swift storage services are running\'\n tags: validation\n when: [step|int == 0, item.rc == 0]\n with_items: \'{{ 
swift_services_enabled.results }}\'\n - name: Stop and disable swift storage services\n service: name={{ item.item }} state=stopped enabled=no\n when: [step|int == 2, item.rc == 0]\n with_items: \'{{ swift_services_enabled.results }}\'\n - name: Set fact for removal of openstack-swift-container,object,account package\n set_fact: {remove_swift_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-swift-container,object,account packages if operator requests\n it\n when: [step|int == 2, remove_swift_package|bool]\n with_items: [openstack-swift-container, openstack-swift-object, openstack-swift-account]\n yum: name={{ item }} state=removed\n - {file: state=absent path=/etc/xinetd.d/rsync, name: Remove rsync service from\n xinetd, register: rsync_service_removed, when: step|int == 2}\n - name: Restart xinetd service after rsync removal\n service: name=xinetd state=restarted\n when: [step|int == 2, rsync_service_removed|changed]\n - args: {creates: /etc/sysconfig/ip6tables.n-o-upgrade}\n name: blank ipv6 rule before activating ipv6 firewall.\n shell: cat /etc/sysconfig/ip6tables > /etc/sysconfig/ip6tables.n-o-upgrade;\n cat</dev/null>/etc/sysconfig/ip6tables\n when: step|int == 3\n - {name: Check yum for rpm-python present, register: rpm_python_check, when: step|int\n == 0, yum: name=rpm-python state=present}\n - fail: msg="rpm-python package was not present before this run! 
Check environment\n before re-running"\n name: Fail when rpm-python wasn\'t present\n when: [step|int == 0, rpm_python_check.changed != false]\n - {name: Check for os-net-config upgrade, register: os_net_config_need_upgrade,\n shell: \'yum check-upgrade | awk \'\'/os-net-config/{print}\'\'\', when: step|int\n == 3}\n - {ignore_errors: true, name: Check that os-net-config has configuration, register: os_net_config_has_config,\n shell: test -s /etc/os-net-config/config.json, when: step|int == 3}\n - block:\n - {name: Upgrade os-net-config, yum: name=os-net-config state=latest}\n - {changed_when: os_net_config_upgrade.rc == 2, command: os-net-config --no-activate\n -c /etc/os-net-config/config.json -v --detailed-exit-codes, failed_when: \'os_net_config_upgrade.rc\n not in [0,2]\', name: take new os-net-config parameters into account now,\n register: os_net_config_upgrade}\n when: [step|int == 3, os_net_config_need_upgrade.stdout, os_net_config_has_config.rc\n == 0]\n - {name: Update all packages, when: step|int == 3, yum: name=* state=latest}\n role_data_workflow_tasks: {}\n role_name: Controller\ncompute-0:\n hosts:\n 192.168.24.17: {}\n vars:\n ctlplane_ip: 192.168.24.17\n deploy_server_id: ec01cdea-81e5-4680-8df8-788f4f3d3d28\n enabled_networks: [management, storage, ctlplane, external, internal_api, storage_mgmt,\n tenant]\n external_ip: 192.168.24.17\n internal_api_ip: 172.17.1.22\n management_ip: 192.168.24.17\n storage_ip: 172.17.3.16\n storage_mgmt_ip: 192.168.24.17\n tenant_ip: 172.17.2.11\ncompute-1:\n hosts:\n 192.168.24.12: {}\n vars:\n ctlplane_ip: 192.168.24.12\n deploy_server_id: 61d7f438-c2d0-495e-bf7a-56900b927446\n enabled_networks: [management, storage, ctlplane, external, internal_api, storage_mgmt,\n tenant]\n external_ip: 192.168.24.12\n internal_api_ip: 172.17.1.21\n management_ip: 192.168.24.12\n storage_ip: 172.17.3.14\n storage_mgmt_ip: 192.168.24.12\n tenant_ip: 172.17.2.17\nCompute:\n children:\n compute-0: {}\n compute-1: {}\n vars:\n 
ansible_ssh_user: heat-admin\n bootstrap_server_id: 0d25b3fa-5154-47be-9ced-05bdd8d3ca43\n role_data_cellv2_discovery: true\n role_data_config_settings: {}\n role_data_deploy_steps_tasks: []\n role_data_docker_config:\n step_3:\n iscsid:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-iscsid:2018-07-13.1\n net: host\n privileged: true\n restart: always\n start_order: 2\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/dev/:/dev/\', \'/run/:/run/\', \'/sys:/sys\', \'/lib/modules:/lib/modules:ro\',\n \'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro\']\n nova_libvirt:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-nova-libvirt:2018-07-13.1\n net: host\n pid: host\n privileged: true\n restart: always\n start_order: 1\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro\',\n 
\'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro\',\n \'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro\', \'/lib/modules:/lib/modules:ro\',\n \'/dev:/dev\', \'/run:/run\', \'/sys/fs/cgroup:/sys/fs/cgroup\', \'/var/lib/nova:/var/lib/nova:shared\',\n \'/etc/libvirt:/etc/libvirt\', \'/var/run/libvirt:/var/run/libvirt\', \'/var/lib/libvirt:/var/lib/libvirt\',\n \'/var/log/containers/libvirt:/var/log/libvirt\', \'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro\',\n \'/var/lib/vhost_sockets:/var/lib/vhost_sockets\', \'/sys/fs/selinux:/sys/fs/selinux\']\n nova_virtlogd:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-nova-libvirt:2018-07-13.1\n net: host\n pid: host\n privileged: true\n restart: always\n start_order: 0\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro\',\n \'/lib/modules:/lib/modules:ro\', \'/dev:/dev\', \'/run:/run\', \'/sys/fs/cgroup:/sys/fs/cgroup\',\n \'/var/lib/nova:/var/lib/nova:shared\', \'/var/run/libvirt:/var/run/libvirt\',\n \'/var/lib/libvirt:/var/lib/libvirt\', \'/etc/libvirt/qemu:/etc/libvirt/qemu:ro\',\n \'/var/log/libvirt/qemu:/var/log/libvirt/qemu\']\n step_4:\n ceilometer_agent_compute:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-ceilometer-compute:2018-07-13.1\n net: host\n privileged: false\n 
restart: always\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro\',\n \'/var/run/libvirt:/var/run/libvirt:ro\', \'/var/log/containers/ceilometer:/var/log/ceilometer\']\n logrotate_crond:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-cron:2018-07-13.1\n net: none\n pid: host\n privileged: true\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro\',\n \'/var/log/containers:/var/log/containers\']\n nova_compute:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n healthcheck: {test: /openstack/healthcheck}\n image: 192.168.24.1:8787/rhosp13/openstack-nova-compute:2018-07-13.1\n ipc: host\n net: host\n privileged: true\n restart: always\n ulimit: [nofile=1024]\n user: nova\n 
volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/log/containers/nova:/var/log/nova\', \'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro\',\n \'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro\', \'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro\',\n \'/dev:/dev\', \'/lib/modules:/lib/modules:ro\', \'/run:/run\', \'/var/lib/nova:/var/lib/nova:shared\',\n \'/var/lib/libvirt:/var/lib/libvirt\', \'/sys/class/net:/sys/class/net\',\n \'/sys/bus/pci:/sys/bus/pci\']\n nova_migration_target:\n environment: [KOLLA_CONFIG_STRATEGY=COPY_ALWAYS]\n image: 192.168.24.1:8787/rhosp13/openstack-nova-compute:2018-07-13.1\n net: host\n privileged: true\n restart: always\n user: root\n volumes: [\'/etc/hosts:/etc/hosts:ro\', \'/etc/localtime:/etc/localtime:ro\',\n \'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\', \'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\',\n \'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\',\n \'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\', \'/dev/log:/dev/log\',\n \'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\', \'/etc/puppet:/etc/puppet:ro\',\n \'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro\',\n \'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro\',\n \'/etc/ssh/:/host-ssh/:ro\', \'/run:/run\', 
\'/var/lib/nova:/var/lib/nova:shared\']\n role_data_docker_config_scripts: {}\n role_data_docker_puppet_tasks: {}\n role_data_external_deploy_tasks: []\n role_data_external_post_deploy_tasks: []\n role_data_fast_forward_post_upgrade_tasks:\n - name: Register repo type and args\n set_fact:\n fast_forward_repo_args:\n tripleo_repos: {ocata: -b ocata current, pike: -b pike current, queens: -b\n queens current}\n fast_forward_repo_type: custom-script\n - debug: {msg: \'fast_forward_repo_type: {{ fast_forward_repo_type }} fast_forward_repo_args:\n {{ fast_forward_repo_args }}\'}\n - block:\n - git: {dest: /home/stack/tripleo-repos/, repo: \'https://github.com/openstack/tripleo-repos.git\'}\n name: clone tripleo-repos\n - args: {chdir: /home/stack/tripleo-repos/}\n command: python setup.py install\n name: install tripleo-repos\n - {command: \'tripleo-repos {{ fast_forward_repo_args.tripleo_repos[release]\n }}\', name: Enable tripleo-repos}\n when: [ffu_packages_apply|bool, fast_forward_repo_type == \'tripleo-repos\']\n - block:\n - copy: {content: "#!/bin/bash\\nset -e\\necho \\"If you use FastForwardRepoType\\\n \\ \'custom-script\' you have to provide the upgrade repo script content.\\"\\\n \\necho \\"It will be installed as /root/ffu_upgrade_repo.sh on the node\\"\\\n \\necho \\"and passed the upstream name (ocata, pike, queens) of the release\\\n \\ as first argument\\"\\ncase $1 in\\n ocata)\\n subscription-manager\\\n \\ repos --disable=rhel-7-server-openstack-10-rpms\\n subscription-manager\\\n \\ repos --enable=rhel-7-server-openstack-11-rpms\\n ;;\\n pike)\\n \\\n \\ subscription-manager repos --disable=rhel-7-server-openstack-11-rpms\\n\\\n \\ subscription-manager repos --enable=rhel-7-server-openstack-12-rpms\\n\\\n \\ ;;\\n queens)\\n subscription-manager repos --disable=rhel-7-server-openstack-12-rpms\\n\\\n \\ subscription-manager repos --enable=rhel-7-server-openstack-13-rpms\\n\\\n \\ ;;\\n *)\\n echo \\"unknown release $1\\" >&2\\n exit 1\\nesac\\n",\n 
dest: /root/ffu_update_repo.sh, mode: 448}\n name: Create custom Script for upgrading repo.\n - {name: Execute custom script for upgrading repo., shell: \'/root/ffu_update_repo.sh\n {{release}}\'}\n when: [ffu_packages_apply|bool, fast_forward_repo_type == \'custom-script\']\n role_data_fast_forward_upgrade_tasks:\n - command: systemctl is-enabled openstack-ceilometer-compute\n ignore_errors: true\n name: FFU check if openstack-ceilometer-compute is deployed\n register: ceilometer_agent_compute_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact ceilometer_agent_compute_enabled\n set_fact: {ceilometer_agent_compute_enabled: \'{{ ceilometer_agent_compute_enabled_result.rc\n == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: FFU stop and disable openstack-ceilometer-compute service\n service: name=openstack-ceilometer-compute state=stopped enabled=no\n when: [step|int == 1, release == \'ocata\', ceilometer_agent_compute_enabled|bool]\n - command: systemctl is-enabled --quiet openstack-nova-compute\n ignore_errors: true\n name: Check if nova-compute is deployed\n register: nova_compute_enabled_result\n when: [step|int == 0, release == \'ocata\']\n - name: Set fact nova_compute_enabled\n set_fact: {nova_compute_enabled: \'{{ nova_compute_enabled_result.rc == 0 }}\'}\n when: [step|int == 0, release == \'ocata\']\n - name: Stop and disable nova-compute service\n service: name=openstack-nova-compute state=stopped\n when: [step|int == 1, nova_compute_enabled|bool, release == \'ocata\']\n - name: Register repo type and args\n set_fact:\n fast_forward_repo_args:\n tripleo_repos: {ocata: -b ocata current, pike: -b pike current, queens: -b\n queens current}\n fast_forward_repo_type: custom-script\n when: step|int == 3\n - debug: {msg: \'fast_forward_repo_type: {{ fast_forward_repo_type }} fast_forward_repo_args:\n {{ fast_forward_repo_args }}\'}\n when: step|int == 3\n - block:\n - git: {dest: /home/stack/tripleo-repos/, repo: 
\'https://github.com/openstack/tripleo-repos.git\'}\n name: clone tripleo-repos\n - args: {chdir: /home/stack/tripleo-repos/}\n command: python setup.py install\n name: install tripleo-repos\n - {command: \'tripleo-repos {{ fast_forward_repo_args.tripleo_repos[release]\n }}\', name: Enable tripleo-repos}\n when: [step|int == 3, ffu_packages_apply|bool, fast_forward_repo_type == \'tripleo-repos\']\n - block:\n - copy: {content: "#!/bin/bash\\nset -e\\necho \\"If you use FastForwardRepoType\\\n \\ \'custom-script\' you have to provide the upgrade repo script content.\\"\\\n \\necho \\"It will be installed as /root/ffu_upgrade_repo.sh on the node\\"\\\n \\necho \\"and passed the upstream name (ocata, pike, queens) of the release\\\n \\ as first argument\\"\\ncase $1 in\\n ocata)\\n subscription-manager\\\n \\ repos --disable=rhel-7-server-openstack-10-rpms\\n subscription-manager\\\n \\ repos --enable=rhel-7-server-openstack-11-rpms\\n ;;\\n pike)\\n \\\n \\ subscription-manager repos --disable=rhel-7-server-openstack-11-rpms\\n\\\n \\ subscription-manager repos --enable=rhel-7-server-openstack-12-rpms\\n\\\n \\ ;;\\n queens)\\n subscription-manager repos --disable=rhel-7-server-openstack-12-rpms\\n\\\n \\ subscription-manager repos --enable=rhel-7-server-openstack-13-rpms\\n\\\n \\ ;;\\n *)\\n echo \\"unknown release $1\\" >&2\\n exit 1\\nesac\\n",\n dest: /root/ffu_update_repo.sh, mode: 448}\n name: Create custom Script for upgrading repo.\n - {name: Execute custom script for upgrading repo., shell: \'/root/ffu_update_repo.sh\n {{release}}\'}\n when: [step|int == 3, ffu_packages_apply|bool, fast_forward_repo_type == \'custom-script\']\n role_data_global_config_settings: {}\n role_data_host_prep_tasks:\n - file: {path: /var/log/containers/ceilometer, state: directory}\n name: create persistent logs directory\n - copy: {content: \'Log files from ceilometer containers can be found under\n\n /var/log/containers/ceilometer.\n\n \', dest: /var/log/ceilometer/readme.txt}\n 
ignore_errors: true\n name: ceilometer logs readme\n - {name: stat /lib/systemd/system/iscsid.socket, register: stat_iscsid_socket,\n stat: path=/lib/systemd/system/iscsid.socket}\n - {name: Stop and disable iscsid.socket service, service: name=iscsid.socket state=stopped\n enabled=no, when: stat_iscsid_socket.stat.exists}\n - file: {path: /var/log/containers/nova, state: directory}\n name: create persistent logs directory\n - copy: {content: \'Log files from nova containers can be found under\n\n /var/log/containers/nova and /var/log/containers/httpd/nova-*.\n\n \', dest: /var/log/nova/readme.txt}\n ignore_errors: true\n name: nova logs readme\n - file: {path: \'{{ item }}\', state: directory}\n name: create persistent directories\n with_items: [/var/lib/nova, /var/lib/libvirt]\n - file: {path: /etc/ceph, state: directory}\n name: ensure ceph configurations exist\n - name: is Instance HA enabled\n set_fact: {instance_ha_enabled: false}\n - block:\n - file: {path: /var/lib/nova/instanceha, state: directory}\n name: prepare Instance HA script directory\n - copy: {content: "#!/bin/python -utt\\n\\nimport os\\nimport sys\\nimport time\\n\\\n import inspect\\nimport logging\\nimport argparse\\nimport oslo_config.cfg\\n\\\n import requests.exceptions\\n\\ndef is_forced_down(connection, hostname):\\n\\\n \\ services = connection.services.list(host=hostname, binary=\\"nova-compute\\"\\\n )\\n for service in services:\\n if service.forced_down:\\n \\\n \\ return True\\n return False\\n\\ndef evacuations_done(connection,\\\n \\ hostname):\\n # Get a list of migrations.\\n # :param host: (optional)\\\n \\ filter migrations by host name.\\n # :param status: (optional) filter\\\n \\ migrations by status.\\n # :param cell_name: (optional) filter migrations\\\n \\ for a cell.\\n #\\n migrations = connection.migrations.list(host=hostname)\\n\\\n \\n print(\\"Checking %d migrations\\" % len(migrations))\\n for migration\\\n \\ in migrations:\\n # print migration.to_dict()\\n #\\n 
\\\n \\ # {\\n # u\'status\': u\'error\',\\n # u\'dest_host\': None,\\n\\\n \\ # u\'new_instance_type_id\': 2,\\n # u\'old_instance_type_id\':\\\n \\ 2,\\n # u\'updated_at\': u\'2018-04-22T20:55:29.000000\',\\n \\\n \\ # u\'dest_compute\':\\n # u\'overcloud-novacompute-2.localdomain\',\\n\\\n \\ # u\'migration_type\': u\'live-migration\',\\n # u\'source_node\':\\n\\\n \\ # u\'overcloud-novacompute-0.localdomain\',\\n # u\'id\':\\\n \\ 8,\\n # u\'created_at\': u\'2018-04-22T20:52:58.000000\',\\n \\\n \\ # u\'instance_uuid\':\\n # u\'d1c82ce8-3dc5-48db-b59f-854b3b984ef1\',\\n\\\n \\ # u\'dest_node\':\\n # u\'overcloud-novacompute-2.localdomain\',\\n\\\n \\ # u\'source_compute\':\\n # u\'overcloud-novacompute-0.localdomain\'\\n\\\n \\ # }\\n # Acceptable: done, completed, failed\\n if\\\n \\ migration.status in [\\"running\\", \\"accepted\\", \\"pre-migrating\\"]:\\n\\\n \\ return False\\n return True\\n\\ndef safe_to_start(connection,\\\n \\ hostname):\\n if is_forced_down(connection, hostname):\\n print(\\"\\\n Waiting for fence-down flag to be cleared\\")\\n return False\\n \\\n \\ if not evacuations_done(connection, hostname):\\n print(\\"Waiting\\\n \\ for evacuations to complete or fail\\")\\n return False\\n return\\\n \\ True\\n\\ndef create_nova_connection(options):\\n try:\\n from\\\n \\ novaclient import client\\n from novaclient.exceptions import\\\n \\ NotAcceptable\\n except ImportError:\\n print(\\"Nova not found\\\n \\ or not accessible\\")\\n sys.exit(1)\\n\\n from keystoneauth1\\\n \\ import loading\\n from keystoneauth1 import session\\n from keystoneclient\\\n \\ import discover\\n\\n # Prefer the oldest and strip the leading \'v\'\\n\\\n \\ keystone_versions = discover.available_versions(options[\\"auth_url\\"\\\n ][0])\\n keystone_version = keystone_versions[0][\'id\'][1:]\\n kwargs\\\n \\ = dict(\\n auth_url=options[\\"auth_url\\"][0],\\n username=options[\\"\\\n username\\"][0],\\n password=options[\\"password\\"][0]\\n )\\n\\\n \\n if 
discover.version_match(\\"2\\", keystone_version):\\n kwargs[\\"\\\n tenant_name\\"] = options[\\"tenant_name\\"][0]\\n\\n elif discover.version_match(\\"\\\n 3\\", keystone_version):\\n kwargs[\\"project_name\\"] = options[\\"\\\n project_name\\"][0]\\n kwargs[\\"user_domain_name\\"] = options[\\"\\\n user_domain_name\\"][0]\\n kwargs[\\"project_domain_name\\"] = options[\\"\\\n project_domain_name\\"][0]\\n\\n loader = loading.get_plugin_loader(\'password\')\\n\\\n \\ keystone_auth = loader.load_from_options(**kwargs)\\n keystone_session\\\n \\ = session.Session(auth=keystone_auth, verify=(not options[\\"insecure\\"\\\n ]))\\n\\n nova_versions = [ \\"2.23\\", \\"2\\" ]\\n for version in nova_versions:\\n\\\n \\ clientargs = inspect.getargspec(client.Client).varargs\\n \\\n \\ # Some versions of Openstack prior to Ocata only\\n # supported\\\n \\ positional arguments for username,\\n # password, and tenant.\\n\\\n \\ #\\n # Versions since Ocata only support named arguments.\\n\\\n \\ #\\n # So we need to use introspection to figure out how\\\n \\ to\\n # create a Nova client.\\n #\\n # Happy days\\n\\\n \\ #\\n if clientargs:\\n # OSP < Ocata\\n \\\n \\ # ArgSpec(args=[\'version\', \'username\', \'password\', \'project_id\',\\\n \\ \'auth_url\'],\\n # varargs=None,\\n # \\\n \\ keywords=\'kwargs\', defaults=(None, None, None, None))\\n \\\n \\ nova = client.Client(version,\\n \\\n \\ None, # User\\n None, # Password\\n \\\n \\ None, # Tenant\\n \\\n \\ None, # Auth URL\\n insecure=options[\\"\\\n insecure\\"],\\n region_name=options[\\"\\\n os_region_name\\"][0],\\n session=keystone_session,\\\n \\ auth=keystone_auth,\\n http_log_debug=options.has_key(\\"\\\n verbose\\"))\\n else:\\n # OSP >= Ocata\\n #\\\n \\ ArgSpec(args=[\'version\'], varargs=\'args\', keywords=\'kwargs\', defaults=None)\\n\\\n \\ nova = client.Client(version,\\n \\\n \\ region_name=options[\\"os_region_name\\"][0],\\n \\\n \\ session=keystone_session, auth=keystone_auth,\\n \\\n \\ 
http_log_debug=options.has_key(\\"verbose\\"\\\n ))\\n\\n try:\\n nova.hypervisors.list()\\n return\\\n \\ nova\\n\\n except NotAcceptable as e:\\n logging.warning(e)\\n\\\n \\n except Exception as e:\\n logging.warning(\\"Nova connection\\\n \\ failed. %s: %s\\" % (e.__class__.__name__, e))\\n\\n print(\\"Couldn\'t\\\n \\ obtain a supported connection to nova, tried: %s\\\\n\\" % repr(nova_versions))\\n\\\n \\ return None\\n\\n\\nparser = argparse.ArgumentParser(description=\'Process\\\n \\ some integers.\')\\nparser.add_argument(\'--config-file\', dest=\'nova_config\',\\\n \\ action=\'store\',\\n default=\\"/etc/nova/nova.conf\\"\\\n ,\\n help=\'path to nova configuration (default: /etc/nova/nova.conf)\')\\n\\\n parser.add_argument(\'--nova-binary\', dest=\'nova_binary\', action=\'store\',\\n\\\n \\ default=\\"/usr/bin/nova-compute\\",\\n \\\n \\ help=\'path to nova compute binary (default: /usr/bin/nova-compute)\')\\n\\\n parser.add_argument(\'--enable-file\', dest=\'enable_file\', action=\'store\',\\n\\\n \\ default=\\"/var/lib/nova/instanceha/enabled\\",\\n \\\n \\ help=\'file exists if instance HA is enabled on this\\\n \\ host \'\\\\\\n \'(default: /var/lib/nova/instanceha/enabled)\')\\n\\\n \\n\\nsections = {}\\n(args, remaining) = parser.parse_known_args(sys.argv)\\n\\\n \\nconfig = oslo_config.cfg.ConfigParser(args.nova_config, sections)\\n\\\n config.parse()\\nconfig.sections[\\"placement\\"][\\"insecure\\"] = 0\\nconfig.sections[\\"\\\n placement\\"][\\"verbose\\"] = 1\\n\\nif os.path.isfile(args.enable_file):\\n\\\n \\ connection = None\\n while not connection:\\n # Loop in case\\\n \\ the control plane is recovering when we run\\n connection = create_nova_connection(config.sections[\\"\\\n placement\\"])\\n if not connection:\\n time.sleep(10)\\n\\\n \\n while not safe_to_start(connection, config.sections[\\"DEFAULT\\"\\\n ][\\"host\\"][0]):\\n time.sleep(10)\\n\\nreal_args = [args.nova_binary,\\\n \\ \'--config-file\', 
args.nova_config]\\nreal_args.extend(remaining[1:])\\n\\\n os.execv(args.nova_binary, real_args)\\n", dest: /var/lib/nova/instanceha/check-run-nova-compute,\n mode: 493}\n name: install Instance HA script that runs nova-compute\n - {command: hiera -c /etc/puppet/hiera.yaml compute_instanceha_short_node_names,\n name: Get list of instance HA compute nodes, register: iha_nodes}\n - {file: path=/var/lib/nova/instanceha/enabled state=touch, name: If instance\n HA is enabled on the node activate the evacuation completed check, when: iha_nodes.stdout|lower\n | search(\'"\'+ansible_hostname|lower+\'"\')}\n name: install Instance HA recovery script\n when: instance_ha_enabled|bool\n - file: {path: \'{{ item }}\', state: directory}\n name: create libvirt persistent data directories\n with_items: [/etc/libvirt, /etc/libvirt/secrets, /etc/libvirt/qemu, /var/lib/libvirt,\n /var/log/containers/libvirt]\n - group: {gid: 107, name: qemu, state: present}\n name: ensure qemu group is present on the host\n - name: ensure qemu user is present on the host\n user: {comment: qemu user, group: qemu, name: qemu, shell: /sbin/nologin, state: present,\n uid: 107}\n - file: {group: qemu, owner: qemu, path: /var/lib/vhost_sockets, setype: virt_cache_t,\n seuser: system_u, state: directory}\n name: create directory for vhost-user sockets with qemu ownership\n - {command: /usr/bin/rpm -q libvirt-daemon, failed_when: false, name: check if\n libvirt is installed, register: libvirt_installed}\n - name: make sure libvirt services are disabled\n service: {enabled: false, name: \'{{ item }}\', state: stopped}\n when: libvirt_installed.rc == 0\n with_items: [libvirtd.service, virtlogd.socket]\n role_data_kolla_config:\n /var/lib/kolla/config_files/ceilometer_agent_compute.json:\n command: /usr/bin/ceilometer-polling --polling-namespaces compute --logfile\n /var/log/ceilometer/compute.log\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n 
/var/lib/kolla/config_files/iscsid.json:\n command: /usr/sbin/iscsid -f\n config_files:\n - {dest: /etc/iscsi/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-iscsid/*}\n /var/lib/kolla/config_files/logrotate-crond.json:\n command: /usr/sbin/crond -s -n\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n /var/lib/kolla/config_files/nova-migration-target.json:\n command: /usr/sbin/sshd -D -p 2022\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n - {dest: /etc/ssh/, owner: root, perm: \'0600\', source: /host-ssh/ssh_host_*_key}\n /var/lib/kolla/config_files/nova_compute.json:\n command: \'/usr/bin/nova-compute \'\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n - {dest: /etc/iscsi/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-iscsid/*}\n - {dest: /etc/ceph/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-ceph/}\n permissions:\n - {owner: \'nova:nova\', path: /var/log/nova, recurse: true}\n - {owner: \'nova:nova\', path: /var/lib/nova, recurse: true}\n - {owner: \'nova:nova\', path: /etc/ceph/ceph.client.openstack.keyring, perm: \'0600\'}\n /var/lib/kolla/config_files/nova_libvirt.json:\n command: /usr/sbin/libvirtd\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n - {dest: /etc/ceph/, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src-ceph/}\n permissions:\n - {owner: \'nova:nova\', path: /etc/ceph/ceph.client.openstack.keyring, perm: \'0600\'}\n /var/lib/kolla/config_files/nova_virtlogd.json:\n command: /usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf\n config_files:\n - {dest: /, merge: true, preserve_properties: true, source: /var/lib/kolla/config_files/src/*}\n 
role_data_logging_groups: [root]\n role_data_logging_sources: []\n role_data_merged_config_settings:\n ceilometer::agent::auth::auth_endpoint_type: internalURL\n ceilometer::agent::auth::auth_password: ZUMGXYGsUAsWVRjeZaJfeAv9y\n ceilometer::agent::auth::auth_project_domain_name: Default\n ceilometer::agent::auth::auth_region: regionOne\n ceilometer::agent::auth::auth_tenant_name: service\n ceilometer::agent::auth::auth_url: http://172.17.1.10:5000\n ceilometer::agent::auth::auth_user_domain_name: Default\n ceilometer::agent::compute::instance_discovery_method: libvirt_metadata\n ceilometer::agent::notification::event_pipeline_publishers: [\'gnocchi://\', \'panko://\']\n ceilometer::agent::notification::manage_event_pipeline: true\n ceilometer::agent::notification::manage_pipeline: false\n ceilometer::agent::notification::pipeline_publishers: [\'gnocchi://\']\n ceilometer::agent::polling::manage_polling: false\n ceilometer::debug: true\n ceilometer::dispatcher::gnocchi::archive_policy: low\n ceilometer::dispatcher::gnocchi::filter_project: service\n ceilometer::dispatcher::gnocchi::resources_definition_file: gnocchi_resources.yaml\n ceilometer::dispatcher::gnocchi::url: http://172.17.1.10:8041\n ceilometer::host: \'%{::fqdn}\'\n ceilometer::keystone::authtoken::auth_uri: http://172.17.1.10:5000\n ceilometer::keystone::authtoken::auth_url: http://172.17.1.10:5000\n ceilometer::keystone::authtoken::password: ZUMGXYGsUAsWVRjeZaJfeAv9y\n ceilometer::keystone::authtoken::project_domain_name: Default\n ceilometer::keystone::authtoken::project_name: service\n ceilometer::keystone::authtoken::user_domain_name: Default\n ceilometer::notification_driver: messagingv2\n ceilometer::rabbit_heartbeat_timeout_threshold: 60\n ceilometer::rabbit_password: weVyVyHzxXn9URCQNmHmUCsYg\n ceilometer::rabbit_port: 5672\n ceilometer::rabbit_use_ssl: \'False\'\n ceilometer::rabbit_userid: guest\n ceilometer::snmpd_readonly_user_password: e0e6f3b1f8575fd51ee080d6b2724feef235ed7e\n 
ceilometer::snmpd_readonly_username: ro_snmp_user\n ceilometer::telemetry_secret: ey9QkWYUbQMUv7hUXn2xzTrvM\n ceilometer_redis_password: jv8TQJ7wGC7M7e6ez2GNPfke7\n cold_migration_ssh_inbound_addr: internal_api\n compute_namespace: true\n kernel_modules:\n nf_conntrack: {}\n nf_conntrack_proto_sctp: {}\n live_migration_ssh_inbound_addr: internal_api\n neutron::agents::ml2::ovs::local_ip: tenant\n neutron::plugins::ovs::opendaylight::allowed_network_types: [local, flat, vlan,\n vxlan, gre]\n neutron::plugins::ovs::opendaylight::enable_dpdk: false\n neutron::plugins::ovs::opendaylight::enable_hw_offload: false\n neutron::plugins::ovs::opendaylight::odl_password: redhat\n neutron::plugins::ovs::opendaylight::odl_username: odladmin\n neutron::plugins::ovs::opendaylight::provider_mappings: [\'datacentre:br-ex\']\n neutron::plugins::ovs::opendaylight::vhostuser_mode: server\n neutron::plugins::ovs::opendaylight::vhostuser_socket_dir: /var/lib/vhost_sockets\n nova::api_database_connection: mysql+pymysql://nova_api:6BkAm2KjhQcBQjbmFfcxWwJUq@172.17.1.10/nova_api?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n nova::cell0_database_connection: mysql+pymysql://nova:6BkAm2KjhQcBQjbmFfcxWwJUq@172.17.1.10/nova_cell0?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n nova::cinder_catalog_info: volumev3:cinderv3:internalURL\n nova::compute::consecutive_build_service_disable_threshold: \'0\'\n nova::compute::instance_usage_audit: true\n nova::compute::instance_usage_audit_period: hour\n nova::compute::libvirt::libvirt_enabled_perf_events: []\n nova::compute::libvirt::libvirt_virt_type: kvm\n nova::compute::libvirt::manage_libvirt_services: false\n nova::compute::libvirt::migration_support: false\n nova::compute::libvirt::qemu::configure_qemu: true\n nova::compute::libvirt::qemu::group: qemu\n nova::compute::libvirt::qemu::max_files: 32768\n nova::compute::libvirt::qemu::max_processes: 131072\n 
nova::compute::libvirt::services::libvirt_virt_type: kvm\n nova::compute::libvirt::vncserver_listen: internal_api\n nova::compute::neutron::libvirt_vif_driver: \'\'\n nova::compute::pci::passthrough: \'\'\n nova::compute::rbd::ephemeral_storage: false\n nova::compute::rbd::libvirt_images_rbd_ceph_conf: /etc/ceph/ceph.conf\n nova::compute::rbd::libvirt_images_rbd_pool: vms\n nova::compute::rbd::libvirt_rbd_secret_key: AQAvSFhbAAAAABAAp+EMtuy9P+WQwvxTR4GS1A==\n nova::compute::rbd::libvirt_rbd_secret_uuid: 563e8cce-8ff0-11e8-adc7-525400eecd02\n nova::compute::rbd::libvirt_rbd_user: openstack\n nova::compute::rbd::rbd_keyring: client.openstack\n nova::compute::reserved_host_memory: 4096\n nova::compute::vcpu_pin_set: []\n nova::compute::verify_glance_signatures: false\n nova::compute::vncproxy_host: 10.0.0.106\n nova::compute::vncserver_proxyclient_address: internal_api\n nova::cron::archive_deleted_rows::destination: /var/log/nova/nova-rowsflush.log\n nova::cron::archive_deleted_rows::hour: \'0\'\n nova::cron::archive_deleted_rows::max_rows: \'100\'\n nova::cron::archive_deleted_rows::minute: \'1\'\n nova::cron::archive_deleted_rows::month: \'*\'\n nova::cron::archive_deleted_rows::monthday: \'*\'\n nova::cron::archive_deleted_rows::until_complete: false\n nova::cron::archive_deleted_rows::user: nova\n nova::cron::archive_deleted_rows::weekday: \'*\'\n nova::database_connection: mysql+pymysql://nova:6BkAm2KjhQcBQjbmFfcxWwJUq@172.17.1.10/nova?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n nova::db::database_db_max_retries: -1\n nova::db::database_max_retries: -1\n nova::db::sync::db_sync_timeout: 300\n nova::db::sync_api::db_sync_timeout: 300\n nova::debug: true\n nova::glance_api_servers: http://172.17.1.10:9292\n nova::host: \'%{::fqdn}\'\n nova::migration::live_migration_tunnelled: false\n nova::my_ip: internal_api\n nova::network::neutron::dhcp_domain: \'\'\n nova::network::neutron::neutron_auth_type: v3password\n 
nova::network::neutron::neutron_auth_url: http://192.168.24.10:35357/v3\n nova::network::neutron::neutron_ovs_bridge: br-int\n nova::network::neutron::neutron_password: anbEgsRDNBffKrcVkyZd2wPYr\n nova::network::neutron::neutron_project_name: service\n nova::network::neutron::neutron_region_name: regionOne\n nova::network::neutron::neutron_url: http://172.17.1.10:9696\n nova::network::neutron::neutron_username: neutron\n nova::notification_driver: messagingv2\n nova::notification_format: unversioned\n nova::notify_on_state_change: vm_and_task_state\n nova::placement::auth_url: http://172.17.1.10:5000\n nova::placement::os_interface: internal\n nova::placement::os_region_name: regionOne\n nova::placement::password: 6BkAm2KjhQcBQjbmFfcxWwJUq\n nova::placement::project_name: service\n nova::placement_database_connection: mysql+pymysql://nova_placement:6BkAm2KjhQcBQjbmFfcxWwJUq@172.17.1.10/nova_placement?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf\n nova::purge_config: false\n nova::rabbit_heartbeat_timeout_threshold: 60\n nova::rabbit_password: weVyVyHzxXn9URCQNmHmUCsYg\n nova::rabbit_port: 5672\n nova::rabbit_use_ssl: \'False\'\n nova::rabbit_userid: guest\n nova::use_ipv6: false\n nova::vncproxy::common::vncproxy_host: 10.0.0.106\n nova::vncproxy::common::vncproxy_port: \'6080\'\n nova::vncproxy::common::vncproxy_protocol: http\n ntp::iburst_enable: true\n \'ntp::maxpoll:\': 10\n \'ntp::minpoll:\': 6\n ntp::servers: [clock.redhat.com]\n opendaylight::log_levels: {org.opendaylight.genius: DEBUG, org.opendaylight.netvirt: DEBUG}\n opendaylight::log_max_rollover: 50\n opendaylight::odl_rest_port: \'8081\'\n opendaylight::password: redhat\n opendaylight::username: odladmin\n opendaylight_check_url: restconf/operational/network-topology:network-topology/topology/netvirt:1\n rbd_persistent_storage: false\n snmp::agentaddress: [\'udp:161\', \'udp6:[::1]:161\']\n snmp::snmpd_options: -LS0-5d\n snmpd_network: internal_api_subnet\n 
sysctl_settings:\n fs.inotify.max_user_instances: {value: 1024}\n fs.suid_dumpable: {value: 0}\n kernel.dmesg_restrict: {value: 1}\n kernel.pid_max: {value: 1048576}\n net.core.netdev_max_backlog: {value: 10000}\n net.ipv4.conf.all.arp_accept: {value: 1}\n net.ipv4.conf.all.log_martians: {value: 1}\n net.ipv4.conf.all.secure_redirects: {value: 0}\n net.ipv4.conf.all.send_redirects: {value: 0}\n net.ipv4.conf.default.accept_redirects: {value: 0}\n net.ipv4.conf.default.log_martians: {value: 1}\n net.ipv4.conf.default.secure_redirects: {value: 0}\n net.ipv4.conf.default.send_redirects: {value: 0}\n net.ipv4.ip_forward: {value: 1}\n net.ipv4.neigh.default.gc_thresh1: {value: 1024}\n net.ipv4.neigh.default.gc_thresh2: {value: 2048}\n net.ipv4.neigh.default.gc_thresh3: {value: 4096}\n net.ipv4.tcp_keepalive_intvl: {value: 1}\n net.ipv4.tcp_keepalive_probes: {value: 5}\n net.ipv4.tcp_keepalive_time: {value: 5}\n net.ipv6.conf.all.accept_ra: {value: 0}\n net.ipv6.conf.all.accept_redirects: {value: 0}\n net.ipv6.conf.all.autoconf: {value: 0}\n net.ipv6.conf.all.disable_ipv6: {value: 0}\n net.ipv6.conf.default.accept_ra: {value: 0}\n net.ipv6.conf.default.accept_redirects: {value: 0}\n net.ipv6.conf.default.autoconf: {value: 0}\n net.ipv6.conf.default.disable_ipv6: {value: 0}\n net.netfilter.nf_conntrack_max: {value: 500000}\n net.nf_conntrack_max: {value: 500000}\n timezone::timezone: Europe/London\n tripleo.nova_libvirt.firewall_rules:\n 200 nova_libvirt:\n dport: [16514, 49152-49215, 5900-6923]\n tripleo.nova_migration_target.firewall_rules:\n 113 nova_migration_target:\n dport: [2022]\n tripleo.ntp.firewall_rules:\n 105 ntp: {dport: 123, proto: udp}\n tripleo.opendaylight_ovs.firewall_rules:\n 118 neutron vxlan networks: {dport: 4789, proto: udp}\n 136 neutron gre networks: {proto: gre}\n tripleo.snmp.firewall_rules:\n 124 snmp: {dport: 161, proto: udp, source: \'%{hiera(\'\'snmpd_network\'\')}\'}\n tripleo::firewall::manage_firewall: true\n 
tripleo::firewall::purge_firewall_rules: false\n tripleo::packages::enable_install: false\n tripleo::profile::base::certmonger_user::libvirt_postsave_cmd: \'true\'\n tripleo::profile::base::database::mysql::client::enable_ssl: false\n tripleo::profile::base::database::mysql::client::mysql_client_bind_address: internal_api\n tripleo::profile::base::database::mysql::client::ssl_ca: /etc/ipa/ca.crt\n tripleo::profile::base::docker::additional_sockets: [/var/lib/openstack/docker.sock]\n tripleo::profile::base::docker::configure_network: true\n tripleo::profile::base::docker::debug: true\n tripleo::profile::base::docker::docker_options: --log-driver=journald --signature-verification=false\n --iptables=false --live-restore\n tripleo::profile::base::docker::insecure_registries: [\'192.168.24.1:8787\']\n tripleo::profile::base::docker::network_options: --bip=172.31.0.1/24\n tripleo::profile::base::neutron::plugins::ovs::opendaylight::vhostuser_socket_group: qemu\n tripleo::profile::base::neutron::plugins::ovs::opendaylight::vhostuser_socket_user: qemu\n tripleo::profile::base::nova::compute::cinder_nfs_backend: false\n tripleo::profile::base::nova::migration::client::libvirt_enabled: true\n tripleo::profile::base::nova::migration::client::nova_compute_enabled: true\n tripleo::profile::base::nova::migration::client::ssh_port: 2022\n tripleo::profile::base::nova::migration::client::ssh_private_key: \'-----BEGIN\n RSA PRIVATE KEY-----\n\n MIIEpQIBAAKCAQEAy/LJL0ClWufF7gcL+RybBImHOdLn64kKSp8cs6xrIyZDtNod\n\n QkRRrDsAY8PnqQENTVWXbWBehLQQL2lb3frpPrAR07KsyUoO1DWOriPyUyIGpO4M\n\n Q9FREwmxPhPDJg/LDG9VgCjKrkL+yFVuIxfSF5/EkRLbfb00DHN7zs5jOtSLf7B0\n\n Amn80S1GzYkgAaubMBWZpSeAo69SmKVd1ziDuVgb4r8rZ646Jgi1ZSX3fJRbaPSk\n\n 3E4kVJpPWY2ykB9r2zyydGI3XKcsHikLNZx9bNEMdny92xxLDGnsklyErUuZ9R/3\n\n xwMnqypI8mtmz77eS/MFhMFS3fJ8okVQks8fUwIDAQABAoIBAQCbX7N1lEJlJv3b\n\n gPLWLbzLkBq9KrgU8Kouf1lWaJyWgqhCN4ji2zl9hNWfK7hpQKvpprNeWHSplKRf\n\n 
+lxKmMTpRSnPpeeM0ibJ9KNmd2w9eUamj9Q4NlcVseSd7mBVtuJx7r+si2cdq1x/\n\n MtZdVeBwrv8JptwgxuvIMJK50vI19iitRFQWd6Y+0HBzIqR0QZdr/kRakJTMpka2\n\n atiSUz2Bq8ybEkYQ8zna0E+fNM9I/ibB0JV7fbU3a2X7ZeLLdncaIJIt6KatFass\n\n sWZtVEHnnBa6fZscGAA0DOqJWwFYLUj2A2SBgsc6QnVqnBzQUdozgyYSjCW71Rwt\n\n x6FggtyBAoGBAP8M4DPuog/CD8MFgXBQHVJvGpQjBCVaL4zMKb+qn7vTUV4suQFZ\n\n YO2lJRz3Nst4HHVOFl9kwWCy0M+5pvaz+gebbrxND3KNm97m8U0aXOM2pbcZpBrs\n\n cVNNadNKECly38pz4xW+UinQl5ftldYjWswhiSYX9GKgtn23U+UnUjDpAoGBAMy1\n\n Mptpkt07yrN33OfQNQICIXRg7ap31bs7mHWe8Dv/kxN82WgZ51ohwTV+cTXKkATz\n\n k0rHUMpqM/9SuX/ClbBLqSU11F2TTFDFscyiOnqbaiRkqJ2M0khzasFtreLHqCTO\n\n vdV/fHvBqnF0UPQUtZAblAhAM5ETN1xNqApzOAjbAoGBAOQAw7FJRDFoH6UNF/Cq\n\n ffwCfLUvNHabz+RDY5MHWjKTr6rLuju9hgwMVUg2rBJq9q3bN97heIoUcN0yL1Ne\n\n A0enqO/Gx+d1NoGm3NI7ngw0/yHXV0AGXSzGCLOtAxO6sNsQjFIUyOi+o7Za21cK\n\n VhIkbLHUOlGtMFbke6hgZXZ5AoGBAKyB1g/ZvAXrqTnsPKCteL4khYTJWf9Z1Sdf\n\n ZW9ZbSFiktLNV3i+u5Pc9jDaSRUHiq5hhTJzHMY3EXKMh/3+QJ68Y+ITps7knl9C\n\n +j50R8uixKO+n8mFLoAXo1M11l9R2YSLJLaSJJk17yiE2OOXwBmc4/bAA7Sx+Ok0\n\n F/QWfJYZAoGAKbKbyW8pztncDaOTD2/kJzYiXHlCnctMgNP0brurD/W3iBhTXKS5\n\n R3eWDPS5LKuxswg8fF1LOj8DhwBC9k1Ssu4kbQ4O4OeCr+Hci8FeQP13s98tvzXv\n\n XtIN4KCdIvMe0XBt/ReAbdkd+lhCFzwkIG96Fv7FEsCKCsfDO4ukDp0=\n\n -----END RSA PRIVATE KEY-----\n\n \'\n tripleo::profile::base::nova::migration::target::ssh_authorized_keys: [ssh-rsa\n AAAAB3NzaC1yc2EAAAADAQABAAABAQDL8skvQKVa58XuBwv5HJsEiYc50ufriQpKnxyzrGsjJkO02h1CRFGsOwBjw+epAQ1NVZdtYF6EtBAvaVvd+uk+sBHTsqzJSg7UNY6uI/JTIgak7gxD0VETCbE+E8MmD8sMb1WAKMquQv7IVW4jF9IXn8SREtt9vTQMc3vOzmM61It/sHQCafzRLUbNiSABq5swFZmlJ4Cjr1KYpV3XOIO5WBvivytnrjomCLVlJfd8lFto9KTcTiRUmk9ZjbKQH2vbPLJ0YjdcpyweKQs1nH1s0Qx2fL3bHEsMaeySXIStS5n1H/fHAyerKkjya2bPvt5L8wWEwVLd8nyiRVCSzx9T\n Generated by TripleO]\n tripleo::profile::base::nova::migration::target::ssh_localaddrs: [\'%{hiera(\'\'cold_migration_ssh_inbound_addr\'\')}\',\n \'%{hiera(\'\'live_migration_ssh_inbound_addr\'\')}\']\n tripleo::profile::base::snmp::snmpd_password: 
e0e6f3b1f8575fd51ee080d6b2724feef235ed7e\n tripleo::profile::base::snmp::snmpd_user: ro_snmp_user\n tripleo::profile::base::sshd::bannertext: \'\'\n tripleo::profile::base::sshd::motd: \'\'\n tripleo::profile::base::sshd::options:\n AcceptEnv: [LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES,\n LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT, LC_IDENTIFICATION\n LC_ALL LANGUAGE, XMODIFIERS]\n AuthorizedKeysFile: .ssh/authorized_keys\n ChallengeResponseAuthentication: \'no\'\n GSSAPIAuthentication: \'yes\'\n GSSAPICleanupCredentials: \'no\'\n HostKey: [/etc/ssh/ssh_host_rsa_key, /etc/ssh/ssh_host_ecdsa_key, /etc/ssh/ssh_host_ed25519_key]\n PasswordAuthentication: \'no\'\n Subsystem: sftp /usr/libexec/openssh/sftp-server\n SyslogFacility: AUTHPRIV\n UseDNS: \'no\'\n UsePAM: \'yes\'\n UsePrivilegeSeparation: sandbox\n X11Forwarding: \'yes\'\n tripleo::profile::base::sshd::port: 22\n tripleo::profile::base::tuned::profile: \'\'\n tripleo::trusted_cas::ca_map: {}\n vswitch::dpdk::driver_type: vfio-pci\n vswitch::dpdk::host_core_list: \'\'\n vswitch::dpdk::memory_channels: \'4\'\n vswitch::dpdk::pmd_core_list: \'\'\n vswitch::dpdk::socket_mem: \'\'\n vswitch::ovs::enable_hw_offload: false\n role_data_monitoring_subscriptions: []\n role_data_post_update_tasks:\n - block:\n - name: store update level to update_level variable\n set_fact: {odl_update_level: 1}\n - block:\n - {command: systemctl is-active --quiet openvswitch, name: Check service openvswitch\n is running, register: openvswitch_running, tags: common}\n - {name: Delete OVS groups and ports, shell: sudo ovs-ofctl -O Openflow13 del-groups\n br-int; for tun_port in $(ovs-vsctl list-ports br-int | grep \'tun\'); do;\n ovs-vsctl del-port br-int $(tun_port); done;, when: (step|int == 0) and\n (openvswitch_running.rc == 0)}\n - {name: Stop openvswitch service, service: name=openvswitch state=stopped,\n when: (step|int == 1) and (openvswitch_running.rc == 0)}\n - iptables: chain=OUTPUT 
action=insert protocol=tcp destination_port={{ item\n }} jump=DROP state=absent\n name: Unblock OVS port per compute node.\n when: step|int == 2\n with_items: [6640, 6653, 6633]\n - {name: start openvswitch service, service: name=openvswitch state=started,\n when: step|int == 3}\n when: odl_update_level == 2\n role_data_post_upgrade_tasks:\n - {command: systemctl is-active --quiet openvswitch, name: Check service openvswitch\n is running, register: openvswitch_running, tags: common}\n - {name: Delete OVS groups and ports, shell: sudo ovs-ofctl -O Openflow13 del-groups\n br-int; for tun_port in $(ovs-vsctl list-ports br-int | grep \'tun\'); do; ovs-vsctl\n del-port br-int $(tun_port); done;, when: (step|int == 0) and (openvswitch_running.rc\n == 0)}\n - {name: Stop openvswitch service, service: name=openvswitch state=stopped, when: (step|int\n == 1) and (openvswitch_running.rc == 0)}\n - iptables: chain=OUTPUT action=insert protocol=tcp destination_port={{ item }}\n jump=DROP state=absent\n name: Unblock OVS port per compute node.\n when: step|int == 2\n with_items: [6640, 6653, 6633]\n - {name: start openvswitch service, service: name=openvswitch state=started, when: step|int\n == 3}\n role_data_pre_upgrade_rolling_tasks: []\n role_data_puppet_config:\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-07-13.1\',\n config_volume: ceilometer, puppet_tags: ceilometer_config, step_config: \'include\n ::tripleo::profile::base::ceilometer::agent::polling\n\n \'}\n - config_image: 192.168.24.1:8787/rhosp13/openstack-iscsid:2018-07-13.1\n config_volume: iscsid\n puppet_tags: iscsid_config\n step_config: include ::tripleo::profile::base::iscsid\n volumes: [\'/etc/iscsi:/etc/iscsi\']\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-compute:2018-07-13.1\',\n config_volume: nova_libvirt, puppet_tags: \'nova_config,nova_paste_api_ini\',\n step_config: \'# TODO(emilien): figure how to deal with libvirt profile.\n\n # We\'\'ll probably 
treat it like we do with Neutron plugins.\n\n # Until then, just include it in the default nova-compute role.\n\n include tripleo::profile::base::nova::compute::libvirt\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-compute:2018-07-13.1\',\n config_volume: nova_libvirt, puppet_tags: \'libvirtd_config,nova_config,file,libvirt_tls_password\',\n step_config: \'include tripleo::profile::base::nova::libvirt\n\n\n include ::tripleo::profile::base::database::mysql::client\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-nova-compute:2018-07-13.1\',\n config_volume: nova_libvirt, step_config: \'include ::tripleo::profile::base::sshd\n\n include tripleo::profile::base::nova::migration::target\'}\n - {config_image: \'192.168.24.1:8787/rhosp13/openstack-cron:2018-07-13.1\', config_volume: crond,\n step_config: \'include ::tripleo::profile::base::logging::logrotate\'}\n role_data_service_config_settings: {}\n role_data_service_metadata_settings: null\n role_data_service_names: [ca_certs, ceilometer_agent_compute, docker, iscsid,\n kernel, mysql_client, nova_compute, nova_libvirt, nova_migration_target, ntp,\n logrotate_crond, opendaylight_ovs, snmp, sshd, timezone, tripleo_firewall, tripleo_packages,\n tuned]\n role_data_step_config: "# Copyright 2014 Red Hat, Inc.\\n# All Rights Reserved.\\n\\\n #\\n# Licensed under the Apache License, Version 2.0 (the \\"License\\"); you may\\n\\\n # not use this file except in compliance with the License. You may obtain\\n\\\n # a copy of the License at\\n#\\n# http://www.apache.org/licenses/LICENSE-2.0\\n\\\n #\\n# Unless required by applicable law or agreed to in writing, software\\n#\\\n \\ distributed under the License is distributed on an \\"AS IS\\" BASIS, WITHOUT\\n\\\n # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the\\n\\\n # License for the specific language governing permissions and limitations\\n\\\n # under the License.\\n\\n# Common config, from tripleo-heat-templates/puppet/manifests/overcloud_common.pp\\n\\\n # The content of this file will be used to generate\\n# the puppet manifests\\\n \\ for all roles, the placeholder\\n# Compute will be replaced by \'controller\',\\\n \\ \'blockstorage\',\\n# \'cephstorage\' and all the deployed roles.\\n\\nif hiera(\'step\')\\\n \\ >= 4 {\\n hiera_include(\'Compute_classes\', [])\\n}\\n\\n$package_manifest_name\\\n \\ = join([\'/var/lib/tripleo/installed-packages/overcloud_Compute\', hiera(\'step\')])\\n\\\n package_manifest{$package_manifest_name: ensure => present}\\n\\n# End of overcloud_common.pp\\n\\\n \\ninclude ::tripleo::trusted_cas\\ninclude ::tripleo::profile::base::docker\\n\\\n \\ninclude ::tripleo::profile::base::kernel\\ninclude ::tripleo::profile::base::database::mysql::client\\n\\\n include ::tripleo::profile::base::time::ntp\\ninclude tripleo::profile::base::neutron::plugins::ovs::opendaylight\\n\\\n \\ninclude ::tripleo::profile::base::snmp\\n\\ninclude ::tripleo::profile::base::sshd\\n\\\n \\ninclude ::timezone\\ninclude ::tripleo::firewall\\n\\ninclude ::tripleo::packages\\n\\\n \\ninclude ::tripleo::profile::base::tuned"\n role_data_update_tasks:\n - block:\n - {failed_when: false, name: Detect if puppet on the docker profile would restart\n the service, register: puppet_docker_noop_output, shell: "puppet apply --noop\\\n \\ --summarize --detailed-exitcodes --verbose \\\\\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules\\\n \\ \\\\\\n --color=false -e \\"class { \'tripleo::profile::base::docker\': step\\\n \\ => 1, }\\" 2>&1 | \\\\\\nawk -F \\":\\" \'/Out of sync:/ { print $2}\'\\n"}\n - {changed_when: docker_check_update.rc == 100, failed_when: \'docker_check_update.rc\n not in [0, 100]\', name: Is docker going to be updated, register: 
docker_check_update,\n shell: yum check-update docker}\n - {name: Set docker_rpm_needs_update fact, set_fact: \'docker_rpm_needs_update={{\n docker_check_update.rc == 100 }}\'}\n - {name: Set puppet_docker_is_outofsync fact, set_fact: \'puppet_docker_is_outofsync={{\n puppet_docker_noop_output.stdout|trim|int >= 1 }}\'}\n - {name: Stop all containers, shell: docker ps -q | xargs --no-run-if-empty\n -n1 docker stop, when: puppet_docker_is_outofsync or docker_rpm_needs_update}\n - name: Stop docker\n service: {name: docker, state: stopped}\n when: puppet_docker_is_outofsync or docker_rpm_needs_update\n - {name: Update the docker package, when: docker_rpm_needs_update, yum: name=docker\n state=latest update_cache=yes}\n - {changed_when: puppet_docker_apply.rc == 2, failed_when: \'puppet_docker_apply.rc\n not in [0, 2]\', name: Apply puppet which will start the service again, register: puppet_docker_apply,\n shell: "puppet apply --detailed-exitcodes --verbose \\\\\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules\\\n \\ \\\\\\n -e \\"class { \'tripleo::profile::base::docker\': step => 1, }\\"\\n"}\n when: step|int == 2\n - block:\n - name: store update level to update_level variable\n set_fact: {odl_update_level: 1}\n name: Get ODL update level\n - block:\n - iptables: chain=OUTPUT action=insert protocol=tcp destination_port={{ item\n }} jump=DROP\n name: Block connections to ODL.\n when: step|int == 0\n with_items: [6640, 6653, 6633]\n name: Run L2 update tasks that are similar to upgrade_tasks when update level\n is 2\n when: odl_update_level == 2\n - {name: Check for existing yum.pid, register: yum_pid_file, stat: path=/var/run/yum.pid,\n when: step|int == 0 or step|int == 3}\n - {fail: msg="ERROR existing yum.pid detected - can\'t continue! Please ensure\n there is no other package update process for the duration of the minor update\n worfklow. 
Exiting.", name: Exit if existing yum process, when: (step|int ==\n 0 or step|int == 3) and yum_pid_file.stat.exists}\n - {name: Update all packages, when: step == "3", yum: name=* state=latest update_cache=yes}\n role_data_upgrade_batch_tasks: []\n role_data_upgrade_tasks:\n - {command: systemctl is-enabled --quiet openstack-ceilometer-compute, ignore_errors: true,\n name: Check if openstack-ceilometer-compute is deployed, register: openstack_ceilometer_compute_enabled,\n tags: common}\n - {command: systemctl is-enabled --quiet openstack-ceilometer-polling, ignore_errors: true,\n name: Check if openstack-ceilometer-polling is deployed, register: openstack_ceilometer_polling_enabled,\n tags: common}\n - command: systemctl is-active --quiet openstack-ceilometer-compute\n name: \'PreUpgrade step0,validation: Check service openstack-ceilometer-compute\n is running\'\n tags: validation\n when: [step|int == 0, openstack_ceilometer_compute_enabled.rc == 0]\n - command: systemctl is-active --quiet openstack-ceilometer-polling\n name: \'PreUpgrade step0,validation: Check service openstack-ceilometer-polling\n is running\'\n tags: validation\n when: [step|int == 0, openstack_ceilometer_polling_enabled.rc == 0]\n - name: Stop and disable ceilometer compute agent\n service: name=openstack-ceilometer-compute state=stopped enabled=no\n when: [step|int == 2, openstack_ceilometer_compute_enabled.rc|default(\'\') ==\n 0]\n - name: Stop and disable ceilometer polling agent\n service: name=openstack-ceilometer-polling state=stopped enabled=no\n when: [step|int == 2, openstack_ceilometer_polling_enabled.rc|default(\'\') ==\n 0]\n - name: Set fact for removal of openstack-ceilometer-compute and polling package\n set_fact: {remove_ceilometer_compute_polling_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-ceilometer-compute package if operator requests it\n when: [step|int == 2, remove_ceilometer_compute_polling_package|bool]\n yum: 
name=openstack-ceilometer-compute state=removed\n - ignore_errors: true\n name: Remove openstack-ceilometer-polling package if operator requests it\n when: [step|int == 2, remove_ceilometer_compute_polling_package|bool]\n yum: name=openstack-ceilometer-polling state=removed\n - {name: Install docker packages on upgrade if missing, when: step|int == 3, yum: name=docker\n state=latest}\n - {command: systemctl is-enabled --quiet iscsid, ignore_errors: true, name: Check\n if iscsid service is deployed, register: iscsid_enabled, tags: common}\n - command: systemctl is-active --quiet iscsid\n name: \'PreUpgrade step0,validation: Check if iscsid is running\'\n tags: validation\n when: [step|int == 0, iscsid_enabled.rc == 0]\n - name: Stop and disable iscsid service\n service: name=iscsid state=stopped enabled=no\n when: [step|int == 2, iscsid_enabled.rc == 0]\n - {command: systemctl is-enabled --quiet iscsid.socket, ignore_errors: true, name: Check\n if iscsid.socket service is deployed, register: iscsid_socket_enabled, tags: common}\n - command: systemctl is-active --quiet iscsid.socket\n name: \'PreUpgrade step0,validation: Check if iscsid.socket is running\'\n tags: validation\n when: [step|int == 0, iscsid_socket_enabled.rc == 0]\n - name: Stop and disable iscsid.socket service\n service: name=iscsid.socket state=stopped enabled=no\n when: [step|int == 2, iscsid_socket_enabled.rc == 0]\n - {command: systemctl is-enabled --quiet openstack-nova-compute, ignore_errors: true,\n name: Check if nova_compute is deployed, register: nova_compute_enabled, tags: common}\n - {ini_file: dest=/etc/nova/nova.conf section=upgrade_levels option=compute value=,\n name: Set compute upgrade level to auto, when: step|int == 1}\n - command: systemctl is-active --quiet openstack-nova-compute\n name: \'PreUpgrade step0,validation: Check service openstack-nova-compute is\n running\'\n tags: validation\n when: [step|int == 0, nova_compute_enabled.rc == 0]\n - name: Stop and disable 
nova-compute service\n service: name=openstack-nova-compute state=stopped enabled=no\n when: [step|int == 2, nova_compute_enabled.rc == 0]\n - name: Set fact for removal of openstack-nova-compute package\n set_fact: {remove_nova_compute_package: false}\n when: step|int == 2\n - ignore_errors: true\n name: Remove openstack-nova-compute package if operator requests it\n when: [step|int == 2, remove_nova_compute_package|bool]\n yum: name=openstack-nova-compute state=removed\n - {command: systemctl is-enabled --quiet libvirtd, ignore_errors: true, name: Check\n if nova_libvirt is deployed, register: nova_libvirt_enabled, tags: common}\n - command: systemctl is-active --quiet libvirtd\n name: \'PreUpgrade step0,validation: Check service libvirtd is running\'\n tags: validation\n when: [step|int == 0, nova_libvirt_enabled.rc == 0]\n - name: Stop and disable libvirtd service\n service: name=libvirtd state=stopped enabled=no\n when: [step|int == 2, nova_libvirt_enabled.rc == 0]\n - {ignore_errors: true, name: Check openvswitch version., register: ovs_version,\n shell: \'rpm -qa | awk -F- \'\'/^openvswitch-2/{print $2 "-" $3}\'\'\', when: step|int\n == 2}\n - {ignore_errors: true, name: Check openvswitch packaging., register: ovs_packaging_issue,\n shell: \'rpm -q --scripts openvswitch | awk \'\'/postuninstall/,/*/\'\' | grep -q\n "systemctl.*try-restart"\', when: step|int == 2}\n - block:\n - file: {path: /root/OVS_UPGRADE, state: absent}\n name: \'Ensure empty directory: emptying.\'\n - file: {group: root, mode: 488, owner: root, path: /root/OVS_UPGRADE, state: directory}\n name: \'Ensure empty directory: creating.\'\n - {command: yum makecache, name: Make yum cache.}\n - {command: yumdownloader --destdir /root/OVS_UPGRADE --resolve openvswitch,\n name: Download OVS packages.}\n - {name: Get rpm list for manual upgrade of OVS., register: ovs_list_of_rpms,\n shell: ls -1 /root/OVS_UPGRADE/*.rpm}\n - args: {chdir: /root/OVS_UPGRADE}\n name: Manual upgrade of OVS\n shell: 
\'rpm -U --test {{item}} 2>&1 | grep "already installed" || \\\n\n rpm -U --replacepkgs --notriggerun --nopostun {{item}};\n\n \'\n with_items: [\'{{ovs_list_of_rpms.stdout_lines}}\']\n when: [step|int == 2, \'\'\'2.5.0-14\'\' in ovs_version.stdout|default(\'\'\'\') or ovs_packaging_issue|default(false)|succeeded\']\n - {command: systemctl is-enabled openvswitch, ignore_errors: true, name: Check\n if openvswitch is deployed, register: openvswitch_enabled, tags: common}\n - command: systemctl is-active --quiet openvswitch\n name: \'PreUpgrade step0,validation: Check service openvswitch is running\'\n tags: validation\n when: [step|int == 0, openvswitch_enabled.rc == 0]\n - name: Stop openvswitch service\n service: name=openvswitch state=stopped\n when: [step|int == 1, openvswitch_enabled.rc == 0]\n - block:\n - iptables: chain=OUTPUT action=insert protocol=tcp destination_port={{ item\n }} jump=DROP\n name: Block connections to ODL.\n when: step|int == 0\n with_items: [6640, 6653, 6633]\n name: ODL container L2 update and upgrade tasks\n - {name: Stop snmp service, service: name=snmpd state=stopped, when: step|int\n == 1}\n - args: {creates: /etc/sysconfig/ip6tables.n-o-upgrade}\n name: blank ipv6 rule before activating ipv6 firewall.\n shell: cat /etc/sysconfig/ip6tables > /etc/sysconfig/ip6tables.n-o-upgrade;\n cat</dev/null>/etc/sysconfig/ip6tables\n when: step|int == 3\n - {name: Check yum for rpm-python present, register: rpm_python_check, when: step|int\n == 0, yum: name=rpm-python state=present}\n - fail: msg="rpm-python package was not present before this run! 
Check environment\n before re-running"\n name: Fail when rpm-python wasn\'t present\n when: [step|int == 0, rpm_python_check.changed != false]\n - {name: Check for os-net-config upgrade, register: os_net_config_need_upgrade,\n shell: \'yum check-upgrade | awk \'\'/os-net-config/{print}\'\'\', when: step|int\n == 3}\n - {ignore_errors: true, name: Check that os-net-config has configuration, register: os_net_config_has_config,\n shell: test -s /etc/os-net-config/config.json, when: step|int == 3}\n - block:\n - {name: Upgrade os-net-config, yum: name=os-net-config state=latest}\n - {changed_when: os_net_config_upgrade.rc == 2, command: os-net-config --no-activate\n -c /etc/os-net-config/config.json -v --detailed-exit-codes, failed_when: \'os_net_config_upgrade.rc\n not in [0,2]\', name: take new os-net-config parameters into account now,\n register: os_net_config_upgrade}\n when: [step|int == 3, os_net_config_need_upgrade.stdout, os_net_config_has_config.rc\n == 0]\n - {name: Update all packages, when: step|int == 3, yum: name=* state=latest}\n role_data_workflow_tasks: {}\n role_name: Compute\novercloud:\n children:\n Compute: {}\n Controller: {}\n vars: {ctlplane_vip: 192.168.24.10, external_vip: 10.0.0.106, internal_api_vip: 172.17.1.10,\n redis_vip: 172.17.1.17, storage_mgmt_vip: 172.17.4.17, storage_vip: 172.17.3.10}\naodh_evaluator:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nkernel:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nneutron_metadata:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\npacemaker:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nnova_placement:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nsnmp:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nheat_api:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ncinder_api:\n children:\n Controller: {}\n vars: 
{ansible_ssh_user: heat-admin}\nswift_proxy:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\naodh_listener:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nswift_ringbuilder:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nneutron_dhcp:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ngnocchi_api:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ntimezone:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nceilometer_agent_central:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nheat_api_cloudwatch_disabled:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nneutron_plugin_ml2_odl:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\naodh_notifier:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ntripleo_firewall:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nswift_storage:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nredis:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ngnocchi_statsd:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\niscsid:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nnova_conductor:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nmysql_client:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nnova_consoleauth:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nglance_api:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nkeystone:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ncinder_volume:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nopendaylight_ovs:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nceilometer_collector_disabled:\n 
children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nceilometer_agent_notification:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nmemcached:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nmongodb_disabled:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nnova_api:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\naodh_api:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nnova_metadata:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nheat_engine:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nntp:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nceilometer_expirer_disabled:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nceilometer_api_disabled:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nnova_migration_target:\n children:\n Compute: {}\n vars: {ansible_ssh_user: heat-admin}\ncinder_scheduler:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ngnocchi_metricd:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ntripleo_packages:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nnova_scheduler:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nnova_compute:\n children:\n Compute: {}\n vars: {ansible_ssh_user: heat-admin}\nopendaylight_api:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nlogrotate_crond:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nhaproxy:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nsshd:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nmysql:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nceilometer_agent_compute:\n children:\n Compute: {}\n vars: {ansible_ssh_user: 
heat-admin}\nnova_libvirt:\n children:\n Compute: {}\n vars: {ansible_ssh_user: heat-admin}\nrabbitmq:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ntuned:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\npanko_api:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nhorizon:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nneutron_api:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nca_certs:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nheat_api_cfn:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\ndocker:\n children:\n Compute: {}\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nnova_vnc_proxy:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nclustercheck:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\nglance_registry_disabled:\n children:\n Controller: {}\n vars: {ansible_ssh_user: heat-admin}\n_meta:\n hostvars: {}\n', u'work_dir': u'/var/lib/mistral', u'verbosity': 1, u'skip_tags': u'', u'playbook': u'update_steps_playbook.yaml', u'ansible_extra_env_variables': {u'ANSIBLE_HOST_KEY_CHECKING': u'False', u'ANSIBLE_LOG_PATH': u'/var/log/mistral/package_update.log'}, u'module_path': u'/usr/share/ansible-modules', u'nodes': u'Compute', u'node_user': u'heat-admin', u'ansible_queue_name': u'update'}, u'id': u'5596ad14-bf1f-49c6-aac6-74459f29782f'}} >[stack@undercloud-0 ~]$ >[stack@undercloud-0 ~]$ >[stack@undercloud-0 ~]$ >[stack@undercloud-0 ~]$ >[stack@undercloud-0 ~]$ cat /etc/yum >yum/ yum.conf yum.repos.d/ >[stack@undercloud-0 ~]$ cat /etc/yum.repos.d/latest-installed >13 -p 2018-07-13.1 >[stack@undercloud-0 ~]$
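A note on the captured tasks above: the inline shell in the "Delete OVS groups and ports" task (appearing in both `role_data_post_update_tasks` and `role_data_post_upgrade_tasks`) reads `for tun_port in $(ovs-vsctl list-ports br-int | grep 'tun'); do; ovs-vsctl del-port br-int $(tun_port); done;`. As written, `do;` is a bash syntax error, and `$(tun_port)` is command substitution (it tries to execute a command named `tun_port`) rather than the variable expansion `$tun_port`. A minimal corrected sketch of the loop, using `echo` and hypothetical port names as stand-ins for the live `ovs-vsctl` calls:

```shell
# Stand-in for: ovs-vsctl list-ports br-int | grep tun
# (port names below are hypothetical, for illustration only)
ports="vxlan-tun1 gre-tun2"

# Corrected loop: no semicolon after "do", and "$tun_port" (variable
# expansion), not "$(tun_port)" (command substitution).
for tun_port in $ports; do
  echo "would delete: $tun_port"   # stand-in for: ovs-vsctl del-port br-int "$tun_port"
done
```

Whether this syntax issue is what breaks the L2 minor-update flow here is for the bug triage to confirm; the sketch only shows the corrected form of the loop.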