Red Hat Bugzilla – Attachment 1455969 Details for Bug 1597100
failing to import and register the nodes to the director, getting error (Background on this error at: http://sqlalche.me/e/f405)
Attachment: install-undercloud.log

Description: install-undercloud
Filename: install-undercloud.log
MIME Type: text/plain
Creator: svegesna
Created: 2018-07-02 14:47:08 UTC
Size: 2.41 MB
2018-06-21 16:02:12,111 INFO: Logging to /home/sudheer/.instack/install-undercloud.log
2018-06-21 16:02:12,346 INFO: Checking for a FQDN hostname...
2018-06-21 16:02:12,364 INFO: Static hostname detected as facebook
2018-06-21 16:02:12,377 INFO: Transient hostname detected as facebook
2018-06-21 16:02:12,379 WARNING: Option "undercloud_public_vip" from group "DEFAULT" is deprecated. Use option "undercloud_public_host" from group "DEFAULT".
2018-06-21 16:02:12,379 WARNING: Option "undercloud_admin_vip" from group "DEFAULT" is deprecated. Use option "undercloud_admin_host" from group "DEFAULT".
2018-06-21 16:02:12,379 WARNING: Option "masquerade_network" from group "DEFAULT" is deprecated for removal (With support for routed networks, masquerading of the provisioning networks is moved to a boolean option for each subnet.). Its value may be silently ignored in the future.
2018-06-21 16:02:12,380 WARNING: Option "ipxe_deploy" from group "DEFAULT" is deprecated. Use option "ipxe_enabled" from group "DEFAULT".
2018-06-21 16:02:12,380 WARNING: Option "network_cidr" from group "DEFAULT" is deprecated. Use option "cidr" from group "ctlplane-subnet".
2018-06-21 16:02:12,380 WARNING: Option "dhcp_start" from group "DEFAULT" is deprecated. Use option "dhcp_start" from group "ctlplane-subnet".
2018-06-21 16:02:12,380 WARNING: Option "dhcp_end" from group "DEFAULT" is deprecated. Use option "dhcp_end" from group "ctlplane-subnet".
2018-06-21 16:02:12,380 WARNING: Option "inspection_iprange" from group "DEFAULT" is deprecated. Use option "inspection_iprange" from group "ctlplane-subnet".
2018-06-21 16:02:12,380 WARNING: Option "network_gateway" from group "DEFAULT" is deprecated. Use option "gateway" from group "ctlplane-subnet".
2018-06-21 16:02:12,380 ERROR: Undercloud configuration validation failed: Hostname "facebook" is not fully qualified.
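Each deprecation warning above names the replacement option and its new group. A sketch of the corresponding undercloud.conf layout, with the network options moved into the per-subnet section (the IP values here are placeholders for illustration, not taken from this log):

```ini
[DEFAULT]
# Deprecated spellings reported in the log and their replacements:
#   undercloud_public_vip -> undercloud_public_host
#   undercloud_admin_vip  -> undercloud_admin_host
#   ipxe_deploy           -> ipxe_enabled
undercloud_public_host = 192.168.24.2
undercloud_admin_host = 192.168.24.3
ipxe_enabled = true

[ctlplane-subnet]
# Network options move from [DEFAULT] to the subnet group:
#   network_cidr -> cidr, network_gateway -> gateway;
#   dhcp_start, dhcp_end, inspection_iprange keep their names.
cidr = 192.168.24.0/24
gateway = 192.168.24.1
dhcp_start = 192.168.24.5
dhcp_end = 192.168.24.30
inspection_iprange = 192.168.24.100,192.168.24.120
```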
2018-06-21 16:02:12,381 DEBUG: An exception occurred
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 2309, in install
    _validate_configuration()
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 879, in _validate_configuration
    _validate_network()
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 801, in _validate_network
    validator.validate_config(params, error_handler)
  File "/usr/lib/python2.7/site-packages/instack_undercloud/validator.py", line 37, in validate_config
    _validate_value_formats(local_params, error_callback)
  File "/usr/lib/python2.7/site-packages/instack_undercloud/validator.py", line 107, in _validate_value_formats
    error_callback(message)
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 787, in error_handler
    raise validator.FailedValidation(message)
FailedValidation: Hostname "facebook" is not fully qualified.
2018-06-21 16:02:12,389 ERROR:
#############################################################################
Undercloud install failed.

Reason: Hostname "facebook" is not fully qualified.

See the previous output for details about what went wrong. The full install
log can be found at /home/sudheer/.instack/install-undercloud.log.

#############################################################################

2018-06-26 09:10:54,172 INFO: Logging to /home/sudheer/.instack/install-undercloud.log
2018-06-26 09:10:54,268 INFO: Checking for a FQDN hostname...
2018-06-26 09:10:54,295 INFO: Static hostname detected as facebook
2018-06-26 09:10:54,310 INFO: Transient hostname detected as facebook
2018-06-26 09:10:54,310 ERROR: An error occurred during configuration validation, please check your host configuration and try again. Error message: Configured hostname is not fully qualified.
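The traceback shows validation raising `FailedValidation` because the host is named plainly "facebook", with no domain component; later in the log the run proceeds once the hostname is "facebook.local.com". A minimal sketch of this kind of check (the helper name `validate_fqdn` is hypothetical, not instack's actual code):

```python
class FailedValidation(Exception):
    """Raised when an undercloud configuration value fails validation."""


def validate_fqdn(hostname):
    # Treat a hostname as fully qualified only if it has a domain part:
    # at least two dot-separated, non-empty labels.
    labels = hostname.split(".")
    if len(labels) < 2 or not all(labels):
        raise FailedValidation(
            'Hostname "%s" is not fully qualified.' % hostname)


# A bare hostname fails exactly as in the log above.
try:
    validate_fqdn("facebook")
except FailedValidation as exc:
    print(exc)  # Hostname "facebook" is not fully qualified.

# A hostname with a domain component passes silently.
validate_fqdn("facebook.local.com")
```

This matches the fix visible in the later log entries: setting a fully qualified static hostname lets validation pass and the install continue.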
2018-06-26 09:19:55,805 INFO: Logging to /home/sudheer/.instack/install-undercloud.log
2018-06-26 09:19:55,986 INFO: Checking for a FQDN hostname...
2018-06-26 09:19:56,019 INFO: Static hostname detected as facebook
2018-06-26 09:19:56,034 INFO: Transient hostname detected as facebook
2018-06-26 09:19:56,034 ERROR: An error occurred during configuration validation, please check your host configuration and try again. Error message: Configured hostname is not fully qualified.
2018-06-26 09:21:26,118 INFO: Logging to /home/sudheer/.instack/install-undercloud.log
2018-06-26 09:21:26,211 INFO: Checking for a FQDN hostname...
2018-06-26 09:21:26,243 INFO: Static hostname detected as facebook.local.com
2018-06-26 09:21:26,262 INFO: Transient hostname detected as facebook.local.com
2018-06-26 09:21:26,264 WARNING: Option "undercloud_public_vip" from group "DEFAULT" is deprecated. Use option "undercloud_public_host" from group "DEFAULT".
2018-06-26 09:21:26,264 WARNING: Option "undercloud_admin_vip" from group "DEFAULT" is deprecated. Use option "undercloud_admin_host" from group "DEFAULT".
2018-06-26 09:21:26,264 WARNING: Option "masquerade_network" from group "DEFAULT" is deprecated for removal (With support for routed networks, masquerading of the provisioning networks is moved to a boolean option for each subnet.). Its value may be silently ignored in the future.
2018-06-26 09:21:26,265 WARNING: Option "ipxe_deploy" from group "DEFAULT" is deprecated. Use option "ipxe_enabled" from group "DEFAULT".
2018-06-26 09:21:26,265 WARNING: Option "network_cidr" from group "DEFAULT" is deprecated. Use option "cidr" from group "ctlplane-subnet".
2018-06-26 09:21:26,265 WARNING: Option "dhcp_start" from group "DEFAULT" is deprecated. Use option "dhcp_start" from group "ctlplane-subnet".
2018-06-26 09:21:26,265 WARNING: Option "dhcp_end" from group "DEFAULT" is deprecated. Use option "dhcp_end" from group "ctlplane-subnet".
>2018-06-26 09:21:26,265 WARNING: Option "inspection_iprange" from group "DEFAULT" is deprecated. Use option "inspection_iprange" from group "ctlplane-subnet". >2018-06-26 09:21:26,265 WARNING: Option "network_gateway" from group "DEFAULT" is deprecated. Use option "gateway" from group "ctlplane-subnet". >2018-06-26 09:21:26,298 INFO: Generated new password for undercloud_admin_token >2018-06-26 09:21:26,299 INFO: Generated new password for undercloud_heat_encryption_key >2018-06-26 09:21:26,299 INFO: Generated new password for undercloud_neutron_password >2018-06-26 09:21:26,299 INFO: Generated new password for undercloud_nova_password >2018-06-26 09:21:26,299 INFO: Generated new password for undercloud_ironic_password >2018-06-26 09:21:26,300 INFO: Generated new password for undercloud_aodh_password >2018-06-26 09:21:26,300 INFO: Generated new password for undercloud_gnocchi_password >2018-06-26 09:21:26,300 INFO: Generated new password for undercloud_ceilometer_password >2018-06-26 09:21:26,300 INFO: Generated new password for undercloud_panko_password >2018-06-26 09:21:26,300 INFO: Generated new password for undercloud_ceilometer_metering_secret >2018-06-26 09:21:26,301 INFO: Generated new password for undercloud_ceilometer_snmpd_password >2018-06-26 09:21:26,301 INFO: Generated new password for undercloud_swift_password >2018-06-26 09:21:26,301 INFO: Generated new password for undercloud_mistral_password >2018-06-26 09:21:26,301 INFO: Generated new password for undercloud_rabbit_cookie >2018-06-26 09:21:26,301 INFO: Generated new password for undercloud_rabbit_password >2018-06-26 09:21:26,301 INFO: Generated new password for undercloud_rabbit_username >2018-06-26 09:21:26,302 INFO: Generated new password for undercloud_heat_stack_domain_admin_password >2018-06-26 09:21:26,302 INFO: Generated new password for undercloud_swift_hash_suffix >2018-06-26 09:21:26,302 INFO: Generated new password for undercloud_haproxy_stats_password >2018-06-26 09:21:26,302 INFO: 
Generated new password for undercloud_zaqar_password >2018-06-26 09:21:26,302 INFO: Generated new password for undercloud_horizon_secret_key >2018-06-26 09:21:26,302 INFO: Generated new password for undercloud_cinder_password >2018-06-26 09:21:26,302 INFO: Generated new password for undercloud_novajoin_password >2018-06-26 09:21:26,403 INFO: Running yum clean all >2018-06-26 09:21:27,548 INFO: Loaded plugins: langpacks, product-id, search-disabled-repos, subscription- >2018-06-26 09:21:27,549 INFO: : manager >2018-06-26 09:21:32,122 INFO: Cleaning repos: rhel-7-server-extras-rpms rhel-7-server-openstack-beta-rpms >2018-06-26 09:21:32,122 INFO: : rhel-7-server-rh-common-rpms rhel-7-server-rpms >2018-06-26 09:21:32,123 INFO: : rhel-ha-for-rhel-7-server-rpms >2018-06-26 09:21:32,123 INFO: Cleaning up everything >2018-06-26 09:21:32,123 INFO: Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos >2018-06-26 09:21:32,274 INFO: yum-clean-all completed successfully >2018-06-26 09:21:32,274 INFO: Running yum update >2018-06-26 09:21:32,438 INFO: Loaded plugins: langpacks, product-id, search-disabled-repos, subscription- >2018-06-26 09:21:32,438 INFO: : manager >2018-06-26 09:22:09,499 INFO: Resolving Dependencies >2018-06-26 09:22:09,499 INFO: --> Running transaction check >2018-06-26 09:22:09,499 INFO: ---> Package git.x86_64 0:1.8.3.1-13.el7 will be updated >2018-06-26 09:22:09,567 INFO: ---> Package git.x86_64 0:1.8.3.1-14.el7_5 will be an update >2018-06-26 09:22:10,071 INFO: ---> Package perl-Git.noarch 0:1.8.3.1-13.el7 will be updated >2018-06-26 09:22:10,072 INFO: ---> Package perl-Git.noarch 0:1.8.3.1-14.el7_5 will be an update >2018-06-26 09:22:18,299 INFO: --> Finished Dependency Resolution >2018-06-26 09:22:18,396 INFO: >2018-06-26 09:22:18,396 INFO: Dependencies Resolved >2018-06-26 09:22:18,397 INFO: >2018-06-26 09:22:18,397 INFO: 
================================================================================ >2018-06-26 09:22:18,397 INFO: Package Arch Version Repository Size >2018-06-26 09:22:18,397 INFO: ================================================================================ >2018-06-26 09:22:18,397 INFO: Updating: >2018-06-26 09:22:18,397 INFO: git x86_64 1.8.3.1-14.el7_5 rhel-7-server-rpms 4.4 M >2018-06-26 09:22:18,398 INFO: perl-Git noarch 1.8.3.1-14.el7_5 rhel-7-server-rpms 54 k >2018-06-26 09:22:18,398 INFO: >2018-06-26 09:22:18,398 INFO: Transaction Summary >2018-06-26 09:22:18,398 INFO: ================================================================================ >2018-06-26 09:22:18,398 INFO: Upgrade 2 Packages >2018-06-26 09:22:18,398 INFO: >2018-06-26 09:22:18,398 INFO: Total download size: 4.5 M >2018-06-26 09:22:18,398 INFO: Downloading packages: >2018-06-26 09:22:18,405 INFO: No Presto metadata available for rhel-7-server-rpms >2018-06-26 09:22:22,215 INFO: -------------------------------------------------------------------------------- >2018-06-26 09:22:22,215 INFO: Total 1.2 MB/s | 4.5 MB 00:03 >2018-06-26 09:22:22,224 INFO: Running transaction check >2018-06-26 09:22:22,327 INFO: Running transaction test >2018-06-26 09:22:23,542 INFO: Transaction test succeeded >2018-06-26 09:22:23,543 INFO: Running transaction >2018-06-26 09:22:25,138 INFO: Updating : perl-Git-1.8.3.1-14.el7_5.noarch 1/4 >2018-06-26 09:22:25,327 INFO: Updating : git-1.8.3.1-14.el7_5.x86_64 2/4 >2018-06-26 09:22:25,335 INFO: Cleanup : perl-Git-1.8.3.1-13.el7.noarch 3/4 >2018-06-26 09:22:26,519 INFO: Cleanup : git-1.8.3.1-13.el7.x86_64 4/4 >2018-06-26 09:22:26,533 INFO: Verifying : git-1.8.3.1-14.el7_5.x86_64 1/4 >2018-06-26 09:22:26,538 INFO: Verifying : perl-Git-1.8.3.1-14.el7_5.noarch 2/4 >2018-06-26 09:22:26,557 INFO: Verifying : git-1.8.3.1-13.el7.x86_64 3/4 >2018-06-26 09:22:26,734 INFO: Verifying : perl-Git-1.8.3.1-13.el7.noarch 4/4 >2018-06-26 09:22:26,734 INFO: >2018-06-26 09:22:26,735 
INFO: Updated: >2018-06-26 09:22:26,735 INFO: git.x86_64 0:1.8.3.1-14.el7_5 perl-Git.noarch 0:1.8.3.1-14.el7_5 >2018-06-26 09:22:26,735 INFO: >2018-06-26 09:22:26,735 INFO: Complete! >2018-06-26 09:22:26,847 INFO: yum-update completed successfully >2018-06-26 09:22:26,876 INFO: Running instack >2018-06-26 09:22:27,049 INFO: INFO: 2018-06-26 09:22:27,049 -- Starting run of instack >2018-06-26 09:22:27,056 INFO: INFO: 2018-06-26 09:22:27,056 -- Using json file: /usr/share/instack-undercloud/json-files/rhel-7-undercloud-packages.json >2018-06-26 09:22:27,067 INFO: INFO: 2018-06-26 09:22:27,067 -- Running Installation >2018-06-26 09:22:27,067 INFO: INFO: 2018-06-26 09:22:27,067 -- Initialized with elements path: /usr/share/tripleo-puppet-elements /usr/share/instack-undercloud /usr/share/tripleo-image-elements /usr/share/diskimage-builder/elements >2018-06-26 09:22:27,238 INFO: WARNING: 2018-06-26 09:22:27,238 -- expand_dependencies() deprecated, use get_elements >2018-06-26 09:22:27,382 INFO: INFO: 2018-06-26 09:22:27,382 -- List of all elements and dependencies: undercloud-install dib-python source-repositories install-types puppet-modules install-bin pip-manifest puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url pkg-map enable-packages-install puppet os-apply-config hiera package-installs >2018-06-26 09:22:27,382 INFO: INFO: 2018-06-26 09:22:27,382 -- Excluding element pip-and-virtualenv >2018-06-26 09:22:27,382 INFO: INFO: 2018-06-26 09:22:27,382 -- Excluding element pip-manifest >2018-06-26 09:22:27,382 INFO: INFO: 2018-06-26 09:22:27,382 -- Excluding element package-installs >2018-06-26 09:22:27,383 INFO: INFO: 2018-06-26 09:22:27,382 -- Excluding element pkg-map >2018-06-26 09:22:27,383 INFO: INFO: 2018-06-26 09:22:27,382 -- Excluding element puppet >2018-06-26 09:22:27,383 INFO: INFO: 2018-06-26 09:22:27,382 -- Excluding element cache-url >2018-06-26 09:22:27,383 INFO: INFO: 2018-06-26 09:22:27,382 -- Excluding element 
dib-python >2018-06-26 09:22:27,383 INFO: INFO: 2018-06-26 09:22:27,382 -- Excluding element install-bin >2018-06-26 09:22:27,383 INFO: INFO: 2018-06-26 09:22:27,383 -- List of all elements and dependencies after excludes: undercloud-install source-repositories install-types puppet-modules puppet-stack-config os-refresh-config element-manifest manifests enable-packages-install os-apply-config hiera >2018-06-26 09:22:27,606 INFO: INFO: 2018-06-26 09:22:27,605 -- Running hook extra-data >2018-06-26 09:22:27,606 INFO: INFO: 2018-06-26 09:22:27,606 -- ############### Begin stdout/stderr logging ############### >2018-06-26 09:22:27,644 INFO: dib-run-parts Sourcing environment file /tmp/tmpw92w30/extra-data.d/../environment.d/00-dib-v2-env >2018-06-26 09:22:27,646 INFO: + source /tmp/tmpw92w30/extra-data.d/../environment.d/00-dib-v2-env >2018-06-26 09:22:27,647 INFO: ++ export 'IMAGE_ELEMENT=undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs' >2018-06-26 09:22:27,647 INFO: ++ IMAGE_ELEMENT='undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs' >2018-06-26 09:22:27,647 INFO: ++ export 'IMAGE_ELEMENT_YAML={cache-url: /usr/share/diskimage-builder/elements/cache-url, dib-python: /usr/share/diskimage-builder/elements/dib-python, >2018-06-26 09:22:27,647 INFO: element-manifest: /usr/share/diskimage-builder/elements/element-manifest, enable-packages-install: /usr/share/tripleo-image-elements/enable-packages-install, >2018-06-26 09:22:27,648 INFO: hiera: /usr/share/tripleo-puppet-elements/hiera, install-bin: 
/usr/share/diskimage-builder/elements/install-bin, >2018-06-26 09:22:27,648 INFO: install-types: /usr/share/diskimage-builder/elements/install-types, manifests: /usr/share/diskimage-builder/elements/manifests, >2018-06-26 09:22:27,648 INFO: os-apply-config: /usr/share/tripleo-image-elements/os-apply-config, os-refresh-config: /usr/share/tripleo-image-elements/os-refresh-config, >2018-06-26 09:22:27,648 INFO: package-installs: /usr/share/diskimage-builder/elements/package-installs, pip-and-virtualenv: /usr/share/diskimage-builder/elements/pip-and-virtualenv, >2018-06-26 09:22:27,648 INFO: pip-manifest: /usr/share/tripleo-image-elements/pip-manifest, pkg-map: /usr/share/diskimage-builder/elements/pkg-map, >2018-06-26 09:22:27,649 INFO: puppet: /usr/share/tripleo-puppet-elements/puppet, puppet-modules: /usr/share/tripleo-puppet-elements/puppet-modules, >2018-06-26 09:22:27,649 INFO: puppet-stack-config: /usr/share/instack-undercloud/puppet-stack-config, source-repositories: /usr/share/diskimage-builder/elements/source-repositories, >2018-06-26 09:22:27,649 INFO: undercloud-install: /usr/share/instack-undercloud/undercloud-install} >2018-06-26 09:22:27,649 INFO: ' >2018-06-26 09:22:27,649 INFO: ++ IMAGE_ELEMENT_YAML='{cache-url: /usr/share/diskimage-builder/elements/cache-url, dib-python: /usr/share/diskimage-builder/elements/dib-python, >2018-06-26 09:22:27,649 INFO: element-manifest: /usr/share/diskimage-builder/elements/element-manifest, enable-packages-install: /usr/share/tripleo-image-elements/enable-packages-install, >2018-06-26 09:22:27,650 INFO: hiera: /usr/share/tripleo-puppet-elements/hiera, install-bin: /usr/share/diskimage-builder/elements/install-bin, >2018-06-26 09:22:27,650 INFO: install-types: /usr/share/diskimage-builder/elements/install-types, manifests: /usr/share/diskimage-builder/elements/manifests, >2018-06-26 09:22:27,650 INFO: os-apply-config: /usr/share/tripleo-image-elements/os-apply-config, os-refresh-config: 
/usr/share/tripleo-image-elements/os-refresh-config, >2018-06-26 09:22:27,650 INFO: package-installs: /usr/share/diskimage-builder/elements/package-installs, pip-and-virtualenv: /usr/share/diskimage-builder/elements/pip-and-virtualenv, >2018-06-26 09:22:27,650 INFO: pip-manifest: /usr/share/tripleo-image-elements/pip-manifest, pkg-map: /usr/share/diskimage-builder/elements/pkg-map, >2018-06-26 09:22:27,651 INFO: puppet: /usr/share/tripleo-puppet-elements/puppet, puppet-modules: /usr/share/tripleo-puppet-elements/puppet-modules, >2018-06-26 09:22:27,651 INFO: puppet-stack-config: /usr/share/instack-undercloud/puppet-stack-config, source-repositories: /usr/share/diskimage-builder/elements/source-repositories, >2018-06-26 09:22:27,651 INFO: undercloud-install: /usr/share/instack-undercloud/undercloud-install} >2018-06-26 09:22:27,651 INFO: ' >2018-06-26 09:22:27,651 INFO: ++ export -f get_image_element_array >2018-06-26 09:22:27,651 INFO: + set +o xtrace >2018-06-26 09:22:27,651 INFO: dib-run-parts Sourcing environment file /tmp/tmpw92w30/extra-data.d/../environment.d/01-export-install-types.bash >2018-06-26 09:22:27,652 INFO: + source /tmp/tmpw92w30/extra-data.d/../environment.d/01-export-install-types.bash >2018-06-26 09:22:27,652 INFO: ++ export DIB_DEFAULT_INSTALLTYPE=package >2018-06-26 09:22:27,652 INFO: ++ DIB_DEFAULT_INSTALLTYPE=package >2018-06-26 09:22:27,652 INFO: + set +o xtrace >2018-06-26 09:22:27,652 INFO: dib-run-parts Sourcing environment file /tmp/tmpw92w30/extra-data.d/../environment.d/01-puppet-module-pins.sh >2018-06-26 09:22:27,652 INFO: + source /tmp/tmpw92w30/extra-data.d/../environment.d/01-puppet-module-pins.sh >2018-06-26 09:22:27,653 INFO: ++ export DIB_REPOREF_puppetlabs_ntp=4.2.x >2018-06-26 09:22:27,653 INFO: ++ DIB_REPOREF_puppetlabs_ntp=4.2.x >2018-06-26 09:22:27,653 INFO: + set +o xtrace >2018-06-26 09:22:27,653 INFO: dib-run-parts Sourcing environment file 
/tmp/tmpw92w30/extra-data.d/../environment.d/02-puppet-modules-install-types.sh >2018-06-26 09:22:27,653 INFO: + source /tmp/tmpw92w30/extra-data.d/../environment.d/02-puppet-modules-install-types.sh >2018-06-26 09:22:27,653 INFO: ++ DIB_DEFAULT_INSTALLTYPE=package >2018-06-26 09:22:27,654 INFO: ++ DIB_INSTALLTYPE_puppet_modules=package >2018-06-26 09:22:27,654 INFO: ++ '[' package = source ']' >2018-06-26 09:22:27,654 INFO: + set +o xtrace >2018-06-26 09:22:27,654 INFO: dib-run-parts Sourcing environment file /tmp/tmpw92w30/extra-data.d/../environment.d/10-os-apply-config-venv-dir.bash >2018-06-26 09:22:27,655 INFO: + source /tmp/tmpw92w30/extra-data.d/../environment.d/10-os-apply-config-venv-dir.bash >2018-06-26 09:22:27,655 INFO: ++ '[' -z '' ']' >2018-06-26 09:22:27,656 INFO: ++ export OS_APPLY_CONFIG_VENV_DIR=/opt/stack/venvs/os-apply-config >2018-06-26 09:22:27,656 INFO: ++ OS_APPLY_CONFIG_VENV_DIR=/opt/stack/venvs/os-apply-config >2018-06-26 09:22:27,656 INFO: + set +o xtrace >2018-06-26 09:22:27,656 INFO: dib-run-parts Sourcing environment file /tmp/tmpw92w30/extra-data.d/../environment.d/14-manifests >2018-06-26 09:22:27,658 INFO: + source /tmp/tmpw92w30/extra-data.d/../environment.d/14-manifests >2018-06-26 09:22:27,658 INFO: ++ export DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests >2018-06-26 09:22:27,658 INFO: ++ DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests >2018-06-26 09:22:27,658 INFO: ++ export DIB_MANIFEST_SAVE_DIR=instack.d/ >2018-06-26 09:22:27,659 INFO: ++ DIB_MANIFEST_SAVE_DIR=instack.d/ >2018-06-26 09:22:27,659 INFO: + set +o xtrace >2018-06-26 09:22:27,659 INFO: dib-run-parts Running /tmp/tmpw92w30/extra-data.d/10-install-git >2018-06-26 09:22:27,661 INFO: + yum -y install git >2018-06-26 09:22:27,816 INFO: Loaded plugins: langpacks, product-id, search-disabled-repos, subscription- >2018-06-26 09:22:27,816 INFO: : manager >2018-06-26 09:22:32,127 INFO: Package git-1.8.3.1-14.el7_5.x86_64 already installed and latest version >2018-06-26 09:22:32,127 
INFO: Nothing to do >2018-06-26 09:22:32,178 INFO: dib-run-parts 10-install-git completed >2018-06-26 09:22:32,178 INFO: dib-run-parts Running /tmp/tmpw92w30/extra-data.d/20-manifest-dir >2018-06-26 09:22:32,182 INFO: + set -eu >2018-06-26 09:22:32,182 INFO: + set -o pipefail >2018-06-26 09:22:32,182 INFO: + sudo mkdir -p /tmp/instack.aTNXlO/mnt//etc/dib-manifests >2018-06-26 09:22:32,202 INFO: dib-run-parts 20-manifest-dir completed >2018-06-26 09:22:32,202 INFO: dib-run-parts Running /tmp/tmpw92w30/extra-data.d/75-inject-element-manifest >2018-06-26 09:22:32,205 INFO: + set -eu >2018-06-26 09:22:32,205 INFO: + set -o pipefail >2018-06-26 09:22:32,205 INFO: + DIB_ELEMENT_MANIFEST_PATH=/etc/dib-manifests/dib-element-manifest >2018-06-26 09:22:32,206 INFO: ++ dirname /etc/dib-manifests/dib-element-manifest >2018-06-26 09:22:32,206 INFO: + sudo mkdir -p /tmp/instack.aTNXlO/mnt//etc/dib-manifests >2018-06-26 09:22:32,218 INFO: + sudo /bin/bash -c 'echo undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs | tr '\'' '\'' '\''\n'\'' > /tmp/instack.aTNXlO/mnt//etc/dib-manifests/dib-element-manifest' >2018-06-26 09:22:32,234 INFO: dib-run-parts 75-inject-element-manifest completed >2018-06-26 09:22:32,234 INFO: dib-run-parts Running /tmp/tmpw92w30/extra-data.d/98-source-repositories >2018-06-26 09:22:32,246 INFO: Getting /root/.cache/image-create/source-repositories/repositories_flock: Tue Jun 26 09:22:32 IST 2018 for /tmp/tmpw92w30/source-repository-puppet-modules >2018-06-26 09:22:32,250 INFO: (0001 / 0081) >2018-06-26 09:22:32,263 INFO: puppetlabs-apache install type not set to source >2018-06-26 09:22:32,263 INFO: (0002 / 0081) >2018-06-26 09:22:32,267 INFO: puppet-aodh install type not set to source >2018-06-26 09:22:32,268 INFO: (0003 / 0081) 
>2018-06-26 09:22:32,272 INFO: puppet-auditd install type not set to source >2018-06-26 09:22:32,273 INFO: (0004 / 0081) >2018-06-26 09:22:32,277 INFO: puppet-barbican install type not set to source >2018-06-26 09:22:32,278 INFO: (0005 / 0081) >2018-06-26 09:22:32,282 INFO: puppet-cassandra install type not set to source >2018-06-26 09:22:32,283 INFO: (0006 / 0081) >2018-06-26 09:22:32,287 INFO: puppet-ceph install type not set to source >2018-06-26 09:22:32,288 INFO: (0007 / 0081) >2018-06-26 09:22:32,292 INFO: puppet-ceilometer install type not set to source >2018-06-26 09:22:32,293 INFO: (0008 / 0081) >2018-06-26 09:22:32,297 INFO: puppet-congress install type not set to source >2018-06-26 09:22:32,298 INFO: (0009 / 0081) >2018-06-26 09:22:32,302 INFO: puppet-gnocchi install type not set to source >2018-06-26 09:22:32,303 INFO: (0010 / 0081) >2018-06-26 09:22:32,307 INFO: puppet-certmonger install type not set to source >2018-06-26 09:22:32,307 INFO: (0011 / 0081) >2018-06-26 09:22:32,311 INFO: puppet-cinder install type not set to source >2018-06-26 09:22:32,312 INFO: (0012 / 0081) >2018-06-26 09:22:32,316 INFO: puppet-common install type not set to source >2018-06-26 09:22:32,317 INFO: (0013 / 0081) >2018-06-26 09:22:32,321 INFO: puppet-contrail install type not set to source >2018-06-26 09:22:32,322 INFO: (0014 / 0081) >2018-06-26 09:22:32,326 INFO: puppetlabs-concat install type not set to source >2018-06-26 09:22:32,327 INFO: (0015 / 0081) >2018-06-26 09:22:32,331 INFO: puppetlabs-firewall install type not set to source >2018-06-26 09:22:32,331 INFO: (0016 / 0081) >2018-06-26 09:22:32,335 INFO: puppet-glance install type not set to source >2018-06-26 09:22:32,336 INFO: (0017 / 0081) >2018-06-26 09:22:32,341 INFO: puppet-gluster install type not set to source >2018-06-26 09:22:32,341 INFO: (0018 / 0081) >2018-06-26 09:22:32,345 INFO: puppetlabs-haproxy install type not set to source >2018-06-26 09:22:32,346 INFO: (0019 / 0081) >2018-06-26 09:22:32,350 INFO: 
puppet-heat install type not set to source >2018-06-26 09:22:32,351 INFO: (0020 / 0081) >2018-06-26 09:22:32,355 INFO: puppet-healthcheck install type not set to source >2018-06-26 09:22:32,356 INFO: (0021 / 0081) >2018-06-26 09:22:32,360 INFO: puppet-horizon install type not set to source >2018-06-26 09:22:32,360 INFO: (0022 / 0081) >2018-06-26 09:22:32,364 INFO: puppetlabs-inifile install type not set to source >2018-06-26 09:22:32,365 INFO: (0023 / 0081) >2018-06-26 09:22:32,369 INFO: puppet-kafka install type not set to source >2018-06-26 09:22:32,370 INFO: (0024 / 0081) >2018-06-26 09:22:32,374 INFO: puppet-keystone install type not set to source >2018-06-26 09:22:32,375 INFO: (0025 / 0081) >2018-06-26 09:22:32,379 INFO: puppet-manila install type not set to source >2018-06-26 09:22:32,379 INFO: (0026 / 0081) >2018-06-26 09:22:32,384 INFO: puppet-memcached install type not set to source >2018-06-26 09:22:32,384 INFO: (0027 / 0081) >2018-06-26 09:22:32,389 INFO: puppet-mistral install type not set to source >2018-06-26 09:22:32,389 INFO: (0028 / 0081) >2018-06-26 09:22:32,393 INFO: puppetlabs-mongodb install type not set to source >2018-06-26 09:22:32,394 INFO: (0029 / 0081) >2018-06-26 09:22:32,398 INFO: puppetlabs-mysql install type not set to source >2018-06-26 09:22:32,398 INFO: (0030 / 0081) >2018-06-26 09:22:32,402 INFO: puppet-neutron install type not set to source >2018-06-26 09:22:32,403 INFO: (0031 / 0081) >2018-06-26 09:22:32,407 INFO: puppet-nova install type not set to source >2018-06-26 09:22:32,408 INFO: (0032 / 0081) >2018-06-26 09:22:32,412 INFO: puppet-octavia install type not set to source >2018-06-26 09:22:32,413 INFO: (0033 / 0081) >2018-06-26 09:22:32,417 INFO: puppet-oslo install type not set to source >2018-06-26 09:22:32,418 INFO: (0034 / 0081) >2018-06-26 09:22:32,422 INFO: puppet-nssdb install type not set to source >2018-06-26 09:22:32,423 INFO: (0035 / 0081) >2018-06-26 09:22:32,426 INFO: puppet-opendaylight install type not set to 
source >2018-06-26 09:22:32,427 INFO: (0036 / 0081) >2018-06-26 09:22:32,431 INFO: puppet-ovn install type not set to source >2018-06-26 09:22:32,432 INFO: (0037 / 0081) >2018-06-26 09:22:32,436 INFO: puppet-panko install type not set to source >2018-06-26 09:22:32,437 INFO: (0038 / 0081) >2018-06-26 09:22:32,441 INFO: puppet-puppet install type not set to source >2018-06-26 09:22:32,441 INFO: (0039 / 0081) >2018-06-26 09:22:32,445 INFO: puppetlabs-rabbitmq install type not set to source >2018-06-26 09:22:32,446 INFO: (0040 / 0081) >2018-06-26 09:22:32,451 INFO: puppet-redis install type not set to source >2018-06-26 09:22:32,451 INFO: (0041 / 0081) >2018-06-26 09:22:32,455 INFO: puppetlabs-rsync install type not set to source >2018-06-26 09:22:32,456 INFO: (0042 / 0081) >2018-06-26 09:22:32,460 INFO: puppet-sahara install type not set to source >2018-06-26 09:22:32,461 INFO: (0043 / 0081) >2018-06-26 09:22:32,465 INFO: sensu-puppet install type not set to source >2018-06-26 09:22:32,466 INFO: (0044 / 0081) >2018-06-26 09:22:32,470 INFO: puppet-tacker install type not set to source >2018-06-26 09:22:32,470 INFO: (0045 / 0081) >2018-06-26 09:22:32,474 INFO: puppet-trove install type not set to source >2018-06-26 09:22:32,475 INFO: (0046 / 0081) >2018-06-26 09:22:32,479 INFO: puppet-ssh install type not set to source >2018-06-26 09:22:32,480 INFO: (0047 / 0081) >2018-06-26 09:22:32,484 INFO: puppet-staging install type not set to source >2018-06-26 09:22:32,484 INFO: (0048 / 0081) >2018-06-26 09:22:32,489 INFO: puppetlabs-stdlib install type not set to source >2018-06-26 09:22:32,489 INFO: (0049 / 0081) >2018-06-26 09:22:32,493 INFO: puppet-swift install type not set to source >2018-06-26 09:22:32,494 INFO: (0050 / 0081) >2018-06-26 09:22:32,498 INFO: puppetlabs-sysctl install type not set to source >2018-06-26 09:22:32,498 INFO: (0051 / 0081) >2018-06-26 09:22:32,502 INFO: puppet-timezone install type not set to source >2018-06-26 09:22:32,503 INFO: (0052 / 0081) 
2018-06-26 09:22:32,507 INFO: puppet-uchiwa install type not set to source
2018-06-26 09:22:32,508 INFO: (0053 / 0081)
2018-06-26 09:22:32,512 INFO: puppetlabs-vcsrepo install type not set to source
2018-06-26 09:22:32,513 INFO: (0054 / 0081)
2018-06-26 09:22:32,517 INFO: puppet-vlan install type not set to source
2018-06-26 09:22:32,518 INFO: (0055 / 0081)
2018-06-26 09:22:32,522 INFO: puppet-vswitch install type not set to source
2018-06-26 09:22:32,522 INFO: (0056 / 0081)
2018-06-26 09:22:32,526 INFO: puppetlabs-xinetd install type not set to source
2018-06-26 09:22:32,527 INFO: (0057 / 0081)
2018-06-26 09:22:32,531 INFO: puppet-zookeeper install type not set to source
2018-06-26 09:22:32,532 INFO: (0058 / 0081)
2018-06-26 09:22:32,536 INFO: puppet-openstacklib install type not set to source
2018-06-26 09:22:32,537 INFO: (0059 / 0081)
2018-06-26 09:22:32,541 INFO: puppet-module-keepalived install type not set to source
2018-06-26 09:22:32,541 INFO: (0060 / 0081)
2018-06-26 09:22:32,546 INFO: puppetlabs-ntp install type not set to source
2018-06-26 09:22:32,546 INFO: (0061 / 0081)
2018-06-26 09:22:32,550 INFO: puppet-snmp install type not set to source
2018-06-26 09:22:32,551 INFO: (0062 / 0081)
2018-06-26 09:22:32,555 INFO: puppet-tripleo install type not set to source
2018-06-26 09:22:32,556 INFO: (0063 / 0081)
2018-06-26 09:22:32,560 INFO: puppet-ironic install type not set to source
2018-06-26 09:22:32,561 INFO: (0064 / 0081)
2018-06-26 09:22:32,565 INFO: puppet-ipaclient install type not set to source
2018-06-26 09:22:32,566 INFO: (0065 / 0081)
2018-06-26 09:22:32,570 INFO: puppetlabs-corosync install type not set to source
2018-06-26 09:22:32,571 INFO: (0066 / 0081)
2018-06-26 09:22:32,574 INFO: puppet-pacemaker install type not set to source
2018-06-26 09:22:32,575 INFO: (0067 / 0081)
2018-06-26 09:22:32,579 INFO: puppet_aviator install type not set to source
2018-06-26 09:22:32,580 INFO: (0068 / 0081)
2018-06-26 09:22:32,584 INFO: puppet-openstack_extras install type not set to source
2018-06-26 09:22:32,585 INFO: (0069 / 0081)
2018-06-26 09:22:32,589 INFO: konstantin-fluentd install type not set to source
2018-06-26 09:22:32,590 INFO: (0070 / 0081)
2018-06-26 09:22:32,594 INFO: puppet-elasticsearch install type not set to source
2018-06-26 09:22:32,595 INFO: (0071 / 0081)
2018-06-26 09:22:32,599 INFO: puppet-kibana3 install type not set to source
2018-06-26 09:22:32,599 INFO: (0072 / 0081)
2018-06-26 09:22:32,604 INFO: puppetlabs-git install type not set to source
2018-06-26 09:22:32,604 INFO: (0073 / 0081)
2018-06-26 09:22:32,608 INFO: puppet-datacat install type not set to source
2018-06-26 09:22:32,609 INFO: (0074 / 0081)
2018-06-26 09:22:32,613 INFO: puppet-kmod install type not set to source
2018-06-26 09:22:32,614 INFO: (0075 / 0081)
2018-06-26 09:22:32,618 INFO: puppet-zaqar install type not set to source
2018-06-26 09:22:32,619 INFO: (0076 / 0081)
2018-06-26 09:22:32,623 INFO: puppet-ec2api install type not set to source
2018-06-26 09:22:32,624 INFO: (0077 / 0081)
2018-06-26 09:22:32,628 INFO: puppet-qdr install type not set to source
2018-06-26 09:22:32,628 INFO: (0078 / 0081)
2018-06-26 09:22:32,632 INFO: puppet-systemd install type not set to source
2018-06-26 09:22:32,633 INFO: (0079 / 0081)
2018-06-26 09:22:32,637 INFO: puppet-etcd install type not set to source
2018-06-26 09:22:32,638 INFO: (0080 / 0081)
2018-06-26 09:22:32,642 INFO: puppet-veritas_hyperscale install type not set to source
2018-06-26 09:22:32,643 INFO: (0081 / 0081)
2018-06-26 09:22:32,647 INFO: puppet-ptp install type not set to source
2018-06-26 09:22:32,649 INFO: dib-run-parts 98-source-repositories completed
2018-06-26 09:22:32,649 INFO: dib-run-parts Running /tmp/tmpw92w30/extra-data.d/99-enable-install-types
2018-06-26 09:22:32,652 INFO: + set -eu
2018-06-26 09:22:32,652 INFO: + set -o pipefail
2018-06-26 09:22:32,652 INFO: + declare -a SPECIFIED_ELEMS
2018-06-26 09:22:32,652 INFO: + SPECIFIED_ELEMS[0]=
2018-06-26 09:22:32,652 INFO: + PREFIX=DIB_INSTALLTYPE_
2018-06-26 09:22:32,652 INFO: ++ env
2018-06-26 09:22:32,652 INFO: ++ grep '^DIB_INSTALLTYPE_'
2018-06-26 09:22:32,653 INFO: ++ cut -d= -f1
2018-06-26 09:22:32,656 INFO: ++ echo ''
2018-06-26 09:22:32,656 INFO: + INSTALL_TYPE_VARS=
2018-06-26 09:22:32,656 INFO: ++ find /tmp/tmpw92w30/install.d -maxdepth 1 -name '*-package-install' -type d
2018-06-26 09:22:32,658 INFO: + default_install_type_dirs=/tmp/tmpw92w30/install.d/puppet-modules-package-install
2018-06-26 09:22:32,658 INFO: + for _install_dir in '$default_install_type_dirs'
2018-06-26 09:22:32,658 INFO: + SUFFIX=-package-install
2018-06-26 09:22:32,658 INFO: ++ basename /tmp/tmpw92w30/install.d/puppet-modules-package-install
2018-06-26 09:22:32,659 INFO: + _install_dir=puppet-modules-package-install
2018-06-26 09:22:32,659 INFO: + INSTALLDIRPREFIX=puppet-modules
2018-06-26 09:22:32,659 INFO: + found=0
2018-06-26 09:22:32,659 INFO: + '[' 0 = 0 ']'
2018-06-26 09:22:32,659 INFO: + pushd /tmp/tmpw92w30/install.d
2018-06-26 09:22:32,659 INFO: /tmp/tmpw92w30/install.d /home/sudheer
2018-06-26 09:22:32,659 INFO: + ln -sf puppet-modules-package-install/75-puppet-modules-package .
2018-06-26 09:22:32,660 INFO: + popd
2018-06-26 09:22:32,660 INFO: /home/sudheer
2018-06-26 09:22:32,662 INFO: dib-run-parts 99-enable-install-types completed
2018-06-26 09:22:32,662 INFO: dib-run-parts ----------------------- PROFILING -----------------------
2018-06-26 09:22:32,662 INFO: dib-run-parts
2018-06-26 09:22:32,663 INFO: dib-run-parts Target: extra-data.d
2018-06-26 09:22:32,664 INFO: dib-run-parts
2018-06-26 09:22:32,664 INFO: dib-run-parts Script Seconds
2018-06-26 09:22:32,664 INFO: dib-run-parts --------------------------------------- ----------
2018-06-26 09:22:32,664 INFO: dib-run-parts
2018-06-26 09:22:32,670 INFO: dib-run-parts 10-install-git 4.518
2018-06-26 09:22:32,674 INFO: dib-run-parts 20-manifest-dir 0.022
2018-06-26 09:22:32,679 INFO: dib-run-parts 75-inject-element-manifest 0.031
2018-06-26 09:22:32,683 INFO: dib-run-parts 98-source-repositories 0.413
2018-06-26 09:22:32,688 INFO: dib-run-parts 99-enable-install-types 0.012
2018-06-26 09:22:32,690 INFO: dib-run-parts
2018-06-26 09:22:32,690 INFO: dib-run-parts --------------------- END PROFILING ---------------------
2018-06-26 09:22:32,690 INFO: INFO: 2018-06-26 09:22:32,690 -- ############### End stdout/stderr logging ###############
2018-06-26 09:22:32,691 INFO: INFO: 2018-06-26 09:22:32,690 -- Running hook pre-install
2018-06-26 09:22:32,691 INFO: INFO: 2018-06-26 09:22:32,691 -- Skipping hook pre-install, the hook directory doesn't exist at /tmp/tmpw92w30/pre-install.d
2018-06-26 09:22:32,691 INFO: INFO: 2018-06-26 09:22:32,691 -- Running hook install
2018-06-26 09:22:32,691 INFO: INFO: 2018-06-26 09:22:32,691 -- ############### Begin stdout/stderr logging ###############
2018-06-26 09:22:32,703 INFO: dib-run-parts Sourcing environment file /tmp/tmpw92w30/install.d/../environment.d/00-dib-v2-env
2018-06-26 09:22:32,705 INFO: + source /tmp/tmpw92w30/install.d/../environment.d/00-dib-v2-env
2018-06-26 09:22:32,705 INFO: ++ export 'IMAGE_ELEMENT=undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs'
2018-06-26 09:22:32,705 INFO: ++ IMAGE_ELEMENT='undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs'
2018-06-26 09:22:32,705 INFO: ++ export 'IMAGE_ELEMENT_YAML={cache-url: /usr/share/diskimage-builder/elements/cache-url, dib-python: /usr/share/diskimage-builder/elements/dib-python,
2018-06-26 09:22:32,706 INFO: element-manifest: /usr/share/diskimage-builder/elements/element-manifest, enable-packages-install: /usr/share/tripleo-image-elements/enable-packages-install,
2018-06-26 09:22:32,706 INFO: hiera: /usr/share/tripleo-puppet-elements/hiera, install-bin: /usr/share/diskimage-builder/elements/install-bin,
2018-06-26 09:22:32,706 INFO: install-types: /usr/share/diskimage-builder/elements/install-types, manifests: /usr/share/diskimage-builder/elements/manifests,
2018-06-26 09:22:32,706 INFO: os-apply-config: /usr/share/tripleo-image-elements/os-apply-config, os-refresh-config: /usr/share/tripleo-image-elements/os-refresh-config,
2018-06-26 09:22:32,706 INFO: package-installs: /usr/share/diskimage-builder/elements/package-installs, pip-and-virtualenv: /usr/share/diskimage-builder/elements/pip-and-virtualenv,
2018-06-26 09:22:32,707 INFO: pip-manifest: /usr/share/tripleo-image-elements/pip-manifest, pkg-map: /usr/share/diskimage-builder/elements/pkg-map,
2018-06-26 09:22:32,707 INFO: puppet: /usr/share/tripleo-puppet-elements/puppet, puppet-modules: /usr/share/tripleo-puppet-elements/puppet-modules,
2018-06-26 09:22:32,707 INFO: puppet-stack-config: /usr/share/instack-undercloud/puppet-stack-config, source-repositories: /usr/share/diskimage-builder/elements/source-repositories,
2018-06-26 09:22:32,707 INFO: undercloud-install: /usr/share/instack-undercloud/undercloud-install}
2018-06-26 09:22:32,707 INFO: '
2018-06-26 09:22:32,707 INFO: ++ IMAGE_ELEMENT_YAML='{cache-url: /usr/share/diskimage-builder/elements/cache-url, dib-python: /usr/share/diskimage-builder/elements/dib-python,
2018-06-26 09:22:32,708 INFO: element-manifest: /usr/share/diskimage-builder/elements/element-manifest, enable-packages-install: /usr/share/tripleo-image-elements/enable-packages-install,
2018-06-26 09:22:32,708 INFO: hiera: /usr/share/tripleo-puppet-elements/hiera, install-bin: /usr/share/diskimage-builder/elements/install-bin,
2018-06-26 09:22:32,708 INFO: install-types: /usr/share/diskimage-builder/elements/install-types, manifests: /usr/share/diskimage-builder/elements/manifests,
2018-06-26 09:22:32,708 INFO: os-apply-config: /usr/share/tripleo-image-elements/os-apply-config, os-refresh-config: /usr/share/tripleo-image-elements/os-refresh-config,
2018-06-26 09:22:32,708 INFO: package-installs: /usr/share/diskimage-builder/elements/package-installs, pip-and-virtualenv: /usr/share/diskimage-builder/elements/pip-and-virtualenv,
2018-06-26 09:22:32,708 INFO: pip-manifest: /usr/share/tripleo-image-elements/pip-manifest, pkg-map: /usr/share/diskimage-builder/elements/pkg-map,
2018-06-26 09:22:32,709 INFO: puppet: /usr/share/tripleo-puppet-elements/puppet, puppet-modules: /usr/share/tripleo-puppet-elements/puppet-modules,
2018-06-26 09:22:32,709 INFO: puppet-stack-config: /usr/share/instack-undercloud/puppet-stack-config, source-repositories: /usr/share/diskimage-builder/elements/source-repositories,
2018-06-26 09:22:32,709 INFO: undercloud-install: /usr/share/instack-undercloud/undercloud-install}
2018-06-26 09:22:32,709 INFO: '
2018-06-26 09:22:32,709 INFO: ++ export -f get_image_element_array
2018-06-26 09:22:32,709 INFO: + set +o xtrace
2018-06-26 09:22:32,709 INFO: dib-run-parts Sourcing environment file /tmp/tmpw92w30/install.d/../environment.d/01-export-install-types.bash
2018-06-26 09:22:32,710 INFO: + source /tmp/tmpw92w30/install.d/../environment.d/01-export-install-types.bash
2018-06-26 09:22:32,710 INFO: ++ export DIB_DEFAULT_INSTALLTYPE=package
2018-06-26 09:22:32,710 INFO: ++ DIB_DEFAULT_INSTALLTYPE=package
2018-06-26 09:22:32,710 INFO: + set +o xtrace
2018-06-26 09:22:32,710 INFO: dib-run-parts Sourcing environment file /tmp/tmpw92w30/install.d/../environment.d/01-puppet-module-pins.sh
2018-06-26 09:22:32,710 INFO: + source /tmp/tmpw92w30/install.d/../environment.d/01-puppet-module-pins.sh
2018-06-26 09:22:32,710 INFO: ++ export DIB_REPOREF_puppetlabs_ntp=4.2.x
2018-06-26 09:22:32,710 INFO: ++ DIB_REPOREF_puppetlabs_ntp=4.2.x
2018-06-26 09:22:32,711 INFO: + set +o xtrace
2018-06-26 09:22:32,711 INFO: dib-run-parts Sourcing environment file /tmp/tmpw92w30/install.d/../environment.d/02-puppet-modules-install-types.sh
2018-06-26 09:22:32,711 INFO: + source /tmp/tmpw92w30/install.d/../environment.d/02-puppet-modules-install-types.sh
2018-06-26 09:22:32,711 INFO: ++ DIB_DEFAULT_INSTALLTYPE=package
2018-06-26 09:22:32,711 INFO: ++ DIB_INSTALLTYPE_puppet_modules=package
2018-06-26 09:22:32,711 INFO: ++ '[' package = source ']'
2018-06-26 09:22:32,711 INFO: + set +o xtrace
2018-06-26 09:22:32,712 INFO: dib-run-parts Sourcing environment file /tmp/tmpw92w30/install.d/../environment.d/10-os-apply-config-venv-dir.bash
2018-06-26 09:22:32,713 INFO: + source /tmp/tmpw92w30/install.d/../environment.d/10-os-apply-config-venv-dir.bash
2018-06-26 09:22:32,713 INFO: ++ '[' -z '' ']'
2018-06-26 09:22:32,713 INFO: ++ export OS_APPLY_CONFIG_VENV_DIR=/opt/stack/venvs/os-apply-config
2018-06-26 09:22:32,714 INFO: ++ OS_APPLY_CONFIG_VENV_DIR=/opt/stack/venvs/os-apply-config
2018-06-26 09:22:32,714 INFO: + set +o xtrace
2018-06-26 09:22:32,714 INFO: dib-run-parts Sourcing environment file /tmp/tmpw92w30/install.d/../environment.d/14-manifests
2018-06-26 09:22:32,715 INFO: + source /tmp/tmpw92w30/install.d/../environment.d/14-manifests
2018-06-26 09:22:32,715 INFO: ++ export DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests
2018-06-26 09:22:32,716 INFO: ++ DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests
2018-06-26 09:22:32,716 INFO: ++ export DIB_MANIFEST_SAVE_DIR=instack.d/
2018-06-26 09:22:32,716 INFO: ++ DIB_MANIFEST_SAVE_DIR=instack.d/
2018-06-26 09:22:32,716 INFO: + set +o xtrace
2018-06-26 09:22:32,716 INFO: dib-run-parts Running /tmp/tmpw92w30/install.d/02-puppet-stack-config
2018-06-26 09:22:33,316 INFO: dib-run-parts 02-puppet-stack-config completed
2018-06-26 09:22:33,316 INFO: dib-run-parts Running /tmp/tmpw92w30/install.d/10-hiera-yaml-symlink
2018-06-26 09:22:33,319 INFO: + set -o pipefail
2018-06-26 09:22:33,319 INFO: + ln -f -s /etc/puppet/hiera.yaml /etc/hiera.yaml
2018-06-26 09:22:33,322 INFO: dib-run-parts 10-hiera-yaml-symlink completed
2018-06-26 09:22:33,322 INFO: dib-run-parts Running /tmp/tmpw92w30/install.d/10-puppet-stack-config-puppet-module
2018-06-26 09:22:33,325 INFO: + set -o pipefail
2018-06-26 09:22:33,325 INFO: + mkdir -p /etc/puppet/manifests
2018-06-26 09:22:33,326 INFO: ++ dirname /tmp/tmpw92w30/install.d/10-puppet-stack-config-puppet-module
2018-06-26 09:22:33,327 INFO: + cp /tmp/tmpw92w30/install.d/../puppet-stack-config.pp /etc/puppet/manifests/puppet-stack-config.pp
2018-06-26 09:22:33,331 INFO: dib-run-parts 10-puppet-stack-config-puppet-module completed
2018-06-26 09:22:33,332 INFO: dib-run-parts Running /tmp/tmpw92w30/install.d/11-create-template-root
2018-06-26 09:22:33,335 INFO: ++ os-apply-config --print-templates
2018-06-26 09:22:33,457 INFO: + TEMPLATE_ROOT=/usr/libexec/os-apply-config/templates
2018-06-26 09:22:33,457 INFO: + mkdir -p /usr/libexec/os-apply-config/templates
2018-06-26 09:22:33,460 INFO: dib-run-parts 11-create-template-root completed
2018-06-26 09:22:33,460 INFO: dib-run-parts Running /tmp/tmpw92w30/install.d/11-hiera-orc-install
2018-06-26 09:22:33,463 INFO: + set -o pipefail
2018-06-26 09:22:33,463 INFO: + mkdir -p /usr/libexec/os-refresh-config/configure.d/
2018-06-26 09:22:33,478 INFO: ++ dirname /tmp/tmpw92w30/install.d/11-hiera-orc-install
2018-06-26 09:22:33,479 INFO: + install -m 0755 -o root -g root /tmp/tmpw92w30/install.d/../10-hiera-disable /usr/libexec/os-refresh-config/configure.d/10-hiera-disable
2018-06-26 09:22:33,485 INFO: ++ dirname /tmp/tmpw92w30/install.d/11-hiera-orc-install
2018-06-26 09:22:33,486 INFO: + install -m 0755 -o root -g root /tmp/tmpw92w30/install.d/../40-hiera-datafiles /usr/libexec/os-refresh-config/configure.d/40-hiera-datafiles
2018-06-26 09:22:33,492 INFO: dib-run-parts 11-hiera-orc-install completed
2018-06-26 09:22:33,492 INFO: dib-run-parts Running /tmp/tmpw92w30/install.d/75-puppet-modules-package
2018-06-26 09:22:33,495 INFO: + read
2018-06-26 09:22:33,495 INFO: + find /opt/stack/puppet-modules/ -mindepth 1
2018-06-26 09:22:33,499 INFO: + ln -f -s /usr/share/openstack-puppet/modules/aodh /usr/share/openstack-puppet/modules/apache /usr/share/openstack-puppet/modules/archive /usr/share/openstack-puppet/modules/auditd /usr/share/openstack-puppet/modules/barbican /usr/share/openstack-puppet/modules/cassandra /usr/share/openstack-puppet/modules/ceilometer /usr/share/openstack-puppet/modules/ceph /usr/share/openstack-puppet/modules/certmonger /usr/share/openstack-puppet/modules/cinder /usr/share/openstack-puppet/modules/collectd /usr/share/openstack-puppet/modules/concat /usr/share/openstack-puppet/modules/contrail /usr/share/openstack-puppet/modules/corosync /usr/share/openstack-puppet/modules/datacat /usr/share/openstack-puppet/modules/designate /usr/share/openstack-puppet/modules/dns /usr/share/openstack-puppet/modules/ec2api /usr/share/openstack-puppet/modules/elasticsearch /usr/share/openstack-puppet/modules/fdio /usr/share/openstack-puppet/modules/firewall /usr/share/openstack-puppet/modules/fluentd /usr/share/openstack-puppet/modules/git /usr/share/openstack-puppet/modules/glance /usr/share/openstack-puppet/modules/gnocchi /usr/share/openstack-puppet/modules/haproxy /usr/share/openstack-puppet/modules/heat /usr/share/openstack-puppet/modules/horizon /usr/share/openstack-puppet/modules/inifile /usr/share/openstack-puppet/modules/ipaclient /usr/share/openstack-puppet/modules/ironic /usr/share/openstack-puppet/modules/java /usr/share/openstack-puppet/modules/kafka /usr/share/openstack-puppet/modules/keepalived /usr/share/openstack-puppet/modules/keystone /usr/share/openstack-puppet/modules/kibana3 /usr/share/openstack-puppet/modules/kmod /usr/share/openstack-puppet/modules/manila /usr/share/openstack-puppet/modules/memcached /usr/share/openstack-puppet/modules/midonet /usr/share/openstack-puppet/modules/mistral /usr/share/openstack-puppet/modules/module-data /usr/share/openstack-puppet/modules/mysql /usr/share/openstack-puppet/modules/n1k_vsm /usr/share/openstack-puppet/modules/neutron /usr/share/openstack-puppet/modules/nova /usr/share/openstack-puppet/modules/nssdb /usr/share/openstack-puppet/modules/ntp /usr/share/openstack-puppet/modules/octavia /usr/share/openstack-puppet/modules/opendaylight /usr/share/openstack-puppet/modules/openstack_extras /usr/share/openstack-puppet/modules/openstacklib /usr/share/openstack-puppet/modules/oslo /usr/share/openstack-puppet/modules/ovn /usr/share/openstack-puppet/modules/pacemaker /usr/share/openstack-puppet/modules/panko /usr/share/openstack-puppet/modules/rabbitmq /usr/share/openstack-puppet/modules/redis /usr/share/openstack-puppet/modules/remote /usr/share/openstack-puppet/modules/rsync /usr/share/openstack-puppet/modules/sahara /usr/share/openstack-puppet/modules/sensu /usr/share/openstack-puppet/modules/snmp /usr/share/openstack-puppet/modules/ssh /usr/share/openstack-puppet/modules/staging /usr/share/openstack-puppet/modules/stdlib /usr/share/openstack-puppet/modules/swift /usr/share/openstack-puppet/modules/sysctl /usr/share/openstack-puppet/modules/systemd /usr/share/openstack-puppet/modules/timezone /usr/share/openstack-puppet/modules/tomcat /usr/share/openstack-puppet/modules/tripleo /usr/share/openstack-puppet/modules/trove /usr/share/openstack-puppet/modules/uchiwa /usr/share/openstack-puppet/modules/vcsrepo /usr/share/openstack-puppet/modules/veritas_hyperscale /usr/share/openstack-puppet/modules/vswitch /usr/share/openstack-puppet/modules/xinetd /usr/share/openstack-puppet/modules/zaqar /usr/share/openstack-puppet/modules/zookeeper /etc/puppet/modules/
2018-06-26 09:22:33,516 INFO: dib-run-parts 75-puppet-modules-package completed
2018-06-26 09:22:33,516 INFO: dib-run-parts Running /tmp/tmpw92w30/install.d/99-install-config-templates
2018-06-26 09:22:33,520 INFO: ++ os-apply-config --print-templates
2018-06-26 09:22:33,622 INFO: + TEMPLATE_ROOT=/usr/libexec/os-apply-config/templates
2018-06-26 09:22:33,622 INFO: ++ dirname /tmp/tmpw92w30/install.d/99-install-config-templates
2018-06-26 09:22:33,623 INFO: + TEMPLATE_SOURCE=/tmp/tmpw92w30/install.d/../os-apply-config
2018-06-26 09:22:33,623 INFO: + mkdir -p /usr/libexec/os-apply-config/templates
2018-06-26 09:22:33,624 INFO: + '[' -d /tmp/tmpw92w30/install.d/../os-apply-config ']'
2018-06-26 09:22:33,624 INFO: + rsync '--exclude=.*.swp' -Cr /tmp/tmpw92w30/install.d/../os-apply-config/ /usr/libexec/os-apply-config/templates/
2018-06-26 09:22:33,687 INFO: dib-run-parts 99-install-config-templates completed
2018-06-26 09:22:33,687 INFO: dib-run-parts Running /tmp/tmpw92w30/install.d/99-os-refresh-config-install-scripts
2018-06-26 09:22:33,691 INFO: ++ os-refresh-config --print-base
2018-06-26 09:22:33,750 INFO: + SCRIPT_BASE=/usr/libexec/os-refresh-config
2018-06-26 09:22:33,750 INFO: ++ dirname /tmp/tmpw92w30/install.d/99-os-refresh-config-install-scripts
2018-06-26 09:22:33,751 INFO: + SCRIPT_SOURCE=/tmp/tmpw92w30/install.d/../os-refresh-config
2018-06-26 09:22:33,751 INFO: + rsync -r /tmp/tmpw92w30/install.d/../os-refresh-config/ /usr/libexec/os-refresh-config/
2018-06-26 09:22:33,757 INFO: dib-run-parts 99-os-refresh-config-install-scripts completed
2018-06-26 09:22:33,757 INFO: dib-run-parts ----------------------- PROFILING -----------------------
2018-06-26 09:22:33,757 INFO: dib-run-parts
2018-06-26 09:22:33,758 INFO: dib-run-parts Target: install.d
2018-06-26 09:22:33,758 INFO: dib-run-parts
2018-06-26 09:22:33,759 INFO: dib-run-parts Script Seconds
2018-06-26 09:22:33,759 INFO: dib-run-parts --------------------------------------- ----------
2018-06-26 09:22:33,759 INFO: dib-run-parts
2018-06-26 09:22:33,765 INFO: dib-run-parts 02-puppet-stack-config 0.599
2018-06-26 09:22:33,770 INFO: dib-run-parts 10-hiera-yaml-symlink 0.005
2018-06-26 09:22:33,775 INFO: dib-run-parts 10-puppet-stack-config-puppet-module 0.009
2018-06-26 09:22:33,780 INFO: dib-run-parts 11-create-template-root 0.127
2018-06-26 09:22:33,784 INFO: dib-run-parts 11-hiera-orc-install 0.030
2018-06-26 09:22:33,789 INFO: dib-run-parts 75-puppet-modules-package 0.023
2018-06-26 09:22:33,794 INFO: dib-run-parts 99-install-config-templates 0.170
2018-06-26 09:22:33,799 INFO: dib-run-parts 99-os-refresh-config-install-scripts 0.068
2018-06-26 09:22:33,801 INFO: dib-run-parts
2018-06-26 09:22:33,801 INFO: dib-run-parts --------------------- END PROFILING ---------------------
2018-06-26 09:22:33,801 INFO: INFO: 2018-06-26 09:22:33,801 -- ############### End stdout/stderr logging ###############
2018-06-26 09:22:33,801 INFO: INFO: 2018-06-26 09:22:33,801 -- Running hook post-install
2018-06-26 09:22:33,802 INFO: INFO: 2018-06-26 09:22:33,801 -- Skipping hook post-install, the hook directory doesn't exist at /tmp/tmpw92w30/post-install.d
2018-06-26 09:22:33,804 INFO: INFO: 2018-06-26 09:22:33,804 -- Ending run of instack.
>2018-06-26 09:22:33,815 INFO: Instack completed successfully >2018-06-26 09:22:33,815 INFO: Running os-refresh-config >2018-06-26 09:22:33,873 INFO: [2018-06-26 09:22:33,873] (os-refresh-config) [INFO] Starting phase configure >2018-06-26 09:22:33,888 INFO: dib-run-parts Tue Jun 26 09:22:33 IST 2018 Running /usr/libexec/os-refresh-config/configure.d/10-hiera-disable >2018-06-26 09:22:33,890 INFO: + '[' -f /etc/puppet/hiera.yaml ']' >2018-06-26 09:22:33,892 INFO: dib-run-parts Tue Jun 26 09:22:33 IST 2018 10-hiera-disable completed >2018-06-26 09:22:33,893 INFO: dib-run-parts Tue Jun 26 09:22:33 IST 2018 Running /usr/libexec/os-refresh-config/configure.d/20-os-apply-config >2018-06-26 09:22:33,990 INFO: [2018/06/26 09:22:33 AM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json >2018-06-26 09:22:35,742 INFO: [2018/06/26 09:22:35 AM] [INFO] writing /etc/os-net-config/config.json >2018-06-26 09:22:35,743 INFO: [2018/06/26 09:22:35 AM] [INFO] writing /root/stackrc >2018-06-26 09:22:35,743 INFO: [2018/06/26 09:22:35 AM] [INFO] writing /var/run/heat-config/heat-config >2018-06-26 09:22:35,744 INFO: [2018/06/26 09:22:35 AM] [INFO] writing /etc/puppet/hiera.yaml >2018-06-26 09:22:35,745 INFO: [2018/06/26 09:22:35 AM] [INFO] writing /var/opt/undercloud-stack/masquerade >2018-06-26 09:22:35,756 INFO: [2018/06/26 09:22:35 AM] [INFO] writing /etc/puppet/hieradata/RedHat.yaml >2018-06-26 09:22:35,757 INFO: [2018/06/26 09:22:35 AM] [INFO] writing /etc/puppet/hieradata/CentOS.yaml >2018-06-26 09:22:35,757 INFO: [2018/06/26 09:22:35 AM] [INFO] writing /root/tripleo-undercloud-passwords >2018-06-26 09:22:35,757 INFO: [2018/06/26 09:22:35 AM] [INFO] writing /etc/os-collect-config.conf >2018-06-26 09:22:35,758 INFO: [2018/06/26 09:22:35 AM] [INFO] success >2018-06-26 09:22:35,767 INFO: dib-run-parts Tue Jun 26 09:22:35 IST 2018 20-os-apply-config completed >2018-06-26 09:22:35,769 INFO: dib-run-parts Tue Jun 26 09:22:35 IST 2018 Running 
/usr/libexec/os-refresh-config/configure.d/30-reload-keepalived >2018-06-26 09:22:35,771 INFO: + systemctl is-enabled keepalived >2018-06-26 09:22:36,212 INFO: disabled >2018-06-26 09:22:36,215 INFO: dib-run-parts Tue Jun 26 09:22:36 IST 2018 30-reload-keepalived completed >2018-06-26 09:22:36,216 INFO: dib-run-parts Tue Jun 26 09:22:36 IST 2018 Running /usr/libexec/os-refresh-config/configure.d/40-hiera-datafiles >2018-06-26 09:22:36,322 INFO: [2018/06/26 09:22:36 AM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json >2018-06-26 09:22:36,504 INFO: dib-run-parts Tue Jun 26 09:22:36 IST 2018 40-hiera-datafiles completed >2018-06-26 09:22:36,505 INFO: dib-run-parts Tue Jun 26 09:22:36 IST 2018 Running /usr/libexec/os-refresh-config/configure.d/50-puppet-stack-config >2018-06-26 09:22:36,507 INFO: + set -o pipefail >2018-06-26 09:22:36,508 INFO: + puppet_apply puppet apply --summarize --detailed-exitcodes /etc/puppet/manifests/puppet-stack-config.pp >2018-06-26 09:22:36,508 INFO: + set +e >2018-06-26 09:22:36,508 INFO: + puppet apply --summarize --detailed-exitcodes /etc/puppet/manifests/puppet-stack-config.pp >2018-06-26 09:22:46,327 INFO: [mNotice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend[0m >2018-06-26 09:22:46,505 INFO: [1;33mWarning: ModuleLoader: module 'openstacklib' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-06-26 09:22:46,506 INFO: (file & line not available)[0m >2018-06-26 09:22:46,978 INFO: [mNotice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend[0m >2018-06-26 09:22:47,065 INFO: [1;33mWarning: This method is deprecated, please use the stdlib validate_legacy function, >2018-06-26 09:22:47,066 INFO: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at ["/etc/puppet/modules/ntp/manifests/init.pp", 54]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29] >2018-06-26 09:22:47,066 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')[0m >2018-06-26 09:22:47,070 INFO: [1;33mWarning: This method is deprecated, please use the stdlib validate_legacy function, >2018-06-26 09:22:47,070 INFO: with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 55]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29] >2018-06-26 09:22:47,070 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')[0m >2018-06-26 09:22:47,089 INFO: [1;33mWarning: This method is deprecated, please use the stdlib validate_legacy function, >2018-06-26 09:22:47,089 INFO: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 56]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29] >2018-06-26 09:22:47,089 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')[0m >2018-06-26 09:22:47,147 INFO: [1;33mWarning: This method is deprecated, please use the stdlib validate_legacy function, >2018-06-26 09:22:47,147 INFO: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 66]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29] >2018-06-26 09:22:47,148 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')[0m >2018-06-26 09:22:47,151 INFO: [1;33mWarning: This method is deprecated, please use the stdlib validate_legacy function, >2018-06-26 09:22:47,151 INFO: with Pattern[]. 
There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 68]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29] >2018-06-26 09:22:47,151 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')[0m >2018-06-26 09:22:47,164 INFO: [1;33mWarning: This method is deprecated, please use the stdlib validate_legacy function, >2018-06-26 09:22:47,165 INFO: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 89]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29] >2018-06-26 09:22:47,165 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')[0m >2018-06-26 09:22:47,546 INFO: [1;33mWarning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/rabbitmq/manifests/install/rabbitmqadmin.pp", 37]:["/etc/puppet/modules/rabbitmq/manifests/init.pp", 316] >2018-06-26 09:22:47,546 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')[0m >2018-06-26 09:22:47,762 INFO: [mNotice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.[0m >2018-06-26 09:22:48,009 INFO: [1;33mWarning: This method is deprecated, please use the stdlib validate_legacy function, >2018-06-26 09:22:48,009 INFO: with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at ["/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp", 97]:["/etc/puppet/manifests/puppet-stack-config.pp", 91]
>2018-06-26 09:22:48,009 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')
>2018-06-26 09:22:48,072 INFO: Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:22:48,073 INFO: (file & line not available)
>2018-06-26 09:22:48,360 INFO: Warning: ModuleLoader: module 'keystone' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:22:48,360 INFO: (file & line not available)
>2018-06-26 09:22:48,834 INFO: Warning: ModuleLoader: module 'glance' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:22:48,834 INFO: (file & line not available)
>2018-06-26 09:22:49,020 INFO: Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:22:49,020 INFO: (file & line not available)
>2018-06-26 09:22:49,157 INFO: Warning: Unknown variable: '::nova::db::mysql_api::setup_cell0'. at /etc/puppet/modules/nova/manifests/db/mysql.pp:53:28
>2018-06-26 09:22:49,195 INFO: Warning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:22:49,195 INFO: (file & line not available)
>2018-06-26 09:22:49,871 INFO: Warning: ModuleLoader: module 'heat' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:22:49,871 INFO: (file & line not available)
>2018-06-26 09:22:49,931 INFO: Warning: ModuleLoader: module 'ironic' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:22:49,931 INFO: (file & line not available)
>2018-06-26 09:22:50,076 INFO: Warning: ModuleLoader: module 'swift' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:22:50,077 INFO: (file & line not available)
>2018-06-26 09:22:50,409 INFO: Warning: Scope(Class[Keystone]): keystone::rabbit_host, keystone::rabbit_hosts, keystone::rabbit_password, keystone::rabbit_port, keystone::rabbit_userid and keystone::rabbit_virtual_host are deprecated. Please use keystone::default_transport_url instead.
>2018-06-26 09:22:51,857 INFO: Warning: Scope(Class[Glance::Notify::Rabbitmq]): glance::notify::rabbitmq::rabbit_host, glance::notify::rabbitmq::rabbit_hosts, glance::notify::rabbitmq::rabbit_password, glance::notify::rabbitmq::rabbit_port, glance::notify::rabbitmq::rabbit_userid and glance::notify::rabbitmq::rabbit_virtual_host are deprecated. Please use glance::notify::rabbitmq::default_transport_url instead.
>2018-06-26 09:22:51,929 INFO: Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release
>2018-06-26 09:22:51,929 INFO: Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release
>2018-06-26 09:22:52,224 INFO: Warning: ModuleLoader: module 'cinder' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:22:52,225 INFO: (file & line not available)
>2018-06-26 09:22:52,505 INFO: Warning: Unknown variable: 'until_complete_real'. at /etc/puppet/modules/nova/manifests/cron/archive_deleted_rows.pp:77:82
>2018-06-26 09:22:52,553 INFO: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/nova/manifests/scheduler/filter.pp", 140]:["/etc/puppet/manifests/puppet-stack-config.pp", 389]
>2018-06-26 09:22:52,553 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')
>2018-06-26 09:22:52,758 INFO: Warning: Scope(Class[Neutron]): neutron::rabbit_host, neutron::rabbit_hosts, neutron::rabbit_password, neutron::rabbit_port, neutron::rabbit_user, neutron::rabbit_virtual_host and neutron::rpc_backend are deprecated. Please use neutron::default_transport_url instead.
>2018-06-26 09:22:53,857 INFO: Warning: Unknown variable: 'methods_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:100:56
>2018-06-26 09:22:53,857 INFO: Warning: Unknown variable: 'incoming_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:101:56
>2018-06-26 09:22:53,857 INFO: Warning: Unknown variable: 'incoming_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:102:56
>2018-06-26 09:22:53,857 INFO: Warning: Unknown variable: 'outgoing_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:103:56
>2018-06-26 09:22:53,858 INFO: Warning: Unknown variable: 'outgoing_allow_headers_real'.
at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:104:56
>2018-06-26 09:22:53,917 INFO: Warning: Scope(Class[Swift::Storage::All]): The default port for the object storage server has changed from 6000 to 6200 and will be changed in a later release
>2018-06-26 09:22:53,917 INFO: Warning: Scope(Class[Swift::Storage::All]): The default port for the container storage server has changed from 6001 to 6201 and will be changed in a later release
>2018-06-26 09:22:53,917 INFO: Warning: Scope(Class[Swift::Storage::All]): The default port for the account storage server has changed from 6002 to 6202 and will be changed in a later release
>2018-06-26 09:22:54,338 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function,
>2018-06-26 09:22:54,338 INFO: with Stdlib::Compat::Integer. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/heat/manifests/wsgi/apache_api_cfn.pp", 125]:["/etc/puppet/manifests/puppet-stack-config.pp", 510]
>2018-06-26 09:22:54,338 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')
>2018-06-26 09:22:54,731 INFO: Warning: Unknown variable: '::ironic::conductor::swift_account'. at /etc/puppet/modules/ironic/manifests/glance.pp:117:30
>2018-06-26 09:22:54,731 INFO: Warning: Unknown variable: '::ironic::conductor::swift_temp_url_key'. at /etc/puppet/modules/ironic/manifests/glance.pp:118:35
>2018-06-26 09:22:54,732 INFO: Warning: Unknown variable: '::ironic::conductor::swift_temp_url_duration'. at /etc/puppet/modules/ironic/manifests/glance.pp:119:40
>2018-06-26 09:22:54,749 INFO: Warning: Unknown variable: '::ironic::api::neutron_url'. at /etc/puppet/modules/ironic/manifests/neutron.pp:58:29
>2018-06-26 09:22:55,661 INFO: Warning: ModuleLoader: module 'mistral' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:22:55,661 INFO: (file & line not available)
>2018-06-26 09:22:55,886 INFO: Warning: Unknown variable: '::mistral::database_idle_timeout'. at /etc/puppet/modules/mistral/manifests/db.pp:57:40
>2018-06-26 09:22:55,886 INFO: Warning: Unknown variable: '::mistral::database_min_pool_size'. at /etc/puppet/modules/mistral/manifests/db.pp:58:40
>2018-06-26 09:22:55,887 INFO: Warning: Unknown variable: '::mistral::database_max_pool_size'. at /etc/puppet/modules/mistral/manifests/db.pp:59:40
>2018-06-26 09:22:55,888 INFO: Warning: Unknown variable: '::mistral::database_max_retries'. at /etc/puppet/modules/mistral/manifests/db.pp:60:40
>2018-06-26 09:22:55,888 INFO: Warning: Unknown variable: '::mistral::database_retry_interval'. at /etc/puppet/modules/mistral/manifests/db.pp:61:40
>2018-06-26 09:22:55,889 INFO: Warning: Unknown variable: '::mistral::database_max_overflow'. at /etc/puppet/modules/mistral/manifests/db.pp:62:40
>2018-06-26 09:22:55,933 INFO: Warning: Scope(Class[Mistral]): mistral::rabbit_host, mistral::rabbit_hosts, mistral::rabbit_password, mistral::rabbit_port, mistral::rabbit_userid, mistral::rabbit_virtual_host and mistral::rpc_backend are deprecated. Please use mistral::default_transport_url instead.
>2018-06-26 09:22:56,148 INFO: Warning: ModuleLoader: module 'zaqar' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:22:56,148 INFO: (file & line not available)
>2018-06-26 09:22:57,370 INFO: Warning: ModuleLoader: module 'oslo' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:22:57,371 INFO: (file & line not available)
>2018-06-26 09:22:57,448 INFO: Warning: Scope(Oslo::Messaging::Rabbit[keystone_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url. Please use oslo::messaging::default::transport_url instead.
>2018-06-26 09:22:58,299 INFO: Warning: Scope(Oslo::Messaging::Rabbit[glance_api_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url. Please use oslo::messaging::default::transport_url instead.
>2018-06-26 09:22:58,309 INFO: Warning: Scope(Oslo::Messaging::Rabbit[glance_registry_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url. Please use oslo::messaging::default::transport_url instead.
>2018-06-26 09:22:58,499 INFO: Warning: Scope(Oslo::Messaging::Rabbit[neutron_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url. Please use oslo::messaging::default::transport_url instead.
>2018-06-26 09:22:58,541 INFO: Warning: Scope(Neutron::Plugins::Ml2::Type_driver[local]): local type_driver is useful only for single-box, because it provides no connectivity between hosts
>2018-06-26 09:22:59,031 INFO: Warning: Scope(Oslo::Messaging::Rabbit[mistral_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url.
Please use oslo::messaging::default::transport_url instead.
>2018-06-26 09:23:02,184 INFO: Notice: Compiled catalog for facebook.local.com in environment production in 16.19 seconds
>2018-06-26 09:23:22,683 INFO: Notice: /Stage[setup]/Vswitch::Ovs/Package[openvswitch]/ensure: created
>2018-06-26 09:23:23,267 INFO: Notice: /Stage[setup]/Vswitch::Ovs/Service[openvswitch]/ensure: ensure changed 'stopped' to 'running'
>2018-06-26 09:23:28,665 INFO: Notice: /Stage[setup]/Tripleo::Network::Os_net_config/Exec[os-net-config]/returns: executed successfully
>2018-06-26 09:23:28,700 INFO: Notice: /Stage[setup]/Tripleo::Network::Os_net_config/Exec[trigger-keepalived-restart]: Triggered 'refresh' from 1 events
>2018-06-26 09:23:28,907 INFO: Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'
>2018-06-26 09:23:29,104 INFO: Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}559c25e8bcc4e66a8a99d18bb1059473'
>2018-06-26 09:23:29,434 INFO: Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'
>2018-06-26 09:23:49,605 INFO: Notice: /Stage[main]/Rabbitmq::Install/Package[rabbitmq-server]/ensure: created
>2018-06-26 09:23:49,608 INFO: Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/owner: owner changed 'rabbitmq' to 'root'
>2018-06-26 09:23:49,609 INFO: Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/group: group changed 'rabbitmq' to 'root'
>2018-06-26 09:23:49,615 INFO: Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]/ensure: created
>2018-06-26 09:23:49,621 INFO: Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/content: content changed '{md5}b346ec0a8320f85f795bf612f6b02da7' to '{md5}547b536f67828af26c2c923bf72a882b'
>2018-06-26 09:23:49,621 INFO: Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/owner: owner changed 'rabbitmq' to 'root'
>2018-06-26 09:23:49,622 INFO: Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/mode: mode changed '0644' to '0640'
>2018-06-26 09:23:49,627 INFO: Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]/ensure: defined content as '{md5}4f6f330c6a9816346c929ef9372e26e0'
>2018-06-26 09:23:49,630 INFO: Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]/ensure: defined content as '{md5}12f8d1a1f9f57f23c1be6c7bf2286e73'
>2018-06-26 09:23:49,633 INFO: Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]/ensure: defined content as '{md5}44d4ef5cb86ab30e6127e83939ef09c4'
>2018-06-26 09:23:49,634 INFO: Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]/ensure: created
>2018-06-26 09:23:49,638 INFO: Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]/ensure: defined content as '{md5}8eb9ff6c576b9869944215af3a568c2e'
>2018-06-26 09:23:49,706 INFO: Notice: /Stage[main]/Rabbitmq::Config/Exec[rabbitmq-systemd-reload]: Triggered 'refresh' from 1 events
>2018-06-26 09:23:49,710 INFO: Notice: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/ensure: defined content as '{md5}5ddc6ba5fcaeddd5b1565e5adfda5236'
>2018-06-26 09:23:51,940 INFO: Notice: /Stage[main]/Rabbitmq/Rabbitmq_plugin[rabbitmq_management]/ensure: created
>2018-06-26 09:23:54,887 INFO: Notice: /Stage[main]/Rabbitmq::Service/Service[rabbitmq-server]/ensure: ensure changed 'stopped' to 'running'
>2018-06-26 09:23:54,918 INFO: Notice: /Stage[main]/Rabbitmq::Install::Rabbitmqadmin/Archive[rabbitmqadmin]/ensure: download archive from http://192.0.3.1:15672/cli/rabbitmqadmin to /var/lib/rabbitmq/rabbitmqadmin without cleanup
>2018-06-26 09:23:55,116 INFO: Notice: /Stage[main]/Rabbitmq::Install::Rabbitmqadmin/File[/usr/local/bin/rabbitmqadmin]/ensure: defined content as '{md5}76394723569012aa8a197a08f1b53926'
>2018-06-26 09:23:55,723 INFO: Notice: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/ensure: ensure changed 'running' to 'stopped'
>2018-06-26 09:24:07,266 INFO: Notice: /Stage[main]/Firewall::Linux::Redhat/Package[iptables-services]/ensure: created
>2018-06-26 09:24:07,346 INFO: Notice: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]: Triggered 'refresh' from 1 events
>2018-06-26 09:24:07,765 INFO: Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'
>2018-06-26 09:24:08,165 INFO: Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'
>2018-06-26 09:24:08,176 INFO: Notice: /Stage[main]/Tripleo::Selinux/File[/etc/selinux/config]/content: content changed '{md5}c27a0ccfc58067ab1173b59308cf9ba5' to '{md5}1b476ce188acf89a99c4da6ae6f0e57f'
>2018-06-26 09:24:08,177 INFO: Notice: /Stage[main]/Tripleo::Selinux/File[/etc/selinux/config]/mode: mode changed '0644' to '0444'
>2018-06-26 09:24:08,185 INFO: Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/ensure: defined content as '{md5}14531b22819f90843a7cb3f90fe6029d'
>2018-06-26 09:24:18,836 INFO: Notice: /Stage[main]/Mysql::Server::Installdb/Mysql_datadir[/var/lib/mysql]/ensure: created
>2018-06-26 09:24:18,847 INFO: Notice: /Stage[main]/Main/File[/etc/systemd/system/mariadb.service.d]/ensure: created
>2018-06-26 09:24:18,852 INFO: Notice: /Stage[main]/Main/File[/etc/systemd/system/mariadb.service.d/limits.conf]/ensure: defined content as '{md5}8eb9ff6c576b9869944215af3a568c2e'
>2018-06-26 09:24:18,928 INFO: Notice: /Stage[main]/Main/Exec[systemctl-daemon-reload]: Triggered 'refresh' from 1 events
>2018-06-26 09:24:19,739 INFO: Notice: /Stage[main]/Mysql::Server::Service/Service[mysqld]/ensure: ensure changed 'stopped' to 'running'
>2018-06-26 09:24:19,756 INFO: Notice: /Stage[main]/Mysql::Server::Service/Exec[wait_for_mysql_socket_to_open]: Triggered 'refresh' from 1 events
>2018-06-26 09:24:19,839 INFO: Notice: /Stage[main]/Mysql::Server::Root_password/Mysql_user[root@localhost]/password_hash: defined 'password_hash' as '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19'
>2018-06-26 09:24:19,845 INFO: Notice: /Stage[main]/Mysql::Server::Root_password/File[/root/.my.cnf]/ensure: defined content as '{md5}9910b6aa16e7a0522d2505e472b8cb0c'
>2018-06-26 09:24:19,859 INFO: Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@127.0.0.1]/ensure: removed
>2018-06-26 09:24:19,869 INFO: Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@::1]/ensure: removed
>2018-06-26 09:24:19,879 INFO: Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost]/ensure: removed
>2018-06-26 09:24:19,890 INFO: Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@facebook.local.com]/ensure: removed
>2018-06-26 09:24:19,901 INFO: Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@facebook.local.com]/ensure: removed
>2018-06-26 09:24:19,961 INFO: Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_database[test]/ensure: removed
>2018-06-26 09:24:19,965 INFO: Notice: /Stage[main]/Main/File[/var/log/journal]/ensure: created
>2018-06-26 09:24:20,035 INFO: Notice: /Stage[main]/Main/Service[systemd-journald]: Triggered 'refresh' from 1 events
>2018-06-26 09:24:31,650 INFO: Notice: /Stage[main]/Swift/Package[swift]/ensure: created
>2018-06-26 09:24:51,810 INFO: Notice: /Stage[main]/Keystone/Package[keystone]/ensure: created
>2018-06-26 09:25:08,282 INFO: Notice: /Stage[main]/Glance/Package[openstack-glance]/ensure: created
>2018-06-26 09:25:08,284 INFO: Notice: /Stage[main]/Glance::Deps/Anchor[glance::install::end]: Triggered 'refresh' from 1 events
>2018-06-26 09:25:08,311 INFO: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_host]/ensure: created
>2018-06-26 09:25:08,331 INFO: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_port]/ensure: created
>2018-06-26 09:25:08,358 INFO: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/workers]/ensure: created
>2018-06-26 09:25:08,417 INFO: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_cache_dir]/ensure: created
>2018-06-26 09:25:08,457 INFO: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v1_api]/ensure: created
>2018-06-26 09:25:08,472 INFO: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v2_api]/ensure: created
>2018-06-26 09:25:08,504 INFO: Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/os_region_name]/ensure: created
>2018-06-26 09:25:08,744 INFO: Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/stores]/ensure: created
>2018-06-26 09:25:08,763 INFO: Notice: /Stage[main]/Glance::Api/Glance_cache_config[glance_store/os_region_name]/ensure: created
>2018-06-26 09:25:08,780 INFO: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/registry_host]/ensure: created
>2018-06-26 09:25:08,804 INFO: Notice: /Stage[main]/Glance::Api/Glance_cache_config[DEFAULT/registry_host]/ensure: created
>2018-06-26 09:25:08,823 INFO: Notice: /Stage[main]/Glance::Api/Glance_api_config[paste_deploy/flavor]/ensure: created
>2018-06-26 09:25:08,902 INFO: Notice: /Stage[main]/Glance::Backend::Swift/Glance_api_config[glance_store/swift_store_create_container_on_put]/ensure: created
>2018-06-26 09:25:08,934 INFO: Notice: /Stage[main]/Glance::Backend::Swift/Glance_api_config[glance_store/swift_store_endpoint_type]/ensure: created
>2018-06-26 09:25:08,953 INFO: Notice: /Stage[main]/Glance::Backend::Swift/Glance_api_config[glance_store/swift_store_config_file]/ensure: created
>2018-06-26 09:25:08,974 INFO: Notice: /Stage[main]/Glance::Backend::Swift/Glance_api_config[glance_store/default_swift_reference]/ensure: created
>2018-06-26 09:25:09,232 INFO: Notice: /Stage[main]/Glance::Backend::Swift/Glance_api_config[glance_store/default_store]/ensure: created
>2018-06-26 09:25:09,233 INFO: Notice: /Stage[main]/Glance::Backend::Swift/Glance_swift_config[ref1/user]/ensure: created
>2018-06-26 09:25:09,234 INFO: Notice: /Stage[main]/Glance::Backend::Swift/Glance_swift_config[ref1/key]/ensure: created
>2018-06-26 09:25:09,234 INFO: Notice: /Stage[main]/Glance::Backend::Swift/Glance_swift_config[ref1/auth_address]/ensure: created
>2018-06-26 09:25:09,235 INFO: Notice: /Stage[main]/Glance::Backend::Swift/Glance_swift_config[ref1/auth_version]/ensure: created
>2018-06-26 09:25:09,236 INFO: Notice: /Stage[main]/Glance::Backend::Swift/Glance_swift_config[ref1/user_domain_id]/ensure: created
>2018-06-26 09:25:09,237 INFO: Notice: /Stage[main]/Glance::Backend::Swift/Glance_swift_config[ref1/project_domain_id]/ensure: created
>2018-06-26 09:25:25,160 INFO: Notice: /Stage[main]/Nova/Package[python-nova]/ensure: created
>2018-06-26 09:25:35,591 INFO: Notice: /Stage[main]/Nova/Package[nova-common]/ensure: created
>2018-06-26 09:25:58,310 INFO: Error: Execution of '/bin/yum -d 0 -e 0 -y install openstack-neutron' returned 1: Error downloading packages:
>2018-06-26 09:25:58,310 INFO: dnsmasq-utils-2.76-2.el7_4.2.x86_64: [Errno 256] No more mirrors to try.
>2018-06-26 09:25:58,311 INFO: python-ryu-4.15-1.el7ost.noarch: [Errno 256] No more mirrors to try.
>2018-06-26 09:25:58,311 INFO: python-ryu-common-4.15-1.el7ost.noarch: [Errno 256] No more mirrors to try.
>2018-06-26 09:25:58,311 INFO: python-singledispatch-3.4.0.3-2.1.el7ost.noarch: [Errno 256] No more mirrors to try.
>2018-06-26 09:25:58,311 INFO: Error: /Stage[main]/Neutron/Package[neutron]/ensure: change from purged to present failed: Execution of '/bin/yum -d 0 -e 0 -y install openstack-neutron' returned 1: Error downloading packages:
>2018-06-26 09:25:58,311 INFO: dnsmasq-utils-2.76-2.el7_4.2.x86_64: [Errno 256] No more mirrors to try.
>2018-06-26 09:25:58,311 INFO: python-ryu-4.15-1.el7ost.noarch: [Errno 256] No more mirrors to try.
>2018-06-26 09:25:58,312 INFO: python-ryu-common-4.15-1.el7ost.noarch: [Errno 256] No more mirrors to try.
>2018-06-26 09:25:58,312 INFO: python-singledispatch-3.4.0.3-2.1.el7ost.noarch: [Errno 256] No more mirrors to try.
>2018-06-26 09:26:13,869 INFO: Notice: /Stage[main]/Neutron::Plugins::Ml2/Package[neutron-plugin-ml2]/ensure: created
>2018-06-26 09:26:23,229 INFO: Notice: /Stage[main]/Neutron::Plugins::Ml2::Networking_baremetal/Package[python2-networking-baremetal]/ensure: created
>2018-06-26 09:26:32,346 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Package[python2-ironic-neutron-agent]/ensure: created
>2018-06-26 09:26:41,755 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Package[neutron-ovs-agent]/ensure: created
>2018-06-26 09:26:41,756 INFO: Notice: /Stage[main]/Neutron::Deps/Anchor[neutron::install::end]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,756 INFO: Warning: /Stage[main]/Neutron::Deps/Anchor[neutron::install::end]: Skipping because of failed dependencies
>2018-06-26 09:26:41,756 INFO: Notice: /Stage[main]/Neutron::Deps/Anchor[neutron::config::begin]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,756 INFO: Warning: /Stage[main]/Neutron::Deps/Anchor[neutron::config::begin]: Skipping because of failed dependencies
>2018-06-26 09:26:41,760 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_host]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,760 INFO: Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_host]: Skipping because of failed dependencies
>2018-06-26 09:26:41,760 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_port]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,760 INFO: Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_port]: Skipping because of failed dependencies
>2018-06-26 09:26:41,761 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,761 INFO: Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]: Skipping because of failed dependencies
>2018-06-26 09:26:41,761 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,761 INFO: Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]: Skipping because of failed dependencies
>2018-06-26 09:26:41,761 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/base_mac]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,761 INFO: Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/base_mac]: Skipping because of failed dependencies
>2018-06-26 09:26:41,761 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_lease_duration]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,762 INFO: Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_lease_duration]: Skipping because of failed dependencies
>2018-06-26 09:26:41,762 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,762 INFO: Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]: Skipping because of failed dependencies
>2018-06-26 09:26:41,762 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,762 INFO: Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]: Skipping because of failed dependencies
>2018-06-26 09:26:41,762 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,762 INFO: Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]: Skipping because of failed dependencies
>2018-06-26 09:26:41,763 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,763 INFO: Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]: Skipping because of failed dependencies
>2018-06-26 09:26:41,763 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_bulk]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,763 INFO: Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_bulk]: Skipping because of failed dependencies
>2018-06-26 09:26:41,763 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,763 INFO: Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]: Skipping because of failed dependencies
>2018-06-26 09:26:41,763 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/api_extensions_path]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,764 INFO: Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/api_extensions_path]: Skipping because of failed dependencies
>2018-06-26 09:26:41,764 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/state_path]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,764 INFO: Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/state_path]: Skipping because of failed dependencies
>2018-06-26 09:26:41,764 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,764 INFO: Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]: Skipping because of failed dependencies
>2018-06-26 09:26:41,764 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/max_allowed_address_pair]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,765 INFO: Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/max_allowed_address_pair]: Skipping because of failed dependencies
>2018-06-26 09:26:41,765 INFO: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,765 INFO: Warning: /Stage[main]/Neutron/Neutron_config[agent/root_helper]: Skipping because of failed dependencies
>2018-06-26 09:26:41,765 INFO: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper_daemon]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,765 INFO: Warning: /Stage[main]/Neutron/Neutron_config[agent/root_helper_daemon]: Skipping because of failed dependencies
>2018-06-26 09:26:41,765 INFO: Notice: /Stage[main]/Neutron/Neutron_config[agent/report_interval]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,765 INFO: Warning: /Stage[main]/Neutron/Neutron_config[agent/report_interval]: Skipping because of failed dependencies
>2018-06-26 09:26:41,765 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,766 INFO: Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]: Skipping because of
failed dependencies
>2018-06-26 09:26:41,766 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/use_ssl]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,766 INFO: Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/use_ssl]: Skipping because of failed dependencies
>2018-06-26 09:26:41,766 INFO: Notice: /Stage[main]/Neutron/Neutron_config[ssl/cert_file]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,766 INFO: Warning: /Stage[main]/Neutron/Neutron_config[ssl/cert_file]: Skipping because of failed dependencies
>2018-06-26 09:26:41,766 INFO: Notice: /Stage[main]/Neutron/Neutron_config[ssl/key_file]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,766 INFO: Warning: /Stage[main]/Neutron/Neutron_config[ssl/key_file]: Skipping because of failed dependencies
>2018-06-26 09:26:41,767 INFO: Notice: /Stage[main]/Neutron/Neutron_config[ssl/ca_file]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,767 INFO: Warning: /Stage[main]/Neutron/Neutron_config[ssl/ca_file]: Skipping because of failed dependencies
>2018-06-26 09:26:41,767 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/l3_ha]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,767 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/l3_ha]: Skipping because of failed dependencies
>2018-06-26 09:26:41,767 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/max_l3_agents_per_router]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,767 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/max_l3_agents_per_router]: Skipping because of failed dependencies
>2018-06-26 09:26:41,767 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/l3_ha_net_cidr]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,768 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/l3_ha_net_cidr]: Skipping because of failed dependencies
>2018-06-26 09:26:41,768 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/api_workers]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,768 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/api_workers]: Skipping because of failed dependencies
>2018-06-26 09:26:41,768 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/rpc_workers]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,768 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/rpc_workers]: Skipping because of failed dependencies
>2018-06-26 09:26:41,768 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/agent_down_time]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,769 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/agent_down_time]: Skipping because of failed dependencies
>2018-06-26 09:26:41,769 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/enable_new_agents]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,769 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/enable_new_agents]: Skipping because of failed dependencies
>2018-06-26 09:26:41,769 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_scheduler_driver]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,769 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_scheduler_driver]: Skipping because of failed dependencies
>2018-06-26 09:26:41,769 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_distributed]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,769 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_distributed]: Skipping because of failed dependencies
>2018-06-26 09:26:41,770 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/enable_dvr]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,770 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/enable_dvr]: Skipping because of failed dependencies
>2018-06-26 09:26:41,770 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/allow_automatic_l3agent_failover]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,770 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/allow_automatic_l3agent_failover]: Skipping because of failed dependencies
>2018-06-26 09:26:41,770 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/allow_automatic_dhcp_failover]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,770 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/allow_automatic_dhcp_failover]: Skipping because of failed dependencies
>2018-06-26 09:26:41,770 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/network_scheduler_driver]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,771 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/network_scheduler_driver]: Skipping because of failed dependencies
>2018-06-26 09:26:41,771 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/dhcp_load_type]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,771 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/dhcp_load_type]: Skipping because of failed dependencies
>2018-06-26 09:26:41,771 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/default_availability_zones]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,771 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/default_availability_zones]: Skipping because of failed dependencies
>2018-06-26 09:26:41,771 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/network_auto_schedule]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,772 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/network_auto_schedule]: Skipping because of failed dependencies
>2018-06-26 09:26:41,772 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/ovs_integration_bridge]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,772 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/ovs_integration_bridge]: Skipping because of failed dependencies
>2018-06-26 09:26:41,772 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[service_providers/service_provider]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,772 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_config[service_providers/service_provider]: Skipping because of failed dependencies
>2018-06-26 09:26:41,772 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[qos/notification_drivers]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,772 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_config[qos/notification_drivers]: Skipping because of failed dependencies
>2018-06-26 09:26:41,773 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_api_config[filter:authtoken/admin_tenant_name]: Dependency Package[neutron] has failures: true
>2018-06-26 09:26:41,773 INFO: Warning: /Stage[main]/Neutron::Server/Neutron_api_config[filter:authtoken/admin_tenant_name]: Skipping because of failed dependencies
>2018-06-26 09:26:41,773 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_api_config[filter:authtoken/admin_user]: Dependency Package[neutron] has failures: true
>2018-06-
09:26:41,773 INFO: [1;33mWarning: /Stage[main]/Neutron::Server/Neutron_api_config[filter:authtoken/admin_user]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,773 INFO: [mNotice: /Stage[main]/Neutron::Server/Neutron_api_config[filter:authtoken/admin_password]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,773 INFO: [1;33mWarning: /Stage[main]/Neutron::Server/Neutron_api_config[filter:authtoken/admin_password]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,773 INFO: [mNotice: /Stage[main]/Neutron::Server/Neutron_api_config[filter:authtoken/identity_uri]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,774 INFO: [1;33mWarning: /Stage[main]/Neutron::Server/Neutron_api_config[filter:authtoken/identity_uri]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,774 INFO: [mNotice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_url]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,774 INFO: [1;33mWarning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_url]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,774 INFO: [mNotice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/username]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,774 INFO: [1;33mWarning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/username]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,774 INFO: [mNotice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/password]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,775 INFO: [1;33mWarning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/password]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,775 INFO: [mNotice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_id]: Dependency Package[neutron] has failures: 
true[0m >2018-06-26 09:26:41,775 INFO: [1;33mWarning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_id]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,775 INFO: [mNotice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_name]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,775 INFO: [1;33mWarning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_name]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,775 INFO: [mNotice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_name]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,775 INFO: [1;33mWarning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_name]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,776 INFO: [mNotice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_id]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,776 INFO: [1;33mWarning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_id]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,776 INFO: [mNotice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_name]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,776 INFO: [1;33mWarning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_name]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,776 INFO: [mNotice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/region_name]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,776 INFO: [1;33mWarning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/region_name]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,776 INFO: [mNotice: 
/Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/endpoint_type]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,777 INFO: [1;33mWarning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/endpoint_type]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,777 INFO: [mNotice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_type]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,777 INFO: [1;33mWarning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_type]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,777 INFO: [mNotice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/tenant_name]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,777 INFO: [1;33mWarning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/tenant_name]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,777 INFO: [mNotice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_status_changes]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,777 INFO: [1;33mWarning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_status_changes]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,778 INFO: [mNotice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_data_changes]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,778 INFO: [1;33mWarning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_data_changes]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,778 INFO: [mNotice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/send_events_interval]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,778 INFO: [1;33mWarning: 
/Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/send_events_interval]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,778 INFO: [mNotice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/default_quota]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,778 INFO: [1;33mWarning: /Stage[main]/Neutron::Quota/Neutron_config[quotas/default_quota]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,779 INFO: [mNotice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_network]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,779 INFO: [1;33mWarning: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_network]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,779 INFO: [mNotice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_subnet]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,779 INFO: [1;33mWarning: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_subnet]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,779 INFO: [mNotice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_port]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,779 INFO: [1;33mWarning: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_port]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,779 INFO: [mNotice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_router]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,779 INFO: [1;33mWarning: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_router]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,780 INFO: [mNotice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_floatingip]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,780 INFO: [1;33mWarning: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_floatingip]: Skipping because of failed 
dependencies[0m >2018-06-26 09:26:41,780 INFO: [mNotice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_security_group]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,780 INFO: [1;33mWarning: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_security_group]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,780 INFO: [mNotice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_security_group_rule]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,780 INFO: [1;33mWarning: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_security_group_rule]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,780 INFO: [mNotice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_driver]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,781 INFO: [1;33mWarning: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_driver]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,781 INFO: [mNotice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_firewall]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,781 INFO: [1;33mWarning: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_firewall]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,781 INFO: [mNotice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_firewall_policy]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,781 INFO: [1;33mWarning: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_firewall_policy]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,781 INFO: [mNotice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_firewall_rule]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,781 INFO: [1;33mWarning: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_firewall_rule]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,782 INFO: [mNotice: 
/Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_healthmonitor]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,782 INFO: [1;33mWarning: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_healthmonitor]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,782 INFO: [mNotice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_member]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,782 INFO: [1;33mWarning: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_member]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,782 INFO: [mNotice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_network_gateway]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,782 INFO: [1;33mWarning: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_network_gateway]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,782 INFO: [mNotice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_packet_filter]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,783 INFO: [1;33mWarning: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_packet_filter]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,783 INFO: [mNotice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_loadbalancer]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,783 INFO: [1;33mWarning: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_loadbalancer]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,783 INFO: [mNotice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_pool]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,783 INFO: [1;33mWarning: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_pool]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,783 INFO: [mNotice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_vip]: Dependency 
Package[neutron] has failures: true[0m >2018-06-26 09:26:41,783 INFO: [1;33mWarning: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_vip]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,784 INFO: [mNotice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,784 INFO: [1;33mWarning: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,784 INFO: [mNotice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,784 INFO: [1;33mWarning: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,784 INFO: [mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,784 INFO: [1;33mWarning: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,784 INFO: [mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,785 INFO: [1;33mWarning: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,785 INFO: [mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,785 INFO: [1;33mWarning: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,785 INFO: [mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]: Dependency Package[neutron] has failures: true[0m >2018-06-26 
09:26:41,785 INFO: [1;33mWarning: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,785 INFO: [mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,785 INFO: [1;33mWarning: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,786 INFO: [mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,786 INFO: [1;33mWarning: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,786 INFO: [mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/enable_security_group]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,786 INFO: [1;33mWarning: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/enable_security_group]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,786 INFO: [mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,786 INFO: [1;33mWarning: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,787 INFO: [mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/physical_network_mtus]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,787 INFO: [1;33mWarning: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/physical_network_mtus]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,787 INFO: [mNotice: 
/Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/auth_strategy]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,787 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/auth_strategy]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,787 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/ironic_url]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,787 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/ironic_url]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,787 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/cafile]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,788 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/cafile]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,788 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/certfile]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,788 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/certfile]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,788 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/keyfile]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,788 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/keyfile]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,788 INFO: [mNotice: 
/Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/insecure]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,789 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/insecure]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,789 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/auth_type]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,789 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/auth_type]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,789 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/auth_url]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,789 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/auth_url]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,789 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/username]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,790 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/username]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,790 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/password]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,790 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/password]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,790 INFO: [mNotice: 
/Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/project_domain_id]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,790 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/project_domain_id]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,790 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/project_domain_name]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,790 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/project_domain_name]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,791 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/project_name]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,791 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/project_name]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,791 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/user_domain_id]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,791 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/user_domain_id]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,791 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/user_domain_name]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,791 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/user_domain_name]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,792 INFO: [mNotice: 
/Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/region_name]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,792 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/region_name]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,792 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/retry_interval]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,792 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/retry_interval]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,792 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/max_retries]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,792 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/max_retries]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,793 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_isolated_metadata]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,793 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_isolated_metadata]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,793 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/force_metadata]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,793 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/force_metadata]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,793 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_metadata_network]: 
Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,793 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_metadata_network]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,793 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/debug]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,794 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/debug]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,794 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/state_path]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,794 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/state_path]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,794 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/resync_interval]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,794 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/resync_interval]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,794 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/interface_driver]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,795 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/interface_driver]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,795 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dhcp_domain]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,795 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dhcp_domain]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,795 INFO: 
[mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dhcp_driver]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,795 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dhcp_driver]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,795 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/root_helper]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,795 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/root_helper]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,796 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dhcp_broadcast_reply]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,796 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dhcp_broadcast_reply]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,796 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_config_file]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,796 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_config_file]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,796 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_dns_servers]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,796 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_dns_servers]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,796 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_local_resolv]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,797 INFO: [1;33mWarning: 
/Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_local_resolv]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,797 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/ovs_integration_bridge]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,797 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/ovs_integration_bridge]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,797 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[AGENT/availability_zone]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,797 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[AGENT/availability_zone]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,797 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[OVS/ovsdb_connection]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,798 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[OVS/ovsdb_connection]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,798 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[OVS/ssl_key_file]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,798 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[OVS/ssl_key_file]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,798 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[OVS/ssl_cert_file]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,798 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[OVS/ssl_cert_file]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,798 INFO: [mNotice: 
/Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[OVS/ssl_ca_cert_file]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,798 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[OVS/ssl_ca_cert_file]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,799 INFO: [mNotice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/debug]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,799 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/debug]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,799 INFO: [mNotice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/interface_driver]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,799 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/interface_driver]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,799 INFO: [mNotice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/gateway_external_network_id]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,799 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/gateway_external_network_id]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,799 INFO: [mNotice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/handle_internal_only_routers]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,800 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/handle_internal_only_routers]: Skipping because of failed dependencies[0m >2018-06-26 09:26:41,800 INFO: [mNotice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/metadata_port]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:26:41,800 INFO: [1;33mWarning: 
/Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/metadata_port]: Skipping because of failed dependencies
2018-06-26 09:26:41,800 INFO: Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/send_arp_for_ha]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,800 INFO: Warning: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/send_arp_for_ha]: Skipping because of failed dependencies
2018-06-26 09:26:41,800 INFO: Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/periodic_interval]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,801 INFO: Warning: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/periodic_interval]: Skipping because of failed dependencies
2018-06-26 09:26:41,801 INFO: Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/periodic_fuzzy_delay]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,801 INFO: Warning: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/periodic_fuzzy_delay]: Skipping because of failed dependencies
2018-06-26 09:26:41,801 INFO: Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/enable_metadata_proxy]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,801 INFO: Warning: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/enable_metadata_proxy]: Skipping because of failed dependencies
2018-06-26 09:26:41,801 INFO: Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/agent_mode]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,801 INFO: Warning: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/agent_mode]: Skipping because of failed dependencies
2018-06-26 09:26:41,802 INFO: Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[AGENT/availability_zone]:
Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,802 INFO: Warning: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[AGENT/availability_zone]: Skipping because of failed dependencies
2018-06-26 09:26:41,802 INFO: Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[AGENT/extensions]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,802 INFO: Warning: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[AGENT/extensions]: Skipping because of failed dependencies
2018-06-26 09:26:41,802 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,802 INFO: Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]: Skipping because of failed dependencies
2018-06-26 09:26:41,803 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/polling_interval]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,803 INFO: Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/polling_interval]: Skipping because of failed dependencies
2018-06-26 09:26:41,803 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,803 INFO: Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]: Skipping because of failed dependencies
2018-06-26 09:26:41,803 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,803 INFO: Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]: Skipping because of failed dependencies
2018-06-26 09:26:41,804 INFO: Notice:
/Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,804 INFO: Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]: Skipping because of failed dependencies
2018-06-26 09:26:41,804 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,804 INFO: Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]: Skipping because of failed dependencies
2018-06-26 09:26:41,804 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,804 INFO: Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]: Skipping because of failed dependencies
2018-06-26 09:26:41,805 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/minimize_polling]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,805 INFO: Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/minimize_polling]: Skipping because of failed dependencies
2018-06-26 09:26:41,805 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,805 INFO: Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]: Skipping because of failed dependencies
2018-06-26 09:26:41,805 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/datapath_type]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,805 INFO: Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/datapath_type]: Skipping
because of failed dependencies
2018-06-26 09:26:41,806 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/vhostuser_socket_dir]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,806 INFO: Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/vhostuser_socket_dir]: Skipping because of failed dependencies
2018-06-26 09:26:41,821 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/ovsdb_interface]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,821 INFO: Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/ovsdb_interface]: Skipping because of failed dependencies
2018-06-26 09:26:41,821 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/of_interface]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,821 INFO: Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/of_interface]: Skipping because of failed dependencies
2018-06-26 09:26:41,821 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/enable_security_group]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,822 INFO: Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/enable_security_group]: Skipping because of failed dependencies
2018-06-26 09:26:41,822 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,822 INFO: Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]: Skipping because of failed dependencies
2018-06-26 09:26:41,822 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,822 INFO: Warning:
/Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]: Skipping because of failed dependencies
2018-06-26 09:26:41,822 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,823 INFO: Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]: Skipping because of failed dependencies
2018-06-26 09:26:41,823 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/int_peer_patch_port]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,823 INFO: Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/int_peer_patch_port]: Skipping because of failed dependencies
2018-06-26 09:26:41,823 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tun_peer_patch_port]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,823 INFO: Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tun_peer_patch_port]: Skipping because of failed dependencies
2018-06-26 09:26:41,823 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[ovs-cleanup-service]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,823 INFO: Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[ovs-cleanup-service]: Skipping because of failed dependencies
2018-06-26 09:26:41,824 INFO: Notice: /Stage[main]/Main/Neutron_config[DEFAULT/notification_driver]: Dependency Package[neutron] has failures: true
2018-06-26 09:26:41,824 INFO: Warning: /Stage[main]/Main/Neutron_config[DEFAULT/notification_driver]: Skipping because of failed dependencies
2018-06-26 09:26:53,196 INFO: Notice: /Stage[main]/Memcached/Package[memcached]/ensure: created
2018-06-26 09:26:53,204 INFO: Notice: /Stage[main]/Memcached/File[/etc/sysconfig/memcached]/content: content changed
'{md5}a50ed62e82d31fb4cb2de2226650c545' to '{md5}5c564a6f7d5dc1b600b435e716c794fc'
2018-06-26 09:26:53,697 INFO: Notice: /Stage[main]/Memcached/Service[memcached]/ensure: ensure changed 'stopped' to 'running'
2018-06-26 09:27:03,143 INFO: Notice: /Stage[main]/Swift::Proxy/Package[swift-proxy]/ensure: created
2018-06-26 09:27:03,152 INFO: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[pipeline:main/pipeline]/value: value changed 'catch_errors cache proxy-server' to 'catch_errors proxy-server'
2018-06-26 09:27:03,153 INFO: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/auto_create_account_prefix]/ensure: created
2018-06-26 09:27:03,155 INFO: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/concurrency]/ensure: created
2018-06-26 09:27:03,156 INFO: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/expiring_objects_account_name]/ensure: created
2018-06-26 09:27:03,157 INFO: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/interval]/ensure: created
2018-06-26 09:27:03,159 INFO: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/process]/ensure: created
2018-06-26 09:27:03,160 INFO: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/processes]/ensure: created
2018-06-26 09:27:03,162 INFO: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/reclaim_age]/ensure: created
2018-06-26 09:27:03,163 INFO: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/recon_cache_path]/ensure: created
2018-06-26 09:27:03,165 INFO: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/report_interval]/ensure: created
2018-06-26 09:27:11,394 INFO: Notice:
/Stage[main]/Xinetd/Package[xinetd]/ensure: created
2018-06-26 09:27:11,405 INFO: Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/content: content changed '{md5}9ff8cc688dd9f0dfc45e5afd25c427a7' to '{md5}7d37008224e71625019cb48768f267e7'
2018-06-26 09:27:11,405 INFO: Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/mode: mode changed '0600' to '0644'
2018-06-26 09:27:25,942 INFO: Notice: /Stage[main]/Heat/Package[heat-common]/ensure: created
2018-06-26 09:27:34,528 INFO: Notice: /Stage[main]/Heat::Api/Package[heat-api]/ensure: created
2018-06-26 09:27:43,623 INFO: Notice: /Stage[main]/Heat::Api_cfn/Package[heat-api-cfn]/ensure: created
2018-06-26 09:27:53,174 INFO: Notice: /Stage[main]/Heat::Engine/Package[heat-engine]/ensure: created
2018-06-26 09:27:53,176 INFO: Notice: /Stage[main]/Heat::Deps/Anchor[heat::install::end]: Triggered 'refresh' from 4 events
2018-06-26 09:27:53,190 INFO: Notice: /Stage[main]/Heat/Heat_config[trustee/auth_type]/ensure: created
2018-06-26 09:27:53,201 INFO: Notice: /Stage[main]/Heat/Heat_config[trustee/auth_url]/ensure: created
2018-06-26 09:27:53,212 INFO: Notice: /Stage[main]/Heat/Heat_config[trustee/username]/ensure: created
2018-06-26 09:27:53,222 INFO: Notice: /Stage[main]/Heat/Heat_config[trustee/password]/ensure: created
2018-06-26 09:27:53,232 INFO: Notice: /Stage[main]/Heat/Heat_config[trustee/project_domain_name]/ensure: created
2018-06-26 09:27:53,243 INFO: Notice: /Stage[main]/Heat/Heat_config[trustee/user_domain_name]/ensure: created
2018-06-26 09:27:53,253 INFO: Notice: /Stage[main]/Heat/Heat_config[clients_keystone/auth_uri]/ensure: created
2018-06-26 09:27:53,268 INFO: Notice: /Stage[main]/Heat/Heat_config[clients/endpoint_type]/ensure: created
2018-06-26 09:27:53,290 INFO: Notice: /Stage[main]/Heat/Heat_config[DEFAULT/max_json_body_size]/ensure: created
2018-06-26 09:27:53,316 INFO: Notice:
/Stage[main]/Heat/Heat_config[ec2authtoken/auth_uri]/ensure: created
2018-06-26 09:27:53,331 INFO: Notice: /Stage[main]/Heat/Heat_config[yaql/limit_iterators]/ensure: created
2018-06-26 09:27:53,341 INFO: Notice: /Stage[main]/Heat/Heat_config[yaql/memory_quota]/ensure: created
2018-06-26 09:27:53,351 INFO: Notice: /Stage[main]/Heat::Api/Heat_config[heat_api/bind_host]/ensure: created
2018-06-26 09:27:53,593 INFO: Notice: /Stage[main]/Heat::Api/Heat_config[heat_api/workers]/ensure: created
2018-06-26 09:27:53,603 INFO: Notice: /Stage[main]/Heat::Api_cfn/Heat_config[heat_api_cfn/bind_host]/ensure: created
2018-06-26 09:27:53,617 INFO: Notice: /Stage[main]/Heat::Api_cfn/Heat_config[heat_api_cfn/workers]/ensure: created
2018-06-26 09:27:53,627 INFO: Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/auth_encryption_key]/ensure: created
2018-06-26 09:27:53,637 INFO: Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_stack_user_role]/ensure: created
2018-06-26 09:27:53,646 INFO: Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_metadata_server_url]/ensure: created
2018-06-26 09:27:53,656 INFO: Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_waitcondition_server_url]/ensure: created
2018-06-26 09:27:53,665 INFO: Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_watch_server_url]/ensure: created
2018-06-26 09:27:53,709 INFO: Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_resources_per_stack]/ensure: created
2018-06-26 09:27:53,731 INFO: Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/num_engine_workers]/ensure: created
2018-06-26 09:27:53,742 INFO: Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/convergence_engine]/ensure: created
2018-06-26 09:27:53,752 INFO: Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/reauthentication_auth_method]/ensure: created
2018-06-26 09:27:53,774 INFO: Notice:
/Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_nested_stack_depth]/ensure: created
2018-06-26 09:27:53,791 INFO: Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin]/ensure: created
2018-06-26 09:27:53,802 INFO: Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin_password]/ensure: created
2018-06-26 09:27:53,814 INFO: Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_user_domain_name]/ensure: created
2018-06-26 09:27:53,870 INFO: Notice: /Stage[main]/Heat::Cron::Purge_deleted/Cron[heat-manage purge_deleted]/ensure: created
2018-06-26 09:28:15,279 INFO: Notice: /Stage[main]/Ironic/Package[ironic-common]/ensure: created
2018-06-26 09:28:15,285 INFO: Notice: /Stage[main]/Main/File[dnsmasq-ironic.conf]/ensure: defined content as '{md5}1c23f6b2b9a0910c3e32f02970493f00'
2018-06-26 09:28:24,544 INFO: Notice: /Stage[main]/Ironic::Api/Package[ironic-api]/ensure: created
2018-06-26 09:28:33,390 INFO: Notice: /Stage[main]/Ironic::Conductor/Package[ironic-conductor]/ensure: created
2018-06-26 09:28:43,083 INFO: Notice: /Stage[main]/Ironic::Drivers::Staging/Package[ironic-staging-drivers]/ensure: created
2018-06-26 09:28:56,358 INFO: Notice: /Stage[main]/Ironic::Inspector/Package[ironic-inspector]/ensure: created
2018-06-26 09:29:06,254 INFO: Notice: /Stage[main]/Ironic::Pxe/Package[tftp-server]/ensure: created
2018-06-26 09:29:15,815 INFO: Notice: /Stage[main]/Ironic::Pxe/Package[ipxe]/ensure: created
2018-06-26 09:29:15,816 INFO: Notice: /Stage[main]/Ironic::Deps/Anchor[ironic::install::end]: Triggered 'refresh' from 3 events
2018-06-26 09:29:15,822 INFO: Notice: /Stage[main]/Ironic::Deps/Anchor[ironic-inspector::install::end]: Triggered 'refresh' from 1 events
2018-06-26 09:29:15,843 INFO: Notice: /Stage[main]/Ironic::Glance/Ironic_config[glance/auth_type]/ensure: created
2018-06-26 09:29:15,860
INFO: Notice: /Stage[main]/Ironic::Glance/Ironic_config[glance/username]/ensure: created
2018-06-26 09:29:15,875 INFO: Notice: /Stage[main]/Ironic::Glance/Ironic_config[glance/password]/ensure: created
2018-06-26 09:29:15,890 INFO: Notice: /Stage[main]/Ironic::Glance/Ironic_config[glance/auth_url]/ensure: created
2018-06-26 09:29:15,906 INFO: Notice: /Stage[main]/Ironic::Glance/Ironic_config[glance/project_name]/ensure: created
2018-06-26 09:29:15,921 INFO: Notice: /Stage[main]/Ironic::Glance/Ironic_config[glance/user_domain_name]/ensure: created
2018-06-26 09:29:15,935 INFO: Notice: /Stage[main]/Ironic::Glance/Ironic_config[glance/project_domain_name]/ensure: created
2018-06-26 09:29:16,019 INFO: Notice: /Stage[main]/Ironic::Neutron/Ironic_config[neutron/auth_type]/ensure: created
2018-06-26 09:29:16,033 INFO: Notice: /Stage[main]/Ironic::Neutron/Ironic_config[neutron/username]/ensure: created
2018-06-26 09:29:16,048 INFO: Notice: /Stage[main]/Ironic::Neutron/Ironic_config[neutron/password]/ensure: created
2018-06-26 09:29:16,062 INFO: Notice: /Stage[main]/Ironic::Neutron/Ironic_config[neutron/auth_url]/ensure: created
2018-06-26 09:29:16,274 INFO: Notice: /Stage[main]/Ironic::Neutron/Ironic_config[neutron/project_name]/ensure: created
2018-06-26 09:29:16,289 INFO: Notice: /Stage[main]/Ironic::Neutron/Ironic_config[neutron/user_domain_name]/ensure: created
2018-06-26 09:29:16,303 INFO: Notice: /Stage[main]/Ironic::Neutron/Ironic_config[neutron/project_domain_name]/ensure: created
2018-06-26 09:29:16,318 INFO: Notice: /Stage[main]/Ironic/Ironic_config[DEFAULT/auth_strategy]/ensure: created
2018-06-26 09:29:16,332 INFO: Notice: /Stage[main]/Ironic/Ironic_config[DEFAULT/my_ip]/ensure: created
2018-06-26 09:29:16,346 INFO: Notice: /Stage[main]/Ironic/Ironic_config[DEFAULT/default_resource_class]/ensure: created
2018-06-26 09:29:16,349 INFO: Notice:
/Stage[main]/Ironic::Db::Sync/File[/var/log/ironic/ironic-dbsync.log]/ensure: created
2018-06-26 09:29:16,363 INFO: Notice: /Stage[main]/Ironic::Api/Ironic_config[api/host_ip]/ensure: created
2018-06-26 09:29:16,377 INFO: Notice: /Stage[main]/Ironic::Api/Ironic_config[api/port]/ensure: created
2018-06-26 09:29:16,391 INFO: Notice: /Stage[main]/Ironic::Api/Ironic_config[api/max_limit]/ensure: created
2018-06-26 09:29:16,405 INFO: Notice: /Stage[main]/Ironic::Api/Ironic_config[api/api_workers]/ensure: created
2018-06-26 09:29:16,448 INFO: Notice: /Stage[main]/Ironic::Drivers::Agent/Ironic_config[agent/deploy_logs_collect]/ensure: created
2018-06-26 09:29:16,463 INFO: Notice: /Stage[main]/Ironic::Drivers::Agent/Ironic_config[agent/deploy_logs_storage_backend]/ensure: created
2018-06-26 09:29:16,477 INFO: Notice: /Stage[main]/Ironic::Drivers::Agent/Ironic_config[agent/deploy_logs_local_path]/ensure: created
2018-06-26 09:29:16,507 INFO: Notice: /Stage[main]/Ironic::Conductor/Ironic_config[DEFAULT/enabled_drivers]/ensure: created
2018-06-26 09:29:16,521 INFO: Notice: /Stage[main]/Ironic::Conductor/Ironic_config[DEFAULT/enabled_hardware_types]/ensure: created
2018-06-26 09:29:16,535 INFO: Notice: /Stage[main]/Ironic::Conductor/Ironic_config[conductor/max_time_interval]/ensure: created
2018-06-26 09:29:16,550 INFO: Notice: /Stage[main]/Ironic::Conductor/Ironic_config[conductor/force_power_state_during_sync]/ensure: created
2018-06-26 09:29:16,744 INFO: Notice: /Stage[main]/Ironic::Conductor/Ironic_config[conductor/automated_clean]/ensure: created
2018-06-26 09:29:16,759 INFO: Notice: /Stage[main]/Ironic::Conductor/Ironic_config[conductor/api_url]/ensure: created
2018-06-26 09:29:16,773 INFO: Notice: /Stage[main]/Ironic::Conductor/Ironic_config[deploy/http_url]/ensure: created
2018-06-26 09:29:16,794 INFO: Notice:
/Stage[main]/Ironic::Conductor/Ironic_config[deploy/erase_devices_priority]/ensure: created
2018-06-26 09:29:16,808 INFO: Notice: /Stage[main]/Ironic::Conductor/Ironic_config[deploy/erase_devices_metadata_priority]/ensure: created
2018-06-26 09:29:16,852 INFO: Notice: /Stage[main]/Ironic::Conductor/Ironic_config[deploy/default_boot_option]/ensure: created
2018-06-26 09:29:16,881 INFO: Notice: /Stage[main]/Ironic::Conductor/Ironic_config[neutron/cleaning_network]/ensure: created
2018-06-26 09:29:16,895 INFO: Notice: /Stage[main]/Ironic::Conductor/Ironic_config[neutron/provisioning_network]/ensure: created
2018-06-26 09:29:17,264 INFO: Notice: /Stage[main]/Ironic::Drivers::Ilo/Ironic_config[ilo/default_boot_mode]/ensure: created
2018-06-26 09:29:17,279 INFO: Notice: /Stage[main]/Ironic::Drivers::Inspector/Ironic_config[inspector/enabled]/ensure: created
2018-06-26 09:29:17,300 INFO: Notice: /Stage[main]/Ironic::Drivers::Inspector/Ironic_config[inspector/auth_type]/ensure: created
2018-06-26 09:29:17,314 INFO: Notice: /Stage[main]/Ironic::Drivers::Inspector/Ironic_config[inspector/username]/ensure: created
2018-06-26 09:29:17,329 INFO: Notice: /Stage[main]/Ironic::Drivers::Inspector/Ironic_config[inspector/password]/ensure: created
2018-06-26 09:29:17,342 INFO: Notice: /Stage[main]/Ironic::Drivers::Inspector/Ironic_config[inspector/auth_url]/ensure: created
2018-06-26 09:29:17,357 INFO: Notice: /Stage[main]/Ironic::Drivers::Inspector/Ironic_config[inspector/project_name]/ensure: created
2018-06-26 09:29:17,371 INFO: Notice: /Stage[main]/Ironic::Drivers::Inspector/Ironic_config[inspector/user_domain_name]/ensure: created
2018-06-26 09:29:17,385 INFO: Notice: /Stage[main]/Ironic::Drivers::Inspector/Ironic_config[inspector/project_domain_name]/ensure: created
2018-06-26 09:29:17,407 INFO: Notice: /Stage[main]/Ironic::Drivers::Pxe/Ironic_config[pxe/ipxe_enabled]/ensure: created
2018-06-26 09:29:17,429 INFO: Notice: /Stage[main]/Ironic::Drivers::Pxe/Ironic_config[pxe/pxe_bootfile_name]/ensure: created
2018-06-26 09:29:17,443 INFO: Notice: /Stage[main]/Ironic::Drivers::Pxe/Ironic_config[pxe/pxe_config_template]/ensure: created
2018-06-26 09:29:17,465 INFO: Notice: /Stage[main]/Ironic::Drivers::Pxe/Ironic_config[pxe/tftp_root]/ensure: created
2018-06-26 09:29:17,487 INFO: Notice: /Stage[main]/Ironic::Drivers::Pxe/Ironic_config[pxe/tftp_master_path]/ensure: created
2018-06-26 09:29:17,510 INFO: Notice: /Stage[main]/Ironic::Drivers::Pxe/Ironic_config[pxe/uefi_pxe_bootfile_name]/ensure: created
2018-06-26 09:29:17,677 INFO: Notice: /Stage[main]/Ironic::Drivers::Pxe/Ironic_config[pxe/uefi_pxe_config_template]/ensure: created
2018-06-26 09:29:17,693 INFO: Notice: /Stage[main]/Ironic::Drivers::Pxe/Ironic_config[pxe/ipxe_timeout]/ensure: created
2018-06-26 09:29:17,710 INFO: Notice: /Stage[main]/Ironic::Inspector/File[/etc/ironic-inspector/inspector.conf]/owner: owner changed 'root' to 'ironic-inspector'
2018-06-26 09:29:17,715 INFO: Notice: /Stage[main]/Ironic::Inspector/File[/etc/ironic-inspector/dnsmasq.conf]/content: content changed '{md5}9eabe6f969928fde6524d0dd00781479' to '{md5}1aa5d8d4ff7a17016e4f4afa2ac0f621'
2018-06-26 09:29:17,721 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[DEFAULT/listen_address]/ensure: created
2018-06-26 09:29:17,725 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[DEFAULT/auth_strategy]/ensure: created
2018-06-26 09:29:17,732 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[capabilities/boot_mode]/ensure: created
2018-06-26 09:29:17,736 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[iptables/dnsmasq_interface]/ensure: created
2018-06-26 09:29:17,741 INFO: Notice:
/Stage[main]/Ironic::Inspector/Ironic_inspector_config[processing/ramdisk_logs_dir]/ensure: created
2018-06-26 09:29:17,750 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[processing/keep_ports]/ensure: created
2018-06-26 09:29:17,754 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[processing/store_data]/ensure: created
2018-06-26 09:29:17,759 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[ironic/auth_type]/ensure: created
2018-06-26 09:29:17,764 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[ironic/username]/ensure: created
2018-06-26 09:29:17,768 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[ironic/password]/ensure: created
2018-06-26 09:29:17,773 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[ironic/project_name]/ensure: created
2018-06-26 09:29:17,777 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[ironic/project_domain_name]/ensure: created
2018-06-26 09:29:17,782 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[ironic/user_domain_name]/ensure: created
2018-06-26 09:29:17,787 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[ironic/auth_url]/ensure: created
2018-06-26 09:29:17,791 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[ironic/max_retries]/ensure: created
2018-06-26 09:29:17,796 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[ironic/retry_interval]/ensure: created
2018-06-26 09:29:17,800 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[swift/auth_type]/ensure: created
2018-06-26 09:29:17,805 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[swift/username]/ensure: created
2018-06-26 09:29:17,810 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[swift/password]/ensure: created
2018-06-26 09:29:17,814 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[swift/project_name]/ensure: created
2018-06-26 09:29:17,819 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[swift/project_domain_name]/ensure: created
2018-06-26 09:29:17,824 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[swift/user_domain_name]/ensure: created
2018-06-26 09:29:17,828 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[swift/auth_url]/ensure: created
2018-06-26 09:29:17,833 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[processing/processing_hooks]/ensure: created
2018-06-26 09:29:17,840 INFO: Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[discovery/enroll_node_driver]/ensure: created
2018-06-26 09:29:17,844 INFO: Notice: /Stage[main]/Ironic::Inspector::Pxe_filter/Ironic_inspector_config[pxe_filter/driver]/ensure: created
2018-06-26 09:29:17,851 INFO: Notice: /Stage[main]/Ironic::Inspector::Pxe_filter::Dnsmasq/Ironic_inspector_config[dnsmasq_pxe_filter/dhcp_hostsdir]/ensure: created
2018-06-26 09:29:17,870 INFO: Notice: /Stage[main]/Ironic::Pxe/File[/tftpboot]/ensure: created
2018-06-26 09:29:17,872 INFO: Notice: /Stage[main]/Ironic::Pxe/File[/tftpboot/pxelinux.cfg]/ensure: created
2018-06-26 09:29:17,874 INFO: Notice: /Stage[main]/Ironic::Inspector/File[/tftpboot/pxelinux.cfg/default]/ensure: defined content as '{md5}94e007e07d558c57d03f12a589d7500d'
2018-06-26 09:29:17,885 INFO: Notice: /Stage[main]/Ironic::Pxe/File[/httpboot]/ensure: created
2018-06-26 09:29:17,888 INFO: Notice: /Stage[main]/Ironic::Pxe/File[/tftpboot/map-file]/ensure: defined content as '{md5}1c4343c656b7f7b9de48495fdc2b6c5e'
2018-06-26 09:29:18,045 INFO: Notice: /Stage[main]/Ironic::Pxe/File[/tftpboot/undionly.kpxe]/ensure: defined content as '{md5}60d84c8e9035fac59c73ed4cee8dc82c'
2018-06-26 09:29:18,054
INFO: Notice: /Stage[main]/Ironic::Pxe/File[/tftpboot/ipxe.efi]/ensure: defined content as '{md5}8f49ea062dadf0290b5f8b7e5f42a9b9'
2018-06-26 09:29:18,069 INFO: Notice: /Stage[main]/Ironic::Service_catalog/Ironic_config[service_catalog/auth_type]/ensure: created
2018-06-26 09:29:18,084 INFO: Notice: /Stage[main]/Ironic::Service_catalog/Ironic_config[service_catalog/username]/ensure: created
2018-06-26 09:29:18,098 INFO: Notice: /Stage[main]/Ironic::Service_catalog/Ironic_config[service_catalog/password]/ensure: created
2018-06-26 09:29:18,112 INFO: Notice: /Stage[main]/Ironic::Service_catalog/Ironic_config[service_catalog/auth_url]/ensure: created
2018-06-26 09:29:18,127 INFO: Notice: /Stage[main]/Ironic::Service_catalog/Ironic_config[service_catalog/project_name]/ensure: created
2018-06-26 09:29:18,308 INFO: Notice: /Stage[main]/Ironic::Service_catalog/Ironic_config[service_catalog/user_domain_name]/ensure: created
2018-06-26 09:29:18,322 INFO: Notice: /Stage[main]/Ironic::Service_catalog/Ironic_config[service_catalog/project_domain_name]/ensure: created
2018-06-26 09:29:18,337 INFO: Notice: /Stage[main]/Ironic::Swift/Ironic_config[swift/auth_type]/ensure: created
2018-06-26 09:29:18,351 INFO: Notice: /Stage[main]/Ironic::Swift/Ironic_config[swift/username]/ensure: created
2018-06-26 09:29:18,365 INFO: Notice: /Stage[main]/Ironic::Swift/Ironic_config[swift/password]/ensure: created
2018-06-26 09:29:18,380 INFO: Notice: /Stage[main]/Ironic::Swift/Ironic_config[swift/auth_url]/ensure: created
2018-06-26 09:29:18,394 INFO: Notice: /Stage[main]/Ironic::Swift/Ironic_config[swift/project_name]/ensure: created
2018-06-26 09:29:18,408 INFO: Notice: /Stage[main]/Ironic::Swift/Ironic_config[swift/user_domain_name]/ensure: created
2018-06-26 09:29:18,423 INFO: Notice: /Stage[main]/Ironic::Swift/Ironic_config[swift/project_domain_name]/ensure: created
2018-06-26 09:29:35,805 INFO:
Notice: /Stage[main]/Main/Package[openstack-tempest]/ensure: created
2018-06-26 09:29:45,878 INFO: Notice: /Stage[main]/Main/Package[subunit-filters]/ensure: created
2018-06-26 09:29:45,898 INFO: Notice: /Stage[main]/Main/Group[docker]/ensure: created
2018-06-26 09:29:45,923 INFO: Notice: /Stage[main]/Main/User[docker_user]/groups: groups changed '' to ['docker']
2018-06-26 09:29:55,049 INFO: Notice: /Stage[main]/Tripleo::Profile::Base::Docker_registry/Package[docker-distribution]/ensure: created
2018-06-26 09:29:55,057 INFO: Notice: /Stage[main]/Tripleo::Profile::Base::Docker_registry/File[/etc/docker-distribution/registry/config.yml]/content: content changed '{md5}fcc7b86bd3a8b9b41577e3af434de461' to '{md5}86aaf4ac5f48d110f467162ebc2341ee'
2018-06-26 09:29:55,375 INFO: Notice: /Stage[main]/Tripleo::Profile::Base::Docker_registry/Service[docker-distribution]/ensure: ensure changed 'stopped' to 'running'
2018-06-26 09:30:05,678 INFO: Notice: /Stage[main]/Mistral/Package[mistral-common]/ensure: created
2018-06-26 09:30:16,845 INFO: Notice: /Stage[main]/Mistral::Api/Package[mistral-api]/ensure: created
2018-06-26 09:30:48,474 INFO: Notice: /Stage[main]/Mistral::Engine/Package[mistral-engine]/ensure: created
2018-06-26 09:30:58,180 INFO: Notice: /Stage[main]/Mistral::Executor/Package[mistral-executor]/ensure: created
2018-06-26 09:30:58,181 INFO: Notice: /Stage[main]/Mistral::Deps/Anchor[mistral::install::end]: Triggered 'refresh' from 4 events
2018-06-26 09:30:58,187 INFO: Notice: /Stage[main]/Mistral::Api/Mistral_config[api/api_workers]/ensure: created
2018-06-26 09:30:58,188 INFO: Notice: /Stage[main]/Mistral::Api/Mistral_config[api/host]/ensure: created
2018-06-26 09:30:58,191 INFO: Notice: /Stage[main]/Mistral::Engine/Mistral_config[engine/execution_field_size_limit_kb]/ensure: created
2018-06-26 09:30:58,192 INFO: Notice:
/Stage[main]/Mistral::Engine/Mistral_config[execution_expiration_policy/evaluation_interval]/ensure: created[0m >2018-06-26 09:30:58,193 INFO: [mNotice: /Stage[main]/Mistral::Engine/Mistral_config[execution_expiration_policy/older_than]/ensure: created[0m >2018-06-26 09:30:58,197 INFO: [mNotice: /Stage[main]/Mistral::Cron_trigger/Mistral_config[cron_trigger/execution_interval]/ensure: created[0m >2018-06-26 09:31:08,056 INFO: [mNotice: /Stage[main]/Tripleo::Ui/Package[openstack-tripleo-ui]/ensure: created[0m >2018-06-26 09:31:16,304 INFO: [mNotice: /Stage[main]/Tripleo::Profile::Base::Validations/Package[openstack-tripleo-validations]/ensure: created[0m >2018-06-26 09:31:16,389 INFO: [mNotice: /Stage[main]/Tripleo::Profile::Base::Validations/User[validations]/ensure: created[0m >2018-06-26 09:31:31,334 INFO: [mNotice: /Stage[main]/Zaqar/Package[zaqar-common]/ensure: created[0m >2018-06-26 09:31:31,336 INFO: [mNotice: /Stage[main]/Zaqar::Deps/Anchor[zaqar::install::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:31:31,346 INFO: [mNotice: /Stage[main]/Zaqar::Keystone::Trust/Zaqar_config[trustee/username]/ensure: created[0m >2018-06-26 09:31:31,360 INFO: [mNotice: /Stage[main]/Zaqar::Keystone::Trust/Zaqar_config[trustee/user_domain_name]/ensure: created[0m >2018-06-26 09:31:31,598 INFO: [mNotice: /Stage[main]/Zaqar::Keystone::Trust/Zaqar_config[trustee/auth_url]/ensure: created[0m >2018-06-26 09:31:31,612 INFO: [mNotice: /Stage[main]/Zaqar::Keystone::Trust/Zaqar_config[trustee/auth_type]/ensure: created[0m >2018-06-26 09:31:31,622 INFO: [mNotice: /Stage[main]/Zaqar/Zaqar_config[DEFAULT/auth_strategy]/ensure: created[0m >2018-06-26 09:31:31,636 INFO: [mNotice: /Stage[main]/Zaqar/Zaqar_config[DEFAULT/unreliable]/ensure: created[0m >2018-06-26 09:31:31,654 INFO: [mNotice: /Stage[main]/Zaqar/Zaqar_config[storage/message_pipeline]/ensure: created[0m >2018-06-26 09:31:31,672 INFO: [mNotice: /Stage[main]/Zaqar/Zaqar_config[transport/max_messages_post_size]/ensure: 
created[0m >2018-06-26 09:31:31,681 INFO: [mNotice: /Stage[main]/Zaqar/Zaqar_config[drivers/message_store]/ensure: created[0m >2018-06-26 09:31:31,690 INFO: [mNotice: /Stage[main]/Zaqar/Zaqar_config[drivers/management_store]/ensure: created[0m >2018-06-26 09:31:31,699 INFO: [mNotice: /Stage[main]/Zaqar::Management::Sqlalchemy/Zaqar_config[drivers:management_store:sqlalchemy/uri]/ensure: created[0m >2018-06-26 09:31:31,709 INFO: [mNotice: /Stage[main]/Zaqar::Messaging::Swift/Zaqar_config[drivers:message_store:swift/uri]/ensure: created[0m >2018-06-26 09:31:31,717 INFO: [mNotice: /Stage[main]/Zaqar::Messaging::Swift/Zaqar_config[drivers:message_store:swift/auth_url]/ensure: created[0m >2018-06-26 09:31:31,728 INFO: [mNotice: /Stage[main]/Zaqar::Transport::Websocket/Zaqar_config[drivers:transport:websocket/bind]/ensure: created[0m >2018-06-26 09:31:31,746 INFO: [mNotice: /Stage[main]/Zaqar::Transport::Websocket/Zaqar_config[drivers:transport:websocket/notification_bind]/ensure: created[0m >2018-06-26 09:31:40,775 INFO: [mNotice: /Stage[main]/Main/Package[firewalld]/ensure: purged[0m >2018-06-26 09:31:40,792 INFO: [mNotice: /Stage[main]/Main/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[net.ipv4.ip_forward]/ensure: created[0m >2018-06-26 09:31:57,286 INFO: [mNotice: /Stage[main]/Tripleo::Profile::Base::Docker/Package[docker]/ensure: created[0m >2018-06-26 09:31:57,290 INFO: [mNotice: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d]/ensure: created[0m >2018-06-26 09:31:57,294 INFO: [mNotice: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/ensure: defined content as '{md5}b984426de0b5978853686a649b64e4b8'[0m >2018-06-26 09:31:57,371 INFO: [mNotice: /Stage[main]/Tripleo::Profile::Base::Docker/Exec[systemd daemon-reload]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:31:57,512 INFO: [mNotice: 
/Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-options]/returns: executed successfully[0m >2018-06-26 09:31:57,568 INFO: [mNotice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-registry]/returns: executed successfully[0m >2018-06-26 09:31:57,588 INFO: [mNotice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]/returns: executed successfully[0m >2018-06-26 09:31:57,638 INFO: [mNotice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-storage]/returns: executed successfully[0m >2018-06-26 09:31:57,692 INFO: [mNotice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-network]/returns: executed successfully[0m >2018-06-26 09:31:59,951 INFO: [mNotice: /Stage[main]/Tripleo::Profile::Base::Docker/Service[docker]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:32:00,090 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created[0m >2018-06-26 09:32:00,422 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created[0m >2018-06-26 09:32:00,551 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[106 vrrp]/Firewall[106 vrrp ipv4]/ensure: created[0m >2018-06-26 09:32:00,878 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[106 vrrp]/Firewall[106 vrrp ipv6]/ensure: created[0m >2018-06-26 09:32:01,005 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/ensure: created[0m >2018-06-26 09:32:01,095 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/ensure: created[0m >2018-06-26 09:32:01,427 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[108 redis]/Firewall[108 redis ipv4]/ensure: created[0m >2018-06-26 09:32:01,520 INFO: [mNotice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[108 redis]/Firewall[108 redis ipv6]/ensure: created[0m >2018-06-26 09:32:01,845 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[110 ceph]/Firewall[110 ceph ipv4]/ensure: created[0m >2018-06-26 09:32:02,136 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[110 ceph]/Firewall[110 ceph ipv6]/ensure: created[0m >2018-06-26 09:32:02,287 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/ensure: created[0m >2018-06-26 09:32:02,587 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/ensure: created[0m >2018-06-26 09:32:02,738 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[112 glance]/Firewall[112 glance ipv4]/ensure: created[0m >2018-06-26 09:32:03,036 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[112 glance]/Firewall[112 glance ipv6]/ensure: created[0m >2018-06-26 09:32:03,375 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[113 nova]/Firewall[113 nova ipv4]/ensure: created[0m >2018-06-26 09:32:03,490 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[113 nova]/Firewall[113 nova ipv6]/ensure: created[0m >2018-06-26 09:32:03,846 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[114 neutron server]/Firewall[114 neutron server ipv4]/ensure: created[0m >2018-06-26 09:32:03,955 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[114 neutron server]/Firewall[114 neutron server ipv6]/ensure: created[0m >2018-06-26 09:32:04,290 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/ensure: created[0m >2018-06-26 09:32:04,601 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input 
ipv6]/ensure: created[0m >2018-06-26 09:32:04,968 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/ensure: created[0m >2018-06-26 09:32:05,096 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/ensure: created[0m >2018-06-26 09:32:05,432 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created[0m >2018-06-26 09:32:05,779 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created[0m >2018-06-26 09:32:06,129 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/ensure: created[0m >2018-06-26 09:32:06,261 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/ensure: created[0m >2018-06-26 09:32:06,617 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/ensure: created[0m >2018-06-26 09:32:06,961 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/ensure: created[0m >2018-06-26 09:32:07,339 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/ensure: created[0m >2018-06-26 09:32:07,716 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/ensure: created[0m >2018-06-26 09:32:08,066 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/ensure: created[0m >2018-06-26 09:32:08,451 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[123 swift 
storage]/Firewall[123 swift storage ipv4]/ensure: created[0m >2018-06-26 09:32:08,776 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/ensure: created[0m >2018-06-26 09:32:09,150 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[125 heat]/Firewall[125 heat ipv4]/ensure: created[0m >2018-06-26 09:32:09,500 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[125 heat]/Firewall[125 heat ipv6]/ensure: created[0m >2018-06-26 09:32:09,900 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[126 horizon]/Firewall[126 horizon ipv4]/ensure: created[0m >2018-06-26 09:32:10,244 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[126 horizon]/Firewall[126 horizon ipv6]/ensure: created[0m >2018-06-26 09:32:10,634 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[127 snmp]/Firewall[127 snmp ipv4]/ensure: created[0m >2018-06-26 09:32:10,978 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[127 snmp]/Firewall[127 snmp ipv6]/ensure: created[0m >2018-06-26 09:32:11,390 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[128 aodh]/Firewall[128 aodh ipv4]/ensure: created[0m >2018-06-26 09:32:11,718 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[128 aodh]/Firewall[128 aodh ipv6]/ensure: created[0m >2018-06-26 09:32:12,121 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/ensure: created[0m >2018-06-26 09:32:12,478 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/ensure: created[0m >2018-06-26 09:32:12,883 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[130 tftp]/Firewall[130 tftp ipv4]/ensure: created[0m >2018-06-26 09:32:13,251 INFO: [mNotice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[130 tftp]/Firewall[130 tftp ipv6]/ensure: created[0m >2018-06-26 09:32:13,648 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[131 novnc]/Firewall[131 novnc ipv4]/ensure: created[0m >2018-06-26 09:32:13,997 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[131 novnc]/Firewall[131 novnc ipv6]/ensure: created[0m >2018-06-26 09:32:14,595 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[132 mistral]/Firewall[132 mistral ipv4]/ensure: created[0m >2018-06-26 09:32:14,954 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[132 mistral]/Firewall[132 mistral ipv6]/ensure: created[0m >2018-06-26 09:32:15,375 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[133 zaqar]/Firewall[133 zaqar ipv4]/ensure: created[0m >2018-06-26 09:32:15,728 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[133 zaqar]/Firewall[133 zaqar ipv6]/ensure: created[0m >2018-06-26 09:32:16,329 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[134 zaqar websockets]/Firewall[134 zaqar websockets ipv4]/ensure: created[0m >2018-06-26 09:32:16,697 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[134 zaqar websockets]/Firewall[134 zaqar websockets ipv6]/ensure: created[0m >2018-06-26 09:32:17,102 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[135 ironic]/Firewall[135 ironic ipv4]/ensure: created[0m >2018-06-26 09:32:17,483 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[135 ironic]/Firewall[135 ironic ipv6]/ensure: created[0m >2018-06-26 09:32:18,090 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[136 trove]/Firewall[136 trove ipv4]/ensure: created[0m >2018-06-26 09:32:18,456 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[136 trove]/Firewall[136 trove ipv6]/ensure: created[0m >2018-06-26 
09:32:18,885 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[137 ironic-inspector]/Firewall[137 ironic-inspector ipv4]/ensure: created[0m >2018-06-26 09:32:19,268 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[137 ironic-inspector]/Firewall[137 ironic-inspector ipv6]/ensure: created[0m >2018-06-26 09:32:19,894 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[138 docker registry]/Firewall[138 docker registry ipv4]/ensure: created[0m >2018-06-26 09:32:20,286 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[138 docker registry]/Firewall[138 docker registry ipv6]/ensure: created[0m >2018-06-26 09:32:20,935 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[139 apache vhost]/Firewall[139 apache vhost ipv4]/ensure: created[0m >2018-06-26 09:32:21,324 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[139 apache vhost]/Firewall[139 apache vhost ipv6]/ensure: created[0m >2018-06-26 09:32:21,964 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[140 destination ctlplane-subnet cidr nat]/Firewall[140 destination ctlplane-subnet cidr nat ipv4]/ensure: created[0m >2018-06-26 09:32:22,404 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[140 source ctlplane-subnet cidr nat]/Firewall[140 source ctlplane-subnet cidr nat ipv4]/ensure: created[0m >2018-06-26 09:32:23,022 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[142 tripleo-ui]/Firewall[142 tripleo-ui ipv4]/ensure: created[0m >2018-06-26 09:32:23,416 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[142 tripleo-ui]/Firewall[142 tripleo-ui ipv6]/ensure: created[0m >2018-06-26 09:32:24,038 INFO: [mNotice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[143 panko-api]/Firewall[143 panko-api ipv4]/ensure: created[0m >2018-06-26 09:32:24,419 INFO: [mNotice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Rule[143 panko-api]/Firewall[143 panko-api ipv6]/ensure: created[0m >2018-06-26 09:32:25,075 INFO: [mNotice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created[0m >2018-06-26 09:32:25,482 INFO: [mNotice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created[0m >2018-06-26 09:32:26,170 INFO: [mNotice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created[0m >2018-06-26 09:32:26,776 INFO: [mNotice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created[0m >2018-06-26 09:32:27,244 INFO: [mNotice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created[0m >2018-06-26 09:32:27,844 INFO: [mNotice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created[0m >2018-06-26 09:32:28,524 INFO: [mNotice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created[0m >2018-06-26 09:32:28,931 INFO: [mNotice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created[0m >2018-06-26 09:32:29,332 INFO: [mNotice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created[0m >2018-06-26 09:32:29,376 INFO: [mNotice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created[0m >2018-06-26 
09:32:29,390 INFO: [mNotice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl_runtime[net.ipv4.ip_nonlocal_bind]/val: val changed '0' to '1'[0m >2018-06-26 09:32:29,392 INFO: [mNotice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created[0m >2018-06-26 09:32:29,401 INFO: [mNotice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl_runtime[net.ipv6.ip_nonlocal_bind]/val: val changed '0' to '1'[0m >2018-06-26 09:32:29,420 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Db::Mysql/Openstacklib::Db::Mysql[ironic-inspector]/Mysql_database[ironic-inspector]/ensure: created[0m >2018-06-26 09:32:29,688 INFO: [mNotice: /Stage[main]/Glance::Policy/Oslo::Policy[glance_api_config]/Glance_api_config[oslo_policy/policy_file]/ensure: created[0m >2018-06-26 09:32:29,715 INFO: [mNotice: /Stage[main]/Glance::Policy/Oslo::Policy[glance_registry_config]/Glance_registry_config[oslo_policy/policy_file]/ensure: created[0m >2018-06-26 09:32:29,756 INFO: [mNotice: /Stage[main]/Glance::Api::Db/Oslo::Db[glance_api_config]/Glance_api_config[database/connection]/ensure: created[0m >2018-06-26 09:32:30,087 INFO: [mNotice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/debug]/ensure: created[0m >2018-06-26 09:32:30,120 INFO: [mNotice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_file]/ensure: created[0m >2018-06-26 09:32:30,138 INFO: [mNotice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_dir]/ensure: created[0m >2018-06-26 09:32:30,275 INFO: [mNotice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/debug]/ensure: created[0m >2018-06-26 09:32:30,293 INFO: [mNotice: 
/Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_file]/ensure: created[0m >2018-06-26 09:32:30,303 INFO: [mNotice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_dir]/ensure: created[0m >2018-06-26 09:32:30,548 INFO: [mNotice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_uri]/ensure: created[0m >2018-06-26 09:32:30,564 INFO: [mNotice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_type]/ensure: created[0m >2018-06-26 09:32:30,957 INFO: [mNotice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_url]/ensure: created[0m >2018-06-26 09:32:30,975 INFO: [mNotice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/username]/ensure: created[0m >2018-06-26 09:32:30,993 INFO: [mNotice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/password]/ensure: created[0m >2018-06-26 09:32:31,010 INFO: [mNotice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/user_domain_name]/ensure: created[0m >2018-06-26 09:32:31,028 INFO: [mNotice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_name]/ensure: created[0m >2018-06-26 09:32:31,046 INFO: [mNotice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_domain_name]/ensure: created[0m >2018-06-26 09:32:31,082 INFO: [mNotice: 
/Stage[main]/Glance::Api/Oslo::Middleware[glance_api_config]/Glance_api_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created[0m >2018-06-26 09:32:31,367 INFO: [mNotice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Rabbit[glance_api_config]/Glance_api_config[oslo_messaging_rabbit/rabbit_password]/ensure: created[0m >2018-06-26 09:32:31,423 INFO: [mNotice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Rabbit[glance_api_config]/Glance_api_config[oslo_messaging_rabbit/rabbit_userid]/ensure: created[0m >2018-06-26 09:32:31,479 INFO: [mNotice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Rabbit[glance_api_config]/Glance_api_config[oslo_messaging_rabbit/rabbit_host]/ensure: created[0m >2018-06-26 09:32:31,591 INFO: [mNotice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Rabbit[glance_registry_config]/Glance_registry_config[oslo_messaging_rabbit/rabbit_password]/ensure: created[0m >2018-06-26 09:32:31,622 INFO: [mNotice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Rabbit[glance_registry_config]/Glance_registry_config[oslo_messaging_rabbit/rabbit_userid]/ensure: created[0m >2018-06-26 09:32:31,890 INFO: [mNotice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Rabbit[glance_registry_config]/Glance_registry_config[oslo_messaging_rabbit/rabbit_host]/ensure: created[0m >2018-06-26 09:32:32,005 INFO: [mNotice: /Stage[main]/Glance::Deps/Anchor[glance::config::end]: Triggered 'refresh' from 47 events[0m >2018-06-26 09:32:32,023 INFO: [mNotice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]/ensure: created[0m >2018-06-26 09:32:41,125 INFO: [mNotice: /Stage[main]/Nova::Api/Nova::Generic_service[api]/Package[nova-api]/ensure: created[0m >2018-06-26 09:32:49,779 INFO: [mNotice: /Stage[main]/Nova::Wsgi::Apache_placement/Nova::Generic_service[placement-api]/Package[nova-placement-api]/ensure: created[0m >2018-06-26 09:32:49,781 INFO: [mNotice: 
/Stage[main]/Keystone::Deps/Anchor[keystone::install::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:32:49,795 INFO: [mNotice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_token]/ensure: created[0m >2018-06-26 09:32:49,807 INFO: [mNotice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_bind_host]/ensure: created[0m >2018-06-26 09:32:49,819 INFO: [mNotice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_bind_host]/ensure: created[0m >2018-06-26 09:32:49,830 INFO: [mNotice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_port]/ensure: created[0m >2018-06-26 09:32:49,842 INFO: [mNotice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_port]/ensure: created[0m >2018-06-26 09:32:49,871 INFO: [mNotice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_endpoint]/ensure: created[0m >2018-06-26 09:32:50,093 INFO: [mNotice: /Stage[main]/Keystone/Keystone_config[token/driver]/ensure: created[0m >2018-06-26 09:32:50,104 INFO: [mNotice: /Stage[main]/Keystone/Keystone_config[token/expiration]/ensure: created[0m >2018-06-26 09:32:50,126 INFO: [mNotice: /Stage[main]/Keystone/Keystone_config[ssl/enable]/ensure: created[0m >2018-06-26 09:32:50,170 INFO: [mNotice: /Stage[main]/Keystone/Keystone_config[catalog/driver]/ensure: created[0m >2018-06-26 09:32:50,181 INFO: [mNotice: /Stage[main]/Keystone/Keystone_config[catalog/template_file]/ensure: created[0m >2018-06-26 09:32:50,226 INFO: [mNotice: /Stage[main]/Keystone/Keystone_config[token/provider]/ensure: created[0m >2018-06-26 09:32:50,249 INFO: [mNotice: /Stage[main]/Keystone/Keystone_config[eventlet_server/admin_workers]/ensure: created[0m >2018-06-26 09:32:50,260 INFO: [mNotice: /Stage[main]/Keystone/Keystone_config[eventlet_server/public_workers]/ensure: created[0m >2018-06-26 09:32:50,265 INFO: [mNotice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys]/ensure: created[0m >2018-06-26 09:32:50,267 INFO: [mNotice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys]/ensure: 
created[0m >2018-06-26 09:32:50,278 INFO: [mNotice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/key_repository]/ensure: created[0m >2018-06-26 09:32:50,289 INFO: [mNotice: /Stage[main]/Keystone/Keystone_config[token/revoke_by_id]/ensure: created[0m >2018-06-26 09:32:50,301 INFO: [mNotice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/max_active_keys]/ensure: created[0m >2018-06-26 09:32:50,313 INFO: [mNotice: /Stage[main]/Keystone/Keystone_config[credential/key_repository]/ensure: created[0m >2018-06-26 09:32:50,320 INFO: [mNotice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{md5}9da85e58f3bd6c780ce76db603b7f028'[0m >2018-06-26 09:32:50,323 INFO: [mNotice: /Stage[main]/Apache::Mod::Mime_magic/File[mime_magic.conf]/ensure: defined content as '{md5}b258529b332429e2ff8344f726a95457'[0m >2018-06-26 09:32:50,545 INFO: [mNotice: /Stage[main]/Apache::Mod::Alias/File[alias.conf]/ensure: defined content as '{md5}983e865be85f5e0daaed7433db82995e'[0m >2018-06-26 09:32:50,548 INFO: [mNotice: /Stage[main]/Apache::Mod::Autoindex/File[autoindex.conf]/ensure: defined content as '{md5}2421a3c6df32c7e38c2a7a22afdf5728'[0m >2018-06-26 09:32:50,551 INFO: [mNotice: /Stage[main]/Apache::Mod::Deflate/File[deflate.conf]/ensure: defined content as '{md5}a045d750d819b1e9dae3fbfb3f20edd5'[0m >2018-06-26 09:32:50,553 INFO: [mNotice: /Stage[main]/Apache::Mod::Dir/File[dir.conf]/ensure: defined content as '{md5}c741d8ea840e6eb999d739eed47c69d7'[0m >2018-06-26 09:32:50,556 INFO: [mNotice: /Stage[main]/Apache::Mod::Negotiation/File[negotiation.conf]/ensure: defined content as '{md5}47284b5580b986a6ba32580b6ffb9fd7'[0m >2018-06-26 09:32:50,558 INFO: [mNotice: /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{md5}c7ede4173da1915b7ec088201f030c28'[0m >2018-06-26 09:32:50,561 INFO: [mNotice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as 
'{md5}f58b0483b70b4e73b5f67ff37b8f24a0'[0m >2018-06-26 09:32:50,565 INFO: [mNotice: /Stage[main]/Keystone::Wsgi::Apache/File[/var/www/cgi-bin/keystone]/ensure: created[0m >2018-06-26 09:32:50,569 INFO: [mNotice: /Stage[main]/Keystone::Wsgi::Apache/File[keystone_wsgi_admin]/ensure: defined content as '{md5}d6dda52b0e14d80a652ecf42686d3962'[0m >2018-06-26 09:32:50,572 INFO: [mNotice: /Stage[main]/Keystone::Wsgi::Apache/File[keystone_wsgi_main]/ensure: defined content as '{md5}072422f0d75777ed1783e6910b3ddc58'[0m >2018-06-26 09:32:50,574 INFO: [mNotice: /Stage[main]/Keystone::Cron::Token_flush/Cron[keystone-manage token_flush]/ensure: created[0m >2018-06-26 09:32:50,601 INFO: [mNotice: /Stage[main]/Main/Keystone_config[ec2/driver]/ensure: created[0m >2018-06-26 09:32:50,604 INFO: [mNotice: /Stage[main]/Apache::Mod::Proxy/File[proxy.conf]/ensure: defined content as '{md5}9eab682d8c4c89abd0ff20c1a60b908d'[0m >2018-06-26 09:32:50,617 INFO: [mNotice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/debug]/ensure: created[0m >2018-06-26 09:32:50,646 INFO: [mNotice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/log_dir]/ensure: created[0m >2018-06-26 09:32:51,020 INFO: [mNotice: /Stage[main]/Keystone::Policy/Oslo::Policy[keystone_config]/Keystone_config[oslo_policy/policy_file]/ensure: created[0m >2018-06-26 09:32:51,056 INFO: [mNotice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/connection]/ensure: created[0m >2018-06-26 09:32:51,473 INFO: [mNotice: /Stage[main]/Keystone/Oslo::Middleware[keystone_config]/Keystone_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created[0m >2018-06-26 09:32:51,502 INFO: [mNotice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/driver]/ensure: created[0m >2018-06-26 09:32:51,518 INFO: [mNotice: 
/Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/topics]/ensure: created
>2018-06-26 09:32:51,582 INFO: Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/rabbit_password]/ensure: created
>2018-06-26 09:32:51,618 INFO: Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/rabbit_userid]/ensure: created
>2018-06-26 09:32:51,653 INFO: Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/rabbit_host]/ensure: created
>2018-06-26 09:32:51,891 INFO: Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}e7b47d50ab7af8c85e274e2d23d1387d'
>2018-06-26 09:32:51,904 INFO: Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{md5}f5e7449c0f17bc856e86011cb5d152ba' to '{md5}d2f45a388bf8b6e9c58ef3ef0b9dc011'
>2018-06-26 09:32:51,907 INFO: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{md5}785d35cb285e190d589163b45263ca89'
>2018-06-26 09:32:51,910 INFO: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{md5}26e5d44aae258b3e9d821cbbbd3e2826'
>2018-06-26 09:32:51,913 INFO: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{md5}0e8468ecc1265f8947b8725f4d1be9c0'
>2018-06-26 09:32:51,916 INFO: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{md5}d1045f54d2798499ca0f030ca0eef920'
>2018-06-26 09:32:51,919 INFO: Notice: /Stage[main]/Apache::Mod::Actions/Apache::Mod[actions]/File[actions.load]/ensure: defined content as '{md5}599866dfaf734f60f7e2d41ee8235515'
>2018-06-26 09:32:51,923 INFO: Notice: /Stage[main]/Apache::Mod::Authn_core/Apache::Mod[authn_core]/File[authn_core.load]/ensure: defined content as '{md5}704d6e8b02b0eca0eba4083960d16c52'
>2018-06-26 09:32:51,926 INFO: Notice: /Stage[main]/Apache::Mod::Cache/Apache::Mod[cache]/File[cache.load]/ensure: defined content as '{md5}01e4d392225b518a65b0f7d6c4e21d29'
>2018-06-26 09:32:51,929 INFO: Notice: /Stage[main]/Apache::Mod::Ext_filter/Apache::Mod[ext_filter]/File[ext_filter.load]/ensure: defined content as '{md5}76d5e0ac3411a4be57ac33ebe2e52ac8'
>2018-06-26 09:32:51,932 INFO: Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{md5}e36257b9efab01459141d423cae57c7c'
>2018-06-26 09:32:51,935 INFO: Notice: /Stage[main]/Apache::Mod::Mime_magic/Apache::Mod[mime_magic]/File[mime_magic.load]/ensure: defined content as '{md5}cb8670bb2fb352aac7ebf3a85d52094c'
>2018-06-26 09:32:51,939 INFO: Notice: /Stage[main]/Apache::Mod::Rewrite/Apache::Mod[rewrite]/File[rewrite.load]/ensure: defined content as '{md5}26e2683352fc1599f29573ff0d934e79'
>2018-06-26 09:32:51,942 INFO: Notice: /Stage[main]/Apache::Mod::Speling/Apache::Mod[speling]/File[speling.load]/ensure: defined content as '{md5}f82e9e6b871a276c324c9eeffcec8a61'
>2018-06-26 09:32:51,946 INFO: Notice: /Stage[main]/Apache::Mod::Suexec/Apache::Mod[suexec]/File[suexec.load]/ensure: defined content as '{md5}c7d5c61c534ba423a79b0ae78ff9be35'
>2018-06-26 09:32:51,949 INFO: Notice: /Stage[main]/Apache::Mod::Version/Apache::Mod[version]/File[version.load]/ensure: defined content as '{md5}1c9243de22ace4dc8266442c48ae0c92'
>2018-06-26 09:32:51,952 INFO: Notice: /Stage[main]/Apache::Mod::Vhost_alias/Apache::Mod[vhost_alias]/File[vhost_alias.load]/ensure: defined content as '{md5}eca907865997d50d5130497665c3f82e'
>2018-06-26 09:32:51,956 INFO: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_digest]/File[auth_digest.load]/ensure: defined content as '{md5}df9e85f8da0b239fe8e698ae7ead4f60'
>2018-06-26 09:32:51,959 INFO: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_anon]/File[authn_anon.load]/ensure: defined content as '{md5}bf57b94b5aec35476fc2a2dc3861f132'
>2018-06-26 09:32:51,962 INFO: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_dbm]/File[authn_dbm.load]/ensure: defined content as '{md5}90ee8f8ef1a017cacadfda4225e10651'
>2018-06-26 09:32:51,965 INFO: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_dbm]/File[authz_dbm.load]/ensure: defined content as '{md5}c1363277984d22f99b70f7dce8753b60'
>2018-06-26 09:32:51,968 INFO: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_owner]/File[authz_owner.load]/ensure: defined content as '{md5}f30a9be1016df87f195449d9e02d1857'
>2018-06-26 09:32:51,971 INFO: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[expires]/File[expires.load]/ensure: defined content as '{md5}f0825bad1e470de86ffabeb86dcc5d95'
>2018-06-26 09:32:51,975 INFO: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[include]/File[include.load]/ensure: defined content as '{md5}88095a914eedc3c2c184dd5d74c3954c'
>2018-06-26 09:32:51,978 INFO: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[logio]/File[logio.load]/ensure: defined content as '{md5}084533c7a44e9129d0e6df952e2472b6'
>2018-06-26 09:32:51,981 INFO: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[substitute]/File[substitute.load]/ensure: defined content as '{md5}8077c34a71afcf41c8fc644830935915'
>2018-06-26 09:32:51,984 INFO: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[usertrack]/File[usertrack.load]/ensure: defined content as '{md5}e95fbbf030fabec98b948f8dc217775c'
>2018-06-26 09:32:51,987 INFO: Notice: /Stage[main]/Apache::Mod::Alias/Apache::Mod[alias]/File[alias.load]/ensure: defined content as '{md5}3cf2fa309ccae4c29a4b875d0894cd79'
>2018-06-26 09:32:52,186 INFO: Notice: /Stage[main]/Apache::Mod::Authn_file/Apache::Mod[authn_file]/File[authn_file.load]/ensure: defined content as '{md5}d41656680003d7b890267bb73621c60b'
>2018-06-26 09:32:52,189 INFO: Notice: /Stage[main]/Apache::Mod::Autoindex/Apache::Mod[autoindex]/File[autoindex.load]/ensure: defined content as '{md5}515cdf5b573e961a60d2931d39248648'
>2018-06-26 09:32:52,193 INFO: Notice: /Stage[main]/Apache::Mod::Dav/Apache::Mod[dav]/File[dav.load]/ensure: defined content as '{md5}588e496251838c4840c14b28b5aa7881'
>2018-06-26 09:32:52,196 INFO: Notice: /Stage[main]/Apache::Mod::Dav_fs/File[dav_fs.conf]/ensure: defined content as '{md5}899a57534f3d84efa81887ec93c90c9b'
>2018-06-26 09:32:52,199 INFO: Notice: /Stage[main]/Apache::Mod::Dav_fs/Apache::Mod[dav_fs]/File[dav_fs.load]/ensure: defined content as '{md5}2996277c73b1cd684a9a3111c355e0d3'
>2018-06-26 09:32:52,202 INFO: Notice: /Stage[main]/Apache::Mod::Deflate/Apache::Mod[deflate]/File[deflate.load]/ensure: defined content as '{md5}2d1a1afcae0c70557251829a8586eeaf'
>2018-06-26 09:32:52,205 INFO: Notice: /Stage[main]/Apache::Mod::Dir/Apache::Mod[dir]/File[dir.load]/ensure: defined content as '{md5}1bfb1c2a46d7351fc9eb47c659dee068'
>2018-06-26 09:32:52,209 INFO: Notice: /Stage[main]/Apache::Mod::Negotiation/Apache::Mod[negotiation]/File[negotiation.load]/ensure: defined content as '{md5}d262ee6a5f20d9dd7f87770638dc2ccd'
>2018-06-26 09:32:52,212 INFO: Notice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as '{md5}ec6c99f7cc8e35bdbcf8028f652c9f6d'
>2018-06-26 09:32:52,215 INFO: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_basic]/File[auth_basic.load]/ensure: defined content as '{md5}494bcf4b843f7908675d663d8dc1bdc8'
>2018-06-26 09:32:52,218 INFO: Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as '{md5}66a1e2064a140c3e7dca7ac33877700e'
>2018-06-26 09:32:52,221 INFO: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{md5}39942569bff2abdb259f9a347c7246bc'
>2018-06-26 09:32:52,224 INFO: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[access_compat]/File[access_compat.load]/ensure: defined content as '{md5}d5feb88bec4570e2dbc41cce7e0de003'
>2018-06-26 09:32:52,227 INFO: Notice: /Stage[main]/Apache::Mod::Authz_user/Apache::Mod[authz_user]/File[authz_user.load]/ensure: defined content as '{md5}63594303ee808423679b1ea13dd5a784'
>2018-06-26 09:32:52,230 INFO: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_groupfile]/File[authz_groupfile.load]/ensure: defined content as '{md5}ae005a36b3ac8c20af36c434561c8a75'
>2018-06-26 09:32:52,234 INFO: Notice: /Stage[main]/Apache::Mod::Env/Apache::Mod[env]/File[env.load]/ensure: defined content as '{md5}d74184d40d0ee24ba02626a188ee7e1a'
>2018-06-26 09:32:52,237 INFO: Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{md5}157529aafcf03fa491bc924103e4608e'
>2018-06-26 09:32:52,241 INFO: Notice: /Stage[main]/Apache::Mod::Cgi/Apache::Mod[cgi]/File[cgi.load]/ensure: defined content as '{md5}ac20c5c5779b37ab06b480d6485a0881'
>2018-06-26 09:32:52,276 INFO: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed
>2018-06-26 09:32:52,277 INFO: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed
>2018-06-26 09:32:52,284 INFO: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/lookup_identity.conf]/ensure: removed
>2018-06-26 09:32:52,285 INFO: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/nss.conf]/ensure: removed
>2018-06-26 09:32:52,287 INFO: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed
>2018-06-26 09:32:52,289 INFO: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed
>2018-06-26 09:32:52,294 INFO: Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{md5}8b3feb3fc2563de439920bb2c52cbd11'
>2018-06-26 09:32:52,297 INFO: Notice: /Stage[main]/Nova::Wsgi::Apache_placement/File[/etc/httpd/conf.d/00-nova-placement-api.conf]/content: content changed '{md5}611e31d39e1635bfabc0aafc51b43d0b' to '{md5}612d455490cfecc4b51db6656ea39240'
>2018-06-26 09:32:52,301 INFO: Notice: /Stage[main]/Tripleo::Ui/File[/etc/httpd/conf.d/openstack-tripleo-ui.conf]/content: content changed '{md5}0bb5ccf9a90544699ec07adf8028d99a' to '{md5}ec9dfa67b5507ef6f7a8bba6345bc07d'
>2018-06-26 09:32:52,500 INFO: Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{md5}e1795e051e7aae1f865fde0d3b86a507'
>2018-06-26 09:32:52,515 INFO: Notice: /Stage[main]/Keystone::Cors/Oslo::Cors[keystone_config]/Keystone_config[cors/allowed_origin]/ensure: created
>2018-06-26 09:32:52,547 INFO: Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created
>2018-06-26 09:32:52,550 INFO: Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[nova_api_wsgi]/ensure: defined content as '{md5}8bcfb466d72544dd31a4f339243ed669'
>2018-06-26 09:32:52,554 INFO: Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/File[placement_wsgi]/ensure: defined content as '{md5}2c992c50344eb1765282cb9fb70126db'
>2018-06-26 09:33:01,389 INFO: Notice: /Stage[main]/Nova::Conductor/Nova::Generic_service[conductor]/Package[nova-conductor]/ensure: created
>2018-06-26 09:33:09,893 INFO: Notice: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Package[nova-scheduler]/ensure: created
>2018-06-26 09:33:19,572 INFO: Notice: /Stage[main]/Nova::Compute/Nova::Generic_service[compute]/Package[nova-compute]/ensure: created
>2018-06-26 09:33:19,573 INFO: Notice: /Stage[main]/Nova::Deps/Anchor[nova::install::end]: Triggered 'refresh' from 7 events
>2018-06-26 09:33:19,617 INFO: Notice: /Stage[main]/Nova::Db/Nova_config[api_database/connection]/ensure: created
>2018-06-26 09:33:19,652 INFO: Notice: /Stage[main]/Nova::Db/Nova_config[placement_database/connection]/ensure: created
>2018-06-26 09:33:19,894 INFO: Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created
>2018-06-26 09:33:19,929 INFO: Notice: /Stage[main]/Nova/Nova_config[api/auth_strategy]/ensure: created
>2018-06-26 09:33:19,952 INFO: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/image_service]/ensure: created
>2018-06-26 09:33:20,005 INFO: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created
>2018-06-26 09:33:20,330 INFO: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_api_faults]/ensure: created
>2018-06-26 09:33:20,369 INFO: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created
>2018-06-26 09:33:20,580 INFO: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created
>2018-06-26 09:33:20,601 INFO: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created
>2018-06-26 09:33:20,623 INFO: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created
>2018-06-26 09:33:20,674 INFO: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created
>2018-06-26 09:33:21,110 INFO: Notice: /Stage[main]/Nova::Api/Nova_config[wsgi/api_paste_config]/ensure: created
>2018-06-26 09:33:21,354 INFO: Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/enabled_apis]/ensure: created
>2018-06-26 09:33:21,380 INFO: Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen]/ensure: created
>2018-06-26 09:33:21,407 INFO: Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen]/ensure: created
>2018-06-26 09:33:21,434 INFO: Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen_port]/ensure: created
>2018-06-26 09:33:21,461 INFO: Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen_port]/ensure: created
>2018-06-26 09:33:21,488 INFO: Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_volume_listen]/ensure: created
>2018-06-26 09:33:21,515 INFO: Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_workers]/ensure: created
>2018-06-26 09:33:21,541 INFO: Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_workers]/ensure: created
>2018-06-26 09:33:21,836 INFO: Notice: /Stage[main]/Nova::Api/Nova_config[api/use_forwarded_for]/ensure: created
>2018-06-26 09:33:21,858 INFO: Notice: /Stage[main]/Nova::Api/Nova_config[api/fping_path]/ensure: created
>2018-06-26 09:33:22,660 INFO: Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created
>2018-06-26 09:33:22,699 INFO: Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created
>2018-06-26 09:33:22,946 INFO: Notice: /Stage[main]/Nova::Api/Nova_config[neutron/service_metadata_proxy]/ensure: created
>2018-06-26 09:33:23,015 INFO: Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created
>2018-06-26 09:33:23,038 INFO: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created
>2018-06-26 09:33:23,061 INFO: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created
>2018-06-26 09:33:23,299 INFO: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created
>2018-06-26 09:33:23,321 INFO: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created
>2018-06-26 09:33:23,343 INFO: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created
>2018-06-26 09:33:23,365 INFO: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created
>2018-06-26 09:33:23,387 INFO: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created
>2018-06-26 09:33:23,410 INFO: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/os_region_name]/ensure: created
>2018-06-26 09:33:23,664 INFO: Notice: /Stage[main]/Nova::Conductor/Nova_config[conductor/workers]/ensure: created
>2018-06-26 09:33:23,701 INFO: Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/driver]/ensure: created
>2018-06-26 09:33:23,738 INFO: Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[scheduler/host_manager]/ensure: created
>2018-06-26 09:33:23,759 INFO: Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[scheduler/max_attempts]/ensure: created
>2018-06-26 09:33:23,797 INFO: Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/host_subset_size]/ensure: created
>2018-06-26 09:33:24,035 INFO: Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_io_ops_per_host]/ensure: created
>2018-06-26 09:33:24,057 INFO: Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_instances_per_host]/ensure: created
>2018-06-26 09:33:24,094 INFO: Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/available_filters]/ensure: created
>2018-06-26 09:33:24,115 INFO: Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/weight_classes]/ensure: created
>2018-06-26 09:33:24,135 INFO: Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/use_baremetal_filters]/ensure: created
>2018-06-26 09:33:24,157 INFO: Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/enabled_filters]/ensure: created
>2018-06-26 09:33:24,852 INFO: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created
>2018-06-26 09:33:24,882 INFO: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/heal_instance_info_cache_interval]/ensure: created
>2018-06-26 09:33:25,218 INFO: Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created
>2018-06-26 09:33:25,507 INFO: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created
>2018-06-26 09:33:25,559 INFO: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created
>2018-06-26 09:33:25,582 INFO: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_config_drive]/ensure: created
>2018-06-26 09:33:25,604 INFO: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created
>2018-06-26 09:33:25,626 INFO: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created
>2018-06-26 09:33:25,855 INFO: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created
>2018-06-26 09:33:25,915 INFO: Notice: /Stage[main]/Main/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created
>2018-06-26 09:33:25,937 INFO: Notice: /Stage[main]/Nova::Ironic::Common/Nova_config[ironic/auth_plugin]/ensure: created
>2018-06-26 09:33:25,958 INFO: Notice: /Stage[main]/Nova::Ironic::Common/Nova_config[ironic/username]/ensure: created
>2018-06-26 09:33:25,981 INFO: Notice: /Stage[main]/Nova::Ironic::Common/Nova_config[ironic/password]/ensure: created
>2018-06-26 09:33:26,003 INFO: Notice: /Stage[main]/Nova::Ironic::Common/Nova_config[ironic/auth_url]/ensure: created
>2018-06-26 09:33:26,235 INFO: Notice: /Stage[main]/Nova::Ironic::Common/Nova_config[ironic/project_name]/ensure: created
>2018-06-26 09:33:26,255 INFO: Notice: /Stage[main]/Nova::Ironic::Common/Nova_config[ironic/api_endpoint]/ensure: created
>2018-06-26 09:33:26,305 INFO: Notice: /Stage[main]/Nova::Ironic::Common/Nova_config[ironic/user_domain_name]/ensure: created
>2018-06-26 09:33:26,328 INFO: Notice: /Stage[main]/Nova::Ironic::Common/Nova_config[ironic/project_domain_name]/ensure: created
>2018-06-26 09:33:26,353 INFO: Notice: /Stage[main]/Nova::Compute::Ironic/Nova_config[DEFAULT/compute_driver]/ensure: created
>2018-06-26 09:33:26,626 INFO: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created
>2018-06-26 09:33:26,652 INFO: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created
>2018-06-26 09:33:26,677 INFO: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created
>2018-06-26 09:33:26,701 INFO: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created
>2018-06-26 09:33:26,722 INFO: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created
>2018-06-26 09:33:26,745 INFO: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created
>2018-06-26 09:33:26,768 INFO: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created
>2018-06-26 09:33:27,027 INFO: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created
>2018-06-26 09:33:27,049 INFO: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created
>2018-06-26 09:33:27,070 INFO: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created
>2018-06-26 09:33:27,091 INFO: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created
>2018-06-26 09:33:27,112 INFO: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created
>2018-06-26 09:33:27,133 INFO: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created
>2018-06-26 09:33:27,154 INFO: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created
>2018-06-26 09:33:27,394 INFO: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created
>2018-06-26 09:33:27,417 INFO: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created
>2018-06-26 09:33:27,440 INFO: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created
>2018-06-26 09:33:28,243 INFO: Notice: /Stage[main]/Main/Augeas[lvm.conf]/returns: executed successfully
>2018-06-26 09:33:28,298 INFO: Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created
>2018-06-26 09:33:29,072 INFO: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created
>2018-06-26 09:33:29,413 INFO: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created
>2018-06-26 09:33:32,779 INFO: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/rpc_response_timeout]/ensure: created
>2018-06-26 09:33:32,805 INFO: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created
>2018-06-26 09:33:32,848 INFO: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created
>2018-06-26 09:33:33,167 INFO: Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created
>2018-06-26 09:33:33,189 INFO: Notice: /Stage[main]/Nova::Policy/Oslo::Policy[nova_config]/Nova_config[oslo_policy/policy_file]/ensure: created
>2018-06-26 09:33:33,520 INFO: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_uri]/ensure: created
>2018-06-26 09:33:33,543 INFO: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_type]/ensure: created
>2018-06-26 09:33:34,754 INFO: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_url]/ensure: created
>2018-06-26 09:33:34,777 INFO: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/username]/ensure: created
>2018-06-26 09:33:34,800 INFO: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/password]/ensure: created
>2018-06-26 09:33:34,823 INFO: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/user_domain_name]/ensure: created
>2018-06-26 09:33:34,848 INFO: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_name]/ensure: created
>2018-06-26 09:33:34,872 INFO: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_domain_name]/ensure: created
>2018-06-26 09:33:35,188 INFO: Notice: /Stage[main]/Nova::Api/Oslo::Middleware[nova_config]/Nova_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created
>2018-06-26 09:33:35,189 INFO: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,189 INFO: Warning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]: Skipping because of failed dependencies
>2018-06-26 09:33:35,190 INFO: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_config_append]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,190 INFO: Warning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_config_append]: Skipping because of failed dependencies
>2018-06-26 09:33:35,190 INFO: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_date_format]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,190 INFO: Warning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_date_format]: Skipping because of failed dependencies
>2018-06-26 09:33:35,190 INFO: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_file]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,190 INFO: Warning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_file]: Skipping because of failed dependencies
>2018-06-26 09:33:35,191 INFO: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,191 INFO: Warning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]: Skipping because of failed dependencies
>2018-06-26 09:33:35,191 INFO: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/watch_log_file]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,191 INFO: Warning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/watch_log_file]: Skipping because of failed dependencies
>2018-06-26 09:33:35,191 INFO: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/use_syslog]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,191 INFO: Warning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/use_syslog]: Skipping because of failed dependencies
>2018-06-26 09:33:35,192 INFO: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/use_journal]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,192 INFO: Warning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/use_journal]: Skipping because of failed dependencies
>2018-06-26 09:33:35,192 INFO: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/use_json]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,192 INFO: Warning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/use_json]: Skipping because of failed dependencies
>2018-06-26 09:33:35,192 INFO: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/syslog_log_facility]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,192 INFO: Warning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/syslog_log_facility]: Skipping because of failed dependencies
>2018-06-26 09:33:35,193 INFO: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/use_stderr]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,193 INFO: Warning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/use_stderr]: Skipping because of failed dependencies
>2018-06-26 09:33:35,193 INFO: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/logging_context_format_string]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,193 INFO: Warning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/logging_context_format_string]: Skipping because of failed dependencies
>2018-06-26 09:33:35,193 INFO: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/logging_default_format_string]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,194 INFO: Warning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/logging_default_format_string]: Skipping because of failed dependencies
>2018-06-26 09:33:35,194 INFO: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/logging_debug_format_suffix]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,194 INFO: Warning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/logging_debug_format_suffix]: Skipping because of failed dependencies
>2018-06-26 09:33:35,194 INFO: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/logging_exception_prefix]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,194 INFO: Warning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/logging_exception_prefix]: Skipping because of failed dependencies
>2018-06-26 09:33:35,195 INFO: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/logging_user_identity_format]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,195 INFO: Warning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/logging_user_identity_format]: Skipping because of failed dependencies
>2018-06-26 09:33:35,195 INFO: [mNotice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/default_log_levels]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,195 INFO: [1;33mWarning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/default_log_levels]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,195 INFO: [mNotice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/publish_errors]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,195 INFO: [1;33mWarning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/publish_errors]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,196 INFO: [mNotice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/instance_format]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,196 INFO: [1;33mWarning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/instance_format]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,196 INFO: [mNotice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/instance_uuid_format]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,196 INFO: [1;33mWarning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/instance_uuid_format]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,196 INFO: [mNotice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/fatal_deprecations]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,196 INFO: [1;33mWarning: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/fatal_deprecations]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,197 INFO: [mNotice: 
/Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/rpc_response_timeout]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,197 INFO: [1;33mWarning: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/rpc_response_timeout]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,197 INFO: [mNotice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,197 INFO: [1;33mWarning: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,197 INFO: [mNotice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,198 INFO: [1;33mWarning: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,198 INFO: [mNotice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/disable_process_locking]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,198 INFO: [1;33mWarning: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/disable_process_locking]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,198 INFO: [mNotice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,198 INFO: [1;33mWarning: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,198 INFO: [mNotice: 
/Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,199 INFO: [1;33mWarning: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,199 INFO: [mNotice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,199 INFO: [1;33mWarning: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,199 INFO: [mNotice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/topics]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,199 INFO: [1;33mWarning: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/topics]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,200 INFO: [mNotice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/amqp_durable_queues]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,200 INFO: [1;33mWarning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/amqp_durable_queues]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,200 INFO: [mNotice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_rate]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,200 INFO: [1;33mWarning: 
/Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_rate]: Skipping because of failed dependencies
>2018-06-26 09:33:35,200 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,201 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]: Skipping because of failed dependencies
>2018-06-26 09:33:35,201 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/kombu_compression]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,201 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/kombu_compression]: Skipping because of failed dependencies
>2018-06-26 09:33:35,201 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/kombu_failover_strategy]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,201 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/kombu_failover_strategy]: Skipping because of failed dependencies
>2018-06-26 09:33:35,202 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/kombu_missing_consumer_retry_timeout]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,202 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/kombu_missing_consumer_retry_timeout]: Skipping because of failed dependencies
>2018-06-26 09:33:35,202 INFO: Notice: 
/Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/kombu_reconnect_delay]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,202 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/kombu_reconnect_delay]: Skipping because of failed dependencies
>2018-06-26 09:33:35,202 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_interval_max]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,202 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_interval_max]: Skipping because of failed dependencies
>2018-06-26 09:33:35,203 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_login_method]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,203 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_login_method]: Skipping because of failed dependencies
>2018-06-26 09:33:35,203 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_password]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,203 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_password]: Skipping because of failed dependencies
>2018-06-26 09:33:35,203 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_retry_backoff]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,204 INFO: Warning: 
/Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_retry_backoff]: Skipping because of failed dependencies
>2018-06-26 09:33:35,204 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_retry_interval]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,204 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_retry_interval]: Skipping because of failed dependencies
>2018-06-26 09:33:35,204 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_transient_queues_ttl]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,204 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_transient_queues_ttl]: Skipping because of failed dependencies
>2018-06-26 09:33:35,205 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,205 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]: Skipping because of failed dependencies
>2018-06-26 09:33:35,205 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_userid]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,205 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_userid]: Skipping because of failed dependencies
>2018-06-26 09:33:35,205 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_virtual_host]: Dependency 
Package[neutron] has failures: true
>2018-06-26 09:33:35,206 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_virtual_host]: Skipping because of failed dependencies
>2018-06-26 09:33:35,206 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_hosts]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,206 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_hosts]: Skipping because of failed dependencies
>2018-06-26 09:33:35,206 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_port]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,206 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_port]: Skipping because of failed dependencies
>2018-06-26 09:33:35,206 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_qos_prefetch_count]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,207 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_qos_prefetch_count]: Skipping because of failed dependencies
>2018-06-26 09:33:35,207 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_host]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,207 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_host]: Skipping because of failed dependencies
>2018-06-26 09:33:35,207 INFO: Notice: 
/Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_ha_queues]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,207 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_ha_queues]: Skipping because of failed dependencies
>2018-06-26 09:33:35,208 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl_ca_file]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,208 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl_ca_file]: Skipping because of failed dependencies
>2018-06-26 09:33:35,208 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl_cert_file]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,208 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl_cert_file]: Skipping because of failed dependencies
>2018-06-26 09:33:35,208 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl_key_file]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,209 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl_key_file]: Skipping because of failed dependencies
>2018-06-26 09:33:35,209 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl_version]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,209 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl_version]: Skipping because of failed dependencies
>2018-06-26 09:33:35,209 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/addressing_mode]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,209 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/addressing_mode]: Skipping because of failed dependencies
>2018-06-26 09:33:35,209 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/server_request_prefix]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,210 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/server_request_prefix]: Skipping because of failed dependencies
>2018-06-26 09:33:35,210 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/broadcast_prefix]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,210 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/broadcast_prefix]: Skipping because of failed dependencies
>2018-06-26 09:33:35,210 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/group_request_prefix]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,210 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/group_request_prefix]: Skipping because of failed dependencies
>2018-06-26 09:33:35,210 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/rpc_address_prefix]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,211 INFO: Warning: 
/Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/rpc_address_prefix]: Skipping because of failed dependencies
>2018-06-26 09:33:35,211 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/notify_address_prefix]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,211 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/notify_address_prefix]: Skipping because of failed dependencies
>2018-06-26 09:33:35,211 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/multicast_address]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,212 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/multicast_address]: Skipping because of failed dependencies
>2018-06-26 09:33:35,212 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/unicast_address]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,212 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/unicast_address]: Skipping because of failed dependencies
>2018-06-26 09:33:35,212 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/anycast_address]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,212 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/anycast_address]: Skipping because of failed dependencies
>2018-06-26 09:33:35,212 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/default_notification_exchange]: Dependency Package[neutron] has failures: 
true
>2018-06-26 09:33:35,213 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/default_notification_exchange]: Skipping because of failed dependencies
>2018-06-26 09:33:35,213 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/default_rpc_exchange]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,213 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/default_rpc_exchange]: Skipping because of failed dependencies
>2018-06-26 09:33:35,213 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/pre_settled]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,213 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/pre_settled]: Skipping because of failed dependencies
>2018-06-26 09:33:35,214 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/container_name]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,214 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/container_name]: Skipping because of failed dependencies
>2018-06-26 09:33:35,214 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/idle_timeout]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,214 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/idle_timeout]: Skipping because of failed dependencies
>2018-06-26 09:33:35,214 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/trace]: Dependency 
Package[neutron] has failures: true
>2018-06-26 09:33:35,215 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/trace]: Skipping because of failed dependencies
>2018-06-26 09:33:35,215 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/ssl]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,215 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/ssl]: Skipping because of failed dependencies
>2018-06-26 09:33:35,215 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/ssl_ca_file]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,215 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/ssl_ca_file]: Skipping because of failed dependencies
>2018-06-26 09:33:35,215 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/ssl_cert_file]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,216 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/ssl_cert_file]: Skipping because of failed dependencies
>2018-06-26 09:33:35,216 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/ssl_key_file]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,216 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/ssl_key_file]: Skipping because of failed dependencies
>2018-06-26 09:33:35,216 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/ssl_key_password]: Dependency Package[neutron] has failures: 
true
>2018-06-26 09:33:35,216 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/ssl_key_password]: Skipping because of failed dependencies
>2018-06-26 09:33:35,217 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/allow_insecure_clients]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,217 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/allow_insecure_clients]: Skipping because of failed dependencies
>2018-06-26 09:33:35,217 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/sasl_mechanisms]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,217 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/sasl_mechanisms]: Skipping because of failed dependencies
>2018-06-26 09:33:35,217 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/sasl_config_dir]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,217 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/sasl_config_dir]: Skipping because of failed dependencies
>2018-06-26 09:33:35,218 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/sasl_config_name]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,218 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/sasl_config_name]: Skipping because of failed dependencies
>2018-06-26 09:33:35,218 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/sasl_default_realm]: 
Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,218 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/sasl_default_realm]: Skipping because of failed dependencies
>2018-06-26 09:33:35,218 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/username]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,219 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/username]: Skipping because of failed dependencies
>2018-06-26 09:33:35,219 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/password]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,219 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/password]: Skipping because of failed dependencies
>2018-06-26 09:33:35,219 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/default_send_timeout]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,219 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/default_send_timeout]: Skipping because of failed dependencies
>2018-06-26 09:33:35,219 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/default_notify_timeout]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,220 INFO: Warning: /Stage[main]/Neutron/Oslo::Messaging::Amqp[neutron_config]/Neutron_config[oslo_messaging_amqp/default_notify_timeout]: Skipping because of failed dependencies
>2018-06-26 09:33:35,220 INFO: Notice: 
/Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/sqlite_synchronous]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,220 INFO: Warning: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/sqlite_synchronous]: Skipping because of failed dependencies
>2018-06-26 09:33:35,220 INFO: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/backend]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,220 INFO: Warning: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/backend]: Skipping because of failed dependencies
>2018-06-26 09:33:35,221 INFO: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/connection]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,221 INFO: Warning: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/connection]: Skipping because of failed dependencies
>2018-06-26 09:33:35,221 INFO: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/slave_connection]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,221 INFO: Warning: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/slave_connection]: Skipping because of failed dependencies
>2018-06-26 09:33:35,221 INFO: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/mysql_sql_mode]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,222 INFO: Warning: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/mysql_sql_mode]: Skipping because of failed dependencies
>2018-06-26 09:33:35,222 INFO: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/idle_timeout]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,222 INFO: Warning: 
/Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/idle_timeout]: Skipping because of failed dependencies
>2018-06-26 09:33:35,222 INFO: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/min_pool_size]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,222 INFO: Warning: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/min_pool_size]: Skipping because of failed dependencies
>2018-06-26 09:33:35,222 INFO: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/max_pool_size]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,223 INFO: Warning: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/max_pool_size]: Skipping because of failed dependencies
>2018-06-26 09:33:35,223 INFO: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/max_retries]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,223 INFO: Warning: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/max_retries]: Skipping because of failed dependencies
>2018-06-26 09:33:35,223 INFO: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/retry_interval]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,223 INFO: Warning: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/retry_interval]: Skipping because of failed dependencies
>2018-06-26 09:33:35,223 INFO: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/max_overflow]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,224 INFO: Warning: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/max_overflow]: Skipping because of failed dependencies
>2018-06-26 09:33:35,224 INFO: Notice: 
/Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/connection_debug]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,224 INFO: Warning: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/connection_debug]: Skipping because of failed dependencies
>2018-06-26 09:33:35,224 INFO: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/connection_trace]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,224 INFO: Warning: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/connection_trace]: Skipping because of failed dependencies
>2018-06-26 09:33:35,224 INFO: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/pool_timeout]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,225 INFO: Warning: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/pool_timeout]: Skipping because of failed dependencies
>2018-06-26 09:33:35,225 INFO: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/use_db_reconnect]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,225 INFO: Warning: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/use_db_reconnect]: Skipping because of failed dependencies
>2018-06-26 09:33:35,225 INFO: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/db_retry_interval]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,225 INFO: Warning: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/db_retry_interval]: Skipping because of failed dependencies
>2018-06-26 09:33:35,225 INFO: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/db_inc_retry_interval]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,226 INFO: Warning: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/db_inc_retry_interval]: Skipping because of failed dependencies
>2018-06-26 09:33:35,226 INFO: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/db_max_retry_interval]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,226 INFO: Warning: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/db_max_retry_interval]: Skipping because of failed dependencies
>2018-06-26 09:33:35,226 INFO: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/db_max_retries]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,226 INFO: Warning: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/db_max_retries]: Skipping because of failed dependencies
>2018-06-26 09:33:35,227 INFO: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/use_tpool]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,227 INFO: Warning: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/use_tpool]: Skipping because of failed dependencies
>2018-06-26 09:33:35,227 INFO: Notice: /Stage[main]/Neutron::Policy/Oslo::Policy[neutron_config]/Neutron_config[oslo_policy/policy_file]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,227 INFO: Warning: /Stage[main]/Neutron::Policy/Oslo::Policy[neutron_config]/Neutron_config[oslo_policy/policy_file]: Skipping because of failed dependencies
>2018-06-26 09:33:35,227 INFO: Notice: /Stage[main]/Neutron::Policy/Oslo::Policy[neutron_config]/Neutron_config[oslo_policy/policy_default_rule]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,227 INFO: Warning: /Stage[main]/Neutron::Policy/Oslo::Policy[neutron_config]/Neutron_config[oslo_policy/policy_default_rule]: Skipping because 
of failed dependencies
>2018-06-26 09:33:35,228 INFO: Notice: /Stage[main]/Neutron::Policy/Oslo::Policy[neutron_config]/Neutron_config[oslo_policy/policy_dirs]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,228 INFO: Warning: /Stage[main]/Neutron::Policy/Oslo::Policy[neutron_config]/Neutron_config[oslo_policy/policy_dirs]: Skipping because of failed dependencies
>2018-06-26 09:33:35,228 INFO: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_section]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,228 INFO: Warning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_section]: Skipping because of failed dependencies
>2018-06-26 09:33:35,228 INFO: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_uri]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,229 INFO: Warning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_uri]: Skipping because of failed dependencies
>2018-06-26 09:33:35,229 INFO: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_type]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,229 INFO: Warning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_type]: Skipping because of failed dependencies
>2018-06-26 09:33:35,229 INFO: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_version]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,229 INFO: Warning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_version]: Skipping because of failed dependencies
>2018-06-26 09:33:35,230 INFO: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/cache]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,230 INFO: Warning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/cache]: Skipping because of failed dependencies
>2018-06-26 09:33:35,230 INFO: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/cafile]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,230 INFO: Warning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/cafile]: Skipping because of failed dependencies
>2018-06-26 09:33:35,230 INFO: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/certfile]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,231 INFO: Warning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/certfile]: Skipping because of failed dependencies
>2018-06-26 09:33:35,231 INFO: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/check_revocations_for_cached]: Dependency Package[neutron] has failures: true
>2018-06-26 09:33:35,231 INFO: Warning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/check_revocations_for_cached]: Skipping 
because of failed dependencies[0m >2018-06-26 09:33:35,231 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/delay_auth_decision]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,231 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/delay_auth_decision]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,232 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/enforce_token_bind]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,232 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/enforce_token_bind]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,232 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/hash_algorithms]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,232 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/hash_algorithms]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,232 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/http_connect_timeout]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,233 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/http_connect_timeout]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,233 INFO: [mNotice: 
/Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/http_request_max_retries]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,233 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/http_request_max_retries]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,233 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/include_service_catalog]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,233 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/include_service_catalog]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,234 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/keyfile]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,234 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/keyfile]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,234 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcache_pool_conn_get_timeout]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,234 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcache_pool_conn_get_timeout]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,234 INFO: [mNotice: 
/Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcache_pool_dead_retry]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,235 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcache_pool_dead_retry]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,235 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcache_pool_maxsize]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,235 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcache_pool_maxsize]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,235 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcache_pool_socket_timeout]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,235 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcache_pool_socket_timeout]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,236 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcache_pool_unused_timeout]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,236 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcache_pool_unused_timeout]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,236 INFO: [mNotice: 
/Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcache_secret_key]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,236 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcache_secret_key]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,237 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcache_security_strategy]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,237 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcache_security_strategy]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,237 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcache_use_advanced_pool]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,237 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcache_use_advanced_pool]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,238 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcached_servers]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,238 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcached_servers]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,238 INFO: [mNotice: 
/Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/region_name]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,238 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/region_name]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,238 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/token_cache_time]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,239 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/token_cache_time]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,239 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_url]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,239 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_url]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,239 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/username]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,240 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/username]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,240 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/password]: Dependency Package[neutron] has failures: true[0m 
>2018-06-26 09:33:35,240 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/password]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,240 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/user_domain_name]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,240 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/user_domain_name]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,241 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_name]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,241 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_name]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,241 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_domain_name]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,241 INFO: [1;33mWarning: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_domain_name]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,241 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/insecure]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,242 INFO: [1;33mWarning: 
/Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/insecure]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,242 INFO: [mNotice: /Stage[main]/Neutron::Server/Oslo::Middleware[neutron_config]/Neutron_config[oslo_middleware/max_request_body_size]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,242 INFO: [1;33mWarning: /Stage[main]/Neutron::Server/Oslo::Middleware[neutron_config]/Neutron_config[oslo_middleware/max_request_body_size]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,242 INFO: [mNotice: /Stage[main]/Neutron::Server/Oslo::Middleware[neutron_config]/Neutron_config[oslo_middleware/enable_proxy_headers_parsing]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,242 INFO: [1;33mWarning: /Stage[main]/Neutron::Server/Oslo::Middleware[neutron_config]/Neutron_config[oslo_middleware/enable_proxy_headers_parsing]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,243 INFO: [mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,243 INFO: [1;33mWarning: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,243 INFO: [mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,243 INFO: [1;33mWarning: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,243 INFO: [mNotice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,244 INFO: [1;33mWarning: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,244 INFO: [mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,244 INFO: [1;33mWarning: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,244 INFO: [mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,244 INFO: [1;33mWarning: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,245 INFO: [mNotice: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,245 INFO: [1;33mWarning: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,245 INFO: [mNotice: /Stage[main]/Neutron::Deps/Anchor[neutron::db::begin]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:35,245 INFO: [1;33mWarning: /Stage[main]/Neutron::Deps/Anchor[neutron::db::begin]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,245 INFO: [mNotice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[neutron]: Dependency Package[neutron] has 
failures: true[0m >2018-06-26 09:33:35,246 INFO: [1;33mWarning: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[neutron]: Skipping because of failed dependencies[0m >2018-06-26 09:33:35,287 INFO: [mNotice: /Stage[main]/Rsync::Server/Xinetd::Service[rsync]/File[/etc/xinetd.d/rsync]/ensure: defined content as '{md5}b968092caa3c158fc9cd784957424201'[0m >2018-06-26 09:33:35,303 INFO: [mNotice: /Stage[main]/Rsync::Server/Concat[/etc/rsyncd.conf]/File[/etc/rsyncd.conf]/content: content changed '{md5}c63fccb45c0dcbbbe17d0f4bdba920ec' to '{md5}be1b50f9cd237db168197ed6d31b8ec9'[0m >2018-06-26 09:33:35,319 INFO: [mNotice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/debug]/ensure: created[0m >2018-06-26 09:33:35,608 INFO: [mNotice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/log_dir]/ensure: created[0m >2018-06-26 09:33:35,700 INFO: [mNotice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/connection]/ensure: created[0m >2018-06-26 09:33:35,795 INFO: [mNotice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_uri]/ensure: created[0m >2018-06-26 09:33:35,804 INFO: [mNotice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_type]/ensure: created[0m >2018-06-26 09:33:36,193 INFO: [mNotice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_url]/ensure: created[0m >2018-06-26 09:33:36,203 INFO: [mNotice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/username]/ensure: created[0m >2018-06-26 09:33:36,213 INFO: [mNotice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/password]/ensure: created[0m >2018-06-26 09:33:36,225 INFO: [mNotice: 
/Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/user_domain_name]/ensure: created[0m >2018-06-26 09:33:36,235 INFO: [mNotice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_name]/ensure: created[0m >2018-06-26 09:33:36,245 INFO: [mNotice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_domain_name]/ensure: created[0m >2018-06-26 09:33:36,817 INFO: [mNotice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/driver]/ensure: created[0m >2018-06-26 09:33:37,104 INFO: [mNotice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/rpc_response_timeout]/ensure: created[0m >2018-06-26 09:33:37,115 INFO: [mNotice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/transport_url]/ensure: created[0m >2018-06-26 09:33:37,136 INFO: [mNotice: /Stage[main]/Heat/Oslo::Middleware[heat_config]/Heat_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created[0m >2018-06-26 09:33:37,146 INFO: [mNotice: /Stage[main]/Heat::Policy/Oslo::Policy[heat_config]/Heat_config[oslo_policy/policy_file]/ensure: created[0m >2018-06-26 09:33:37,161 INFO: [mNotice: /Stage[main]/Apache::Mod::Headers/Apache::Mod[headers]/File[headers.load]/ensure: defined content as '{md5}96094c96352002c43ada5bdf8650ff38'[0m >2018-06-26 09:33:37,172 INFO: [mNotice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/allowed_origin]/ensure: created[0m >2018-06-26 09:33:37,187 INFO: [mNotice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/expose_headers]/ensure: created[0m >2018-06-26 09:33:37,197 INFO: [mNotice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/max_age]/ensure: created[0m >2018-06-26 09:33:37,212 INFO: [mNotice: 
/Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/allow_headers]/ensure: created[0m >2018-06-26 09:33:37,213 INFO: [mNotice: /Stage[main]/Heat::Deps/Anchor[heat::config::end]: Triggered 'refresh' from 49 events[0m >2018-06-26 09:33:37,240 INFO: [mNotice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]/ensure: created[0m >2018-06-26 09:33:37,262 INFO: [mNotice: /Stage[main]/Nova::Cors/Oslo::Cors[nova_config]/Nova_config[cors/allowed_origin]/ensure: created[0m >2018-06-26 09:33:37,301 INFO: [mNotice: /Stage[main]/Nova::Cors/Oslo::Cors[nova_config]/Nova_config[cors/expose_headers]/ensure: created[0m >2018-06-26 09:33:37,321 INFO: [mNotice: /Stage[main]/Nova::Cors/Oslo::Cors[nova_config]/Nova_config[cors/max_age]/ensure: created[0m >2018-06-26 09:33:37,676 INFO: [mNotice: /Stage[main]/Nova::Cors/Oslo::Cors[nova_config]/Nova_config[cors/allow_methods]/ensure: created[0m >2018-06-26 09:33:37,697 INFO: [mNotice: /Stage[main]/Nova::Cors/Oslo::Cors[nova_config]/Nova_config[cors/allow_headers]/ensure: created[0m >2018-06-26 09:33:37,698 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::config::end]: Triggered 'refresh' from 104 events[0m >2018-06-26 09:33:37,723 INFO: [mNotice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]/ensure: created[0m >2018-06-26 09:33:37,744 INFO: [mNotice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Mysql_database[nova_cell0]/ensure: created[0m >2018-06-26 09:33:37,764 INFO: [mNotice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]/ensure: created[0m >2018-06-26 09:33:37,783 INFO: [mNotice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]/ensure: created[0m >2018-06-26 09:33:37,802 INFO: [mNotice: /Stage[main]/Ironic::Logging/Oslo::Log[ironic_config]/Ironic_config[DEFAULT/debug]/ensure: created[0m >2018-06-26 09:33:37,843 INFO: [mNotice: 
/Stage[main]/Ironic::Logging/Oslo::Log[ironic_config]/Ironic_config[DEFAULT/log_dir]/ensure: created[0m >2018-06-26 09:33:38,333 INFO: [mNotice: /Stage[main]/Ironic::Db/Oslo::Db[ironic_config]/Ironic_config[database/connection]/ensure: created[0m >2018-06-26 09:33:38,479 INFO: [mNotice: /Stage[main]/Ironic/Oslo::Messaging::Default[ironic_config]/Ironic_config[DEFAULT/rpc_response_timeout]/ensure: created[0m >2018-06-26 09:33:38,495 INFO: [mNotice: /Stage[main]/Ironic/Oslo::Messaging::Default[ironic_config]/Ironic_config[DEFAULT/transport_url]/ensure: created[0m >2018-06-26 09:33:39,558 INFO: [mNotice: /Stage[main]/Ironic::Policy/Oslo::Policy[ironic_config]/Ironic_config[oslo_policy/policy_file]/ensure: created[0m >2018-06-26 09:33:39,597 INFO: [mNotice: /Stage[main]/Ironic::Api::Authtoken/Keystone::Resource::Authtoken[ironic_config]/Ironic_config[keystone_authtoken/auth_uri]/ensure: created[0m >2018-06-26 09:33:39,929 INFO: [mNotice: /Stage[main]/Ironic::Api::Authtoken/Keystone::Resource::Authtoken[ironic_config]/Ironic_config[keystone_authtoken/auth_type]/ensure: created[0m >2018-06-26 09:33:40,122 INFO: [mNotice: /Stage[main]/Ironic::Api::Authtoken/Keystone::Resource::Authtoken[ironic_config]/Ironic_config[keystone_authtoken/auth_url]/ensure: created[0m >2018-06-26 09:33:40,138 INFO: [mNotice: /Stage[main]/Ironic::Api::Authtoken/Keystone::Resource::Authtoken[ironic_config]/Ironic_config[keystone_authtoken/username]/ensure: created[0m >2018-06-26 09:33:40,153 INFO: [mNotice: /Stage[main]/Ironic::Api::Authtoken/Keystone::Resource::Authtoken[ironic_config]/Ironic_config[keystone_authtoken/password]/ensure: created[0m >2018-06-26 09:33:40,480 INFO: [mNotice: /Stage[main]/Ironic::Api::Authtoken/Keystone::Resource::Authtoken[ironic_config]/Ironic_config[keystone_authtoken/user_domain_name]/ensure: created[0m >2018-06-26 09:33:40,494 INFO: [mNotice: 
/Stage[main]/Ironic::Api::Authtoken/Keystone::Resource::Authtoken[ironic_config]/Ironic_config[keystone_authtoken/project_name]/ensure: created[0m >2018-06-26 09:33:40,509 INFO: [mNotice: /Stage[main]/Ironic::Api::Authtoken/Keystone::Resource::Authtoken[ironic_config]/Ironic_config[keystone_authtoken/project_domain_name]/ensure: created[0m >2018-06-26 09:33:40,535 INFO: [mNotice: /Stage[main]/Ironic::Wsgi::Apache/Openstacklib::Wsgi::Apache[ironic_wsgi]/File[/var/www/cgi-bin/ironic]/ensure: created[0m >2018-06-26 09:33:40,539 INFO: [mNotice: /Stage[main]/Ironic::Wsgi::Apache/Openstacklib::Wsgi::Apache[ironic_wsgi]/File[ironic_wsgi]/ensure: defined content as '{md5}1d56c8d9da9a51b60ed54ef55cb43c99'[0m >2018-06-26 09:33:40,554 INFO: [mNotice: /Stage[main]/Ironic::Drivers::Interfaces/Ironic::Drivers::Hardware_interface[boot]/Ironic_config[DEFAULT/enabled_boot_interfaces]/ensure: created[0m >2018-06-26 09:33:40,577 INFO: [mNotice: /Stage[main]/Ironic::Drivers::Interfaces/Ironic::Drivers::Hardware_interface[console]/Ironic_config[DEFAULT/enabled_console_interfaces]/ensure: created[0m >2018-06-26 09:33:40,599 INFO: [mNotice: /Stage[main]/Ironic::Drivers::Interfaces/Ironic::Drivers::Hardware_interface[deploy]/Ironic_config[DEFAULT/enabled_deploy_interfaces]/ensure: created[0m >2018-06-26 09:33:40,622 INFO: [mNotice: /Stage[main]/Ironic::Drivers::Interfaces/Ironic::Drivers::Hardware_interface[inspect]/Ironic_config[DEFAULT/enabled_inspect_interfaces]/ensure: created[0m >2018-06-26 09:33:40,636 INFO: [mNotice: /Stage[main]/Ironic::Drivers::Interfaces/Ironic::Drivers::Hardware_interface[inspect]/Ironic_config[DEFAULT/default_inspect_interface]/ensure: created[0m >2018-06-26 09:33:40,652 INFO: [mNotice: /Stage[main]/Ironic::Drivers::Interfaces/Ironic::Drivers::Hardware_interface[management]/Ironic_config[DEFAULT/enabled_management_interfaces]/ensure: created[0m >2018-06-26 09:33:40,690 INFO: [mNotice: 
/Stage[main]/Ironic::Drivers::Interfaces/Ironic::Drivers::Hardware_interface[power]/Ironic_config[DEFAULT/enabled_power_interfaces]/ensure: created[0m >2018-06-26 09:33:40,714 INFO: [mNotice: /Stage[main]/Ironic::Drivers::Interfaces/Ironic::Drivers::Hardware_interface[raid]/Ironic_config[DEFAULT/enabled_raid_interfaces]/ensure: created[0m >2018-06-26 09:33:41,090 INFO: [mNotice: /Stage[main]/Ironic::Drivers::Interfaces/Ironic::Drivers::Hardware_interface[vendor]/Ironic_config[DEFAULT/enabled_vendor_interfaces]/ensure: created[0m >2018-06-26 09:33:41,105 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Logging/Oslo::Log[ironic_inspector_config]/Ironic_inspector_config[DEFAULT/debug]/ensure: created[0m >2018-06-26 09:33:41,119 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Logging/Oslo::Log[ironic_inspector_config]/Ironic_inspector_config[DEFAULT/log_dir]/ensure: created[0m >2018-06-26 09:33:41,174 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Db/Oslo::Db[ironic_inspector_config]/Ironic_inspector_config[database/connection]/ensure: created[0m >2018-06-26 09:33:41,231 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Authtoken/Keystone::Resource::Authtoken[ironic_inspector_config]/Ironic_inspector_config[keystone_authtoken/auth_uri]/ensure: created[0m >2018-06-26 09:33:41,236 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Authtoken/Keystone::Resource::Authtoken[ironic_inspector_config]/Ironic_inspector_config[keystone_authtoken/auth_type]/ensure: created[0m >2018-06-26 09:33:41,622 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Authtoken/Keystone::Resource::Authtoken[ironic_inspector_config]/Ironic_inspector_config[keystone_authtoken/auth_url]/ensure: created[0m >2018-06-26 09:33:41,627 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Authtoken/Keystone::Resource::Authtoken[ironic_inspector_config]/Ironic_inspector_config[keystone_authtoken/username]/ensure: created[0m >2018-06-26 09:33:41,633 INFO: [mNotice: 
/Stage[main]/Ironic::Inspector::Authtoken/Keystone::Resource::Authtoken[ironic_inspector_config]/Ironic_inspector_config[keystone_authtoken/password]/ensure: created[0m >2018-06-26 09:33:41,638 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Authtoken/Keystone::Resource::Authtoken[ironic_inspector_config]/Ironic_inspector_config[keystone_authtoken/user_domain_name]/ensure: created[0m >2018-06-26 09:33:41,643 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Authtoken/Keystone::Resource::Authtoken[ironic_inspector_config]/Ironic_inspector_config[keystone_authtoken/project_name]/ensure: created[0m >2018-06-26 09:33:41,648 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Authtoken/Keystone::Resource::Authtoken[ironic_inspector_config]/Ironic_inspector_config[keystone_authtoken/project_domain_name]/ensure: created[0m >2018-06-26 09:33:41,656 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Cors/Oslo::Cors[ironic_inspector_config]/Ironic_inspector_config[cors/allowed_origin]/ensure: created[0m >2018-06-26 09:33:41,664 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Cors/Oslo::Cors[ironic_inspector_config]/Ironic_inspector_config[cors/expose_headers]/ensure: created[0m >2018-06-26 09:33:41,669 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Cors/Oslo::Cors[ironic_inspector_config]/Ironic_inspector_config[cors/max_age]/ensure: created[0m >2018-06-26 09:33:41,674 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Cors/Oslo::Cors[ironic_inspector_config]/Ironic_inspector_config[cors/allow_methods]/ensure: created[0m >2018-06-26 09:33:41,679 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Cors/Oslo::Cors[ironic_inspector_config]/Ironic_inspector_config[cors/allow_headers]/ensure: created[0m >2018-06-26 09:33:41,680 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic-inspector::config::end]: Triggered 'refresh' from 43 events[0m >2018-06-26 09:33:41,685 INFO: [mNotice: /Stage[main]/Ironic::Pxe/Xinetd::Service[tftp]/File[/etc/xinetd.d/tftp]/content: content changed 
'{md5}678efd3887a91cd4e0955aa6c8b12257' to '{md5}a793ddab9e0737a36b489de2c1d3f084'[0m >2018-06-26 09:33:41,910 INFO: [mNotice: /Stage[main]/Xinetd/Service[xinetd]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:33:41,928 INFO: [mNotice: /Stage[main]/Ironic::Pxe/Ironic::Pxe::Tftpboot_file[pxelinux.0]/File[/tftpboot/pxelinux.0]/ensure: defined content as '{md5}3b078292686534c3b81baf513c8be233'[0m >2018-06-26 09:33:41,938 INFO: [mNotice: /Stage[main]/Ironic::Pxe/Ironic::Pxe::Tftpboot_file[chain.c32]/File[/tftpboot/chain.c32]/ensure: defined content as '{md5}af5c5fd5623d1bc2221f59eab51c9b41'[0m >2018-06-26 09:33:41,955 INFO: [mNotice: /Stage[main]/Ironic::Cors/Oslo::Cors[ironic_config]/Ironic_config[cors/allowed_origin]/ensure: created[0m >2018-06-26 09:33:41,978 INFO: [mNotice: /Stage[main]/Ironic::Cors/Oslo::Cors[ironic_config]/Ironic_config[cors/expose_headers]/ensure: created[0m >2018-06-26 09:33:41,993 INFO: [mNotice: /Stage[main]/Ironic::Cors/Oslo::Cors[ironic_config]/Ironic_config[cors/max_age]/ensure: created[0m >2018-06-26 09:33:42,008 INFO: [mNotice: /Stage[main]/Ironic::Cors/Oslo::Cors[ironic_config]/Ironic_config[cors/allow_methods]/ensure: created[0m >2018-06-26 09:33:42,023 INFO: [mNotice: /Stage[main]/Ironic::Cors/Oslo::Cors[ironic_config]/Ironic_config[cors/allow_headers]/ensure: created[0m >2018-06-26 09:33:42,024 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic::config::end]: Triggered 'refresh' from 95 events[0m >2018-06-26 09:33:42,051 INFO: [mNotice: /Stage[main]/Ironic::Db::Mysql/Openstacklib::Db::Mysql[ironic]/Mysql_database[ironic]/ensure: created[0m >2018-06-26 09:33:42,056 INFO: [mNotice: /Stage[main]/Mistral::Db/Oslo::Db[mistral_config]/Mistral_config[database/connection]/ensure: created[0m >2018-06-26 09:33:42,067 INFO: [mNotice: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:42,067 INFO: [1;33mWarning: 
/Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]: Skipping because of failed dependencies[0m >2018-06-26 09:33:42,067 INFO: [mNotice: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:42,067 INFO: [1;33mWarning: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]: Skipping because of failed dependencies[0m >2018-06-26 09:33:42,068 INFO: [mNotice: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:42,068 INFO: [1;33mWarning: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]: Skipping because of failed dependencies[0m >2018-06-26 09:33:42,069 INFO: [mNotice: /Stage[main]/Mistral::Logging/Oslo::Log[mistral_config]/Mistral_config[DEFAULT/debug]/ensure: created[0m >2018-06-26 09:33:42,072 INFO: [mNotice: /Stage[main]/Mistral::Logging/Oslo::Log[mistral_config]/Mistral_config[DEFAULT/log_dir]/ensure: created[0m >2018-06-26 09:33:42,084 INFO: [mNotice: /Stage[main]/Mistral::Keystone::Authtoken/Keystone::Resource::Authtoken[mistral_config]/Mistral_config[keystone_authtoken/auth_uri]/ensure: created[0m >2018-06-26 09:33:42,085 INFO: [mNotice: /Stage[main]/Mistral::Keystone::Authtoken/Keystone::Resource::Authtoken[mistral_config]/Mistral_config[keystone_authtoken/auth_type]/ensure: created[0m >2018-06-26 09:33:42,099 INFO: [mNotice: /Stage[main]/Mistral::Keystone::Authtoken/Keystone::Resource::Authtoken[mistral_config]/Mistral_config[keystone_authtoken/auth_url]/ensure: created[0m >2018-06-26 09:33:42,100 INFO: [mNotice: /Stage[main]/Mistral::Keystone::Authtoken/Keystone::Resource::Authtoken[mistral_config]/Mistral_config[keystone_authtoken/username]/ensure: created[0m >2018-06-26 09:33:42,101 INFO: [mNotice: /Stage[main]/Mistral::Keystone::Authtoken/Keystone::Resource::Authtoken[mistral_config]/Mistral_config[keystone_authtoken/password]/ensure: created[0m >2018-06-26 09:33:42,462 INFO: [mNotice: 
/Stage[main]/Mistral::Keystone::Authtoken/Keystone::Resource::Authtoken[mistral_config]/Mistral_config[keystone_authtoken/user_domain_name]/ensure: created[0m >2018-06-26 09:33:42,464 INFO: [mNotice: /Stage[main]/Mistral::Keystone::Authtoken/Keystone::Resource::Authtoken[mistral_config]/Mistral_config[keystone_authtoken/project_name]/ensure: created[0m >2018-06-26 09:33:42,465 INFO: [mNotice: /Stage[main]/Mistral::Keystone::Authtoken/Keystone::Resource::Authtoken[mistral_config]/Mistral_config[keystone_authtoken/project_domain_name]/ensure: created[0m >2018-06-26 09:33:42,467 INFO: [mNotice: /Stage[main]/Mistral/Oslo::Messaging::Default[mistral_config]/Mistral_config[DEFAULT/rpc_response_timeout]/ensure: created[0m >2018-06-26 09:33:42,475 INFO: [mNotice: /Stage[main]/Mistral/Oslo::Messaging::Rabbit[mistral_config]/Mistral_config[oslo_messaging_rabbit/rabbit_password]/ensure: created[0m >2018-06-26 09:33:42,478 INFO: [mNotice: /Stage[main]/Mistral/Oslo::Messaging::Rabbit[mistral_config]/Mistral_config[oslo_messaging_rabbit/rabbit_userid]/ensure: created[0m >2018-06-26 09:33:42,481 INFO: [mNotice: /Stage[main]/Mistral/Oslo::Messaging::Rabbit[mistral_config]/Mistral_config[oslo_messaging_rabbit/rabbit_host]/ensure: created[0m >2018-06-26 09:33:42,485 INFO: [mNotice: /Stage[main]/Mistral::Policy/Oslo::Policy[mistral_config]/Mistral_config[oslo_policy/policy_file]/ensure: created[0m >2018-06-26 09:33:42,489 INFO: [mNotice: /Stage[main]/Mistral::Cors/Oslo::Cors[mistral_config]/Mistral_config[cors/allowed_origin]/ensure: created[0m >2018-06-26 09:33:42,490 INFO: [mNotice: /Stage[main]/Mistral::Cors/Oslo::Cors[mistral_config]/Mistral_config[cors/expose_headers]/ensure: created[0m >2018-06-26 09:33:42,492 INFO: [mNotice: /Stage[main]/Mistral::Cors/Oslo::Cors[mistral_config]/Mistral_config[cors/allow_headers]/ensure: created[0m >2018-06-26 09:33:42,493 INFO: [mNotice: /Stage[main]/Mistral::Deps/Anchor[mistral::config::end]: Triggered 'refresh' from 25 events[0m >2018-06-26 
09:33:42,517 INFO: [mNotice: /Stage[main]/Mistral::Db::Mysql/Openstacklib::Db::Mysql[mistral]/Mysql_database[mistral]/ensure: created[0m >2018-06-26 09:33:42,522 INFO: [mNotice: /Stage[main]/Apache::Mod::Proxy/Apache::Mod[proxy]/File[proxy.load]/ensure: defined content as '{md5}fe26a0a70f572eb256a3c6c183a62223'[0m >2018-06-26 09:33:42,526 INFO: [mNotice: /Stage[main]/Apache::Mod::Proxy_http/Apache::Mod[proxy_http]/File[proxy_http.load]/ensure: defined content as '{md5}0329b852b123a914fca8b072de61f913'[0m >2018-06-26 09:33:42,530 INFO: [mNotice: /Stage[main]/Apache::Mod::Proxy_wstunnel/Apache::Mod[proxy_wstunnel]/File[proxy_wstunnel.load]/ensure: defined content as '{md5}8036815f495618f4dde9d68796622e1c'[0m >2018-06-26 09:33:43,008 INFO: [mNotice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed[0m >2018-06-26 09:33:43,009 INFO: [mNotice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed[0m >2018-06-26 09:33:43,011 INFO: [mNotice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-lua.conf]/ensure: removed[0m >2018-06-26 09:33:43,012 INFO: [mNotice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed[0m >2018-06-26 09:33:43,014 INFO: [mNotice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed[0m >2018-06-26 09:33:43,015 INFO: [mNotice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed[0m >2018-06-26 09:33:43,017 INFO: [mNotice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed[0m >2018-06-26 09:33:43,018 INFO: [mNotice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-session.conf]/ensure: removed[0m >2018-06-26 09:33:43,020 INFO: [mNotice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_gssapi.conf]/ensure: removed[0m >2018-06-26 09:33:43,021 INFO: [mNotice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-nss.conf]/ensure: removed[0m >2018-06-26 09:33:43,042 
INFO: [mNotice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi.conf]/ensure: removed[0m >2018-06-26 09:33:43,044 INFO: [mNotice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/55-lookup_identity.conf]/ensure: removed[0m >2018-06-26 09:33:43,051 INFO: [mNotice: /Stage[main]/Tripleo::Ui/File[/var/www/openstack-tripleo-ui/dist/tripleo_ui_config.js]/ensure: defined content as '{md5}ea42c84d592be2f9b1f710e997f378eb'[0m >2018-06-26 09:33:43,077 INFO: [mNotice: /Stage[main]/Zaqar::Logging/Oslo::Log[zaqar_config]/Zaqar_config[DEFAULT/log_dir]/ensure: created[0m >2018-06-26 09:33:43,555 INFO: [mNotice: /Stage[main]/Zaqar::Keystone::Authtoken/Keystone::Resource::Authtoken[zaqar_config]/Zaqar_config[keystone_authtoken/auth_uri]/ensure: created[0m >2018-06-26 09:33:43,563 INFO: [mNotice: /Stage[main]/Zaqar::Keystone::Authtoken/Keystone::Resource::Authtoken[zaqar_config]/Zaqar_config[keystone_authtoken/auth_type]/ensure: created[0m >2018-06-26 09:33:43,665 INFO: [mNotice: /Stage[main]/Zaqar::Keystone::Authtoken/Keystone::Resource::Authtoken[zaqar_config]/Zaqar_config[keystone_authtoken/auth_url]/ensure: created[0m >2018-06-26 09:33:43,673 INFO: [mNotice: /Stage[main]/Zaqar::Keystone::Authtoken/Keystone::Resource::Authtoken[zaqar_config]/Zaqar_config[keystone_authtoken/username]/ensure: created[0m >2018-06-26 09:33:43,681 INFO: [mNotice: /Stage[main]/Zaqar::Keystone::Authtoken/Keystone::Resource::Authtoken[zaqar_config]/Zaqar_config[keystone_authtoken/password]/ensure: created[0m >2018-06-26 09:33:43,689 INFO: [mNotice: /Stage[main]/Zaqar::Keystone::Authtoken/Keystone::Resource::Authtoken[zaqar_config]/Zaqar_config[keystone_authtoken/user_domain_name]/ensure: created[0m >2018-06-26 09:33:43,697 INFO: [mNotice: /Stage[main]/Zaqar::Keystone::Authtoken/Keystone::Resource::Authtoken[zaqar_config]/Zaqar_config[keystone_authtoken/project_name]/ensure: created[0m >2018-06-26 09:33:43,705 INFO: [mNotice: 
/Stage[main]/Zaqar::Keystone::Authtoken/Keystone::Resource::Authtoken[zaqar_config]/Zaqar_config[keystone_authtoken/project_domain_name]/ensure: created[0m >2018-06-26 09:33:43,717 INFO: [mNotice: /Stage[main]/Zaqar::Policy/Oslo::Policy[zaqar_config]/Zaqar_config[oslo_policy/policy_file]/ensure: created[0m >2018-06-26 09:33:43,726 INFO: [mNotice: /Stage[main]/Zaqar::Deps/Anchor[zaqar::config::end]: Triggered 'refresh' from 25 events[0m >2018-06-26 09:33:43,753 INFO: [mNotice: /Stage[main]/Zaqar::Db::Mysql/Openstacklib::Db::Mysql[zaqar]/Mysql_database[zaqar]/ensure: created[0m >2018-06-26 09:33:43,759 INFO: [mNotice: /Stage[main]/Zaqar::Wsgi::Apache/Openstacklib::Wsgi::Apache[zaqar_wsgi]/File[/var/www/cgi-bin/zaqar]/ensure: created[0m >2018-06-26 09:33:44,114 INFO: [mNotice: /Stage[main]/Zaqar::Wsgi::Apache/Openstacklib::Wsgi::Apache[zaqar_wsgi]/File[zaqar_wsgi]/ensure: defined content as '{md5}3a0f81ec944ad0c68f7db4b58b1f72d6'[0m >2018-06-26 09:33:44,118 INFO: [mNotice: /Stage[main]/Main/Zaqar::Server_instance[1]/File[/etc/zaqar/1.conf]/ensure: defined content as '{md5}7750571c15d53265e703cc60ea7938af'[0m >2018-06-26 09:33:44,131 INFO: [mNotice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}40d961cd3154f0439fcac1a50bd77b96' to '{md5}5822f16d9a2f8e603f9a16a083e4fcbe'[0m >2018-06-26 09:33:44,235 INFO: [mNotice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events[0m >2018-06-26 09:33:44,305 INFO: [mNotice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_user[glance@%]/ensure: created[0m >2018-06-26 09:33:44,390 INFO: [mNotice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_grant[glance@%/glance.*]/ensure: created[0m >2018-06-26 09:33:44,473 INFO: [mNotice: 
/Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_192.0.3.1]/Mysql_user[glance@192.0.3.1]/ensure: created[0m >2018-06-26 09:33:44,495 INFO: [mNotice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_192.0.3.1]/Mysql_grant[glance@192.0.3.1/glance.*]/ensure: created[0m >2018-06-26 09:33:44,517 INFO: [mNotice: /Stage[main]/Glance::Deps/Anchor[glance::db::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:33:44,517 INFO: [mNotice: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::begin]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:44,518 INFO: [1;33mWarning: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::begin]: Skipping because of failed dependencies[0m >2018-06-26 09:33:44,518 INFO: [mNotice: /Stage[main]/Glance::Db::Sync/Exec[glance-manage db_sync]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:44,518 INFO: [1;33mWarning: /Stage[main]/Glance::Db::Sync/Exec[glance-manage db_sync]: Skipping because of failed dependencies[0m >2018-06-26 09:33:44,518 INFO: [mNotice: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::end]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:44,519 INFO: [1;33mWarning: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::end]: Skipping because of failed dependencies[0m >2018-06-26 09:33:44,519 INFO: [mNotice: /Stage[main]/Glance::Db::Metadefs/Exec[glance-manage db_load_metadefs]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:44,519 INFO: [1;33mWarning: /Stage[main]/Glance::Db::Metadefs/Exec[glance-manage db_load_metadefs]: Skipping because of failed dependencies[0m >2018-06-26 09:33:44,583 INFO: [mNotice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_user[nova@%]/ensure: created[0m >2018-06-26 09:33:44,605 INFO: [mNotice: 
/Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_grant[nova@%/nova.*]/ensure: created[0m >2018-06-26 09:33:44,690 INFO: [mNotice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_192.0.3.1]/Mysql_user[nova@192.0.3.1]/ensure: created[0m >2018-06-26 09:33:44,715 INFO: [mNotice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_192.0.3.1]/Mysql_grant[nova@192.0.3.1/nova.*]/ensure: created[0m >2018-06-26 09:33:44,760 INFO: [mNotice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_%]/Mysql_grant[nova@%/nova_cell0.*]/ensure: created[0m >2018-06-26 09:33:44,803 INFO: [mNotice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_192.0.3.1]/Mysql_grant[nova@192.0.3.1/nova_cell0.*]/ensure: created[0m >2018-06-26 09:33:44,888 INFO: [mNotice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_user[nova_api@%]/ensure: created[0m >2018-06-26 09:33:44,911 INFO: [mNotice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_grant[nova_api@%/nova_api.*]/ensure: created[0m >2018-06-26 09:33:45,002 INFO: [mNotice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_192.0.3.1]/Mysql_user[nova_api@192.0.3.1]/ensure: created[0m >2018-06-26 09:33:45,025 INFO: [mNotice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_192.0.3.1]/Mysql_grant[nova_api@192.0.3.1/nova_api.*]/ensure: created[0m >2018-06-26 09:33:45,111 INFO: [mNotice: 
/Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_user[nova_placement@%]/ensure: created[0m >2018-06-26 09:33:45,133 INFO: [mNotice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_grant[nova_placement@%/nova_placement.*]/ensure: created[0m >2018-06-26 09:33:45,218 INFO: [mNotice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_192.0.3.1]/Mysql_user[nova_placement@192.0.3.1]/ensure: created[0m >2018-06-26 09:33:45,241 INFO: [mNotice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_192.0.3.1]/Mysql_grant[nova_placement@192.0.3.1/nova_placement.*]/ensure: created[0m >2018-06-26 09:33:45,264 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::db::end]: Triggered 'refresh' from 3 events[0m >2018-06-26 09:33:45,265 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::begin]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:33:53,512 INFO: [mNotice: /Stage[main]/Nova::Db::Sync_api/Exec[nova-db-sync-api]/returns: executed successfully[0m >2018-06-26 09:33:56,335 INFO: [mNotice: /Stage[main]/Nova::Db::Sync_api/Exec[nova-db-sync-api]: Triggered 'refresh' from 4 events[0m >2018-06-26 09:33:56,336 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::end]: Triggered 'refresh' from 2 events[0m >2018-06-26 09:33:56,337 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:33:56,338 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:33:59,096 INFO: [mNotice: /Stage[main]/Nova::Cell_v2::Map_cell0/Exec[nova-cell_v2-map_cell0]: Triggered 'refresh' from 1 
events[0m >2018-06-26 09:33:59,097 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:33:59,097 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:59,097 INFO: [1;33mWarning: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]: Skipping because of failed dependencies[0m >2018-06-26 09:33:59,098 INFO: [mNotice: /Stage[main]/Nova::Db::Sync/Exec[nova-db-sync]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:59,098 INFO: [1;33mWarning: /Stage[main]/Nova::Db::Sync/Exec[nova-db-sync]: Skipping because of failed dependencies[0m >2018-06-26 09:33:59,098 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::end]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:59,098 INFO: [1;33mWarning: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::end]: Skipping because of failed dependencies[0m >2018-06-26 09:33:59,098 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::service::begin]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:33:59,099 INFO: [1;33mWarning: /Stage[main]/Nova::Deps/Anchor[nova::service::begin]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,091 INFO: [mNotice: /Stage[main]/Nova::Cell_v2::Simple_setup/Nova_cell_v2[default]/ensure: created[0m >2018-06-26 09:34:05,092 INFO: [mNotice: /Stage[main]/Nova::Cron::Archive_deleted_rows/Cron[nova-manage db archive_deleted_rows]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,092 INFO: [1;33mWarning: /Stage[main]/Nova::Cron::Archive_deleted_rows/Cron[nova-manage db archive_deleted_rows]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,093 INFO: [mNotice: /Stage[main]/Nova::Conductor/Nova::Generic_service[conductor]/Service[nova-conductor]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,093 INFO: [1;33mWarning: 
/Stage[main]/Nova::Conductor/Nova::Generic_service[conductor]/Service[nova-conductor]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,093 INFO: [mNotice: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Service[nova-scheduler]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,094 INFO: [1;33mWarning: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Service[nova-scheduler]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,094 INFO: [mNotice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[neutron_%]/Mysql_user[neutron@%]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,094 INFO: [1;33mWarning: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[neutron_%]/Mysql_user[neutron@%]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,094 INFO: [mNotice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[neutron_%]/Mysql_grant[neutron@%/neutron.*]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,095 INFO: [1;33mWarning: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[neutron_%]/Mysql_grant[neutron@%/neutron.*]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,095 INFO: [mNotice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[neutron_192.0.3.1]/Mysql_user[neutron@192.0.3.1]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,095 INFO: [1;33mWarning: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[neutron_192.0.3.1]/Mysql_user[neutron@192.0.3.1]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,096 INFO: [mNotice: 
/Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[neutron_192.0.3.1]/Mysql_grant[neutron@192.0.3.1/neutron.*]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,096 INFO: [1;33mWarning: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[neutron_192.0.3.1]/Mysql_grant[neutron@192.0.3.1/neutron.*]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,096 INFO: [mNotice: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,096 INFO: [1;33mWarning: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,096 INFO: [mNotice: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::begin]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,097 INFO: [1;33mWarning: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::begin]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,097 INFO: [mNotice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,097 INFO: [1;33mWarning: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,097 INFO: [mNotice: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,098 INFO: [1;33mWarning: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,098 INFO: [mNotice: /Stage[main]/Neutron::Deps/Anchor[neutron::service::begin]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,098 INFO: [1;33mWarning: /Stage[main]/Neutron::Deps/Anchor[neutron::service::begin]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,098 INFO: [mNotice: 
/Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,098 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,099 INFO: [mNotice: /Stage[main]/Neutron::Agents::L3/Service[neutron-l3]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,099 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::L3/Service[neutron-l3]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,099 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-service]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,100 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-service]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,421 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-destroy-patch-ports-service]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,421 INFO: [1;33mWarning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-destroy-patch-ports-service]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,491 INFO: [mNotice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_user[heat@%]/ensure: created[0m >2018-06-26 09:34:05,514 INFO: [mNotice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_grant[heat@%/heat.*]/ensure: created[0m >2018-06-26 09:34:05,602 INFO: [mNotice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_192.0.3.1]/Mysql_user[heat@192.0.3.1]/ensure: created[0m >2018-06-26 09:34:05,625 INFO: [mNotice: 
/Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_192.0.3.1]/Mysql_grant[heat@192.0.3.1/heat.*]/ensure: created[0m >2018-06-26 09:34:05,649 INFO: [mNotice: /Stage[main]/Heat::Deps/Anchor[heat::db::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:34:05,649 INFO: [mNotice: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::begin]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,650 INFO: [1;33mWarning: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::begin]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,650 INFO: [mNotice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,650 INFO: [1;33mWarning: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,650 INFO: [mNotice: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::end]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,651 INFO: [1;33mWarning: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::end]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,651 INFO: [mNotice: /Stage[main]/Heat::Deps/Anchor[heat::service::begin]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,651 INFO: [1;33mWarning: /Stage[main]/Heat::Deps/Anchor[heat::service::begin]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,652 INFO: [mNotice: /Stage[main]/Heat::Api/Service[heat-api]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,652 INFO: [1;33mWarning: /Stage[main]/Heat::Api/Service[heat-api]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,652 INFO: [mNotice: /Stage[main]/Heat::Api_cfn/Service[heat-api-cfn]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,652 INFO: [1;33mWarning: /Stage[main]/Heat::Api_cfn/Service[heat-api-cfn]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,653 INFO: 
[mNotice: /Stage[main]/Heat::Engine/Service[heat-engine]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,653 INFO: [1;33mWarning: /Stage[main]/Heat::Engine/Service[heat-engine]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,719 INFO: [mNotice: /Stage[main]/Ironic::Db::Mysql/Openstacklib::Db::Mysql[ironic]/Openstacklib::Db::Mysql::Host_access[ironic_%]/Mysql_user[ironic@%]/ensure: created[0m >2018-06-26 09:34:05,742 INFO: [mNotice: /Stage[main]/Ironic::Db::Mysql/Openstacklib::Db::Mysql[ironic]/Openstacklib::Db::Mysql::Host_access[ironic_%]/Mysql_grant[ironic@%/ironic.*]/ensure: created[0m >2018-06-26 09:34:05,836 INFO: [mNotice: /Stage[main]/Ironic::Db::Mysql/Openstacklib::Db::Mysql[ironic]/Openstacklib::Db::Mysql::Host_access[ironic_192.0.3.1]/Mysql_user[ironic@192.0.3.1]/ensure: created[0m >2018-06-26 09:34:05,861 INFO: [mNotice: /Stage[main]/Ironic::Db::Mysql/Openstacklib::Db::Mysql[ironic]/Openstacklib::Db::Mysql::Host_access[ironic_192.0.3.1]/Mysql_grant[ironic@192.0.3.1/ironic.*]/ensure: created[0m >2018-06-26 09:34:05,885 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic::db::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:34:05,886 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic::dbsync::begin]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,886 INFO: [1;33mWarning: /Stage[main]/Ironic::Deps/Anchor[ironic::dbsync::begin]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,886 INFO: [mNotice: /Stage[main]/Ironic::Db::Sync/Exec[ironic-dbsync]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,887 INFO: [1;33mWarning: /Stage[main]/Ironic::Db::Sync/Exec[ironic-dbsync]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,887 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic::dbsync::end]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,887 INFO: [1;33mWarning: 
/Stage[main]/Ironic::Deps/Anchor[ironic::dbsync::end]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,887 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic::db_online_data_migrations::begin]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,888 INFO: [1;33mWarning: /Stage[main]/Ironic::Deps/Anchor[ironic::db_online_data_migrations::begin]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,888 INFO: [mNotice: /Stage[main]/Ironic::Db::Online_data_migrations/Exec[ironic-db-online-data-migrations]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,888 INFO: [1;33mWarning: /Stage[main]/Ironic::Db::Online_data_migrations/Exec[ironic-db-online-data-migrations]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,888 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic::db_online_data_migrations::end]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,889 INFO: [1;33mWarning: /Stage[main]/Ironic::Deps/Anchor[ironic::db_online_data_migrations::end]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,889 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic::service::begin]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,889 INFO: [1;33mWarning: /Stage[main]/Ironic::Deps/Anchor[ironic::service::begin]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,890 INFO: [mNotice: /Stage[main]/Ironic::Api/Service[ironic-api]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,890 INFO: [1;33mWarning: /Stage[main]/Ironic::Api/Service[ironic-api]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,890 INFO: [mNotice: /Stage[main]/Ironic::Conductor/Service[ironic-conductor]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,890 INFO: [1;33mWarning: /Stage[main]/Ironic::Conductor/Service[ironic-conductor]: Skipping because of failed dependencies[0m >2018-06-26 
09:34:05,891 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic::service::end]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:05,891 INFO: [1;33mWarning: /Stage[main]/Ironic::Deps/Anchor[ironic::service::end]: Skipping because of failed dependencies[0m >2018-06-26 09:34:05,960 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Db::Mysql/Openstacklib::Db::Mysql[ironic-inspector]/Openstacklib::Db::Mysql::Host_access[ironic-inspector_%]/Mysql_user[ironic-inspector@%]/ensure: created[0m >2018-06-26 09:34:05,983 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Db::Mysql/Openstacklib::Db::Mysql[ironic-inspector]/Openstacklib::Db::Mysql::Host_access[ironic-inspector_%]/Mysql_grant[ironic-inspector@%/ironic-inspector.*]/ensure: created[0m >2018-06-26 09:34:06,074 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Db::Mysql/Openstacklib::Db::Mysql[ironic-inspector]/Openstacklib::Db::Mysql::Host_access[ironic-inspector_192.0.3.1]/Mysql_user[ironic-inspector@192.0.3.1]/ensure: created[0m >2018-06-26 09:34:06,099 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Db::Mysql/Openstacklib::Db::Mysql[ironic-inspector]/Openstacklib::Db::Mysql::Host_access[ironic-inspector_192.0.3.1]/Mysql_grant[ironic-inspector@192.0.3.1/ironic-inspector.*]/ensure: created[0m >2018-06-26 09:34:06,841 INFO: [mNotice: /Stage[main]/Ironic::Inspector::Db::Sync/Exec[ironic-inspector-dbsync]: Triggered 'refresh' from 3 events[0m >2018-06-26 09:34:06,842 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic-inspector::dbsync::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:34:06,842 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic-inspector::service::begin]: Triggered 'refresh' from 3 events[0m >2018-06-26 09:34:06,859 INFO: [mNotice: /Stage[main]/Apache/Apache::Vhost[default]/Concat[15-default.conf]/File[/etc/httpd/conf.d/15-default.conf]/ensure: defined content as '{md5}a430bf4e003be964b419e7aea251c6c4'[0m >2018-06-26 09:34:06,892 INFO: [mNotice: 
/Stage[main]/Keystone::Wsgi::Apache/Apache::Vhost[keystone_wsgi_main]/Concat[10-keystone_wsgi_main.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_main.conf]/ensure: defined content as '{md5}0079c46a03862dea493a94f251fa39f8'[0m >2018-06-26 09:34:06,909 INFO: [mNotice: /Stage[main]/Keystone::Wsgi::Apache/Apache::Vhost[keystone_wsgi_admin]/Concat[10-keystone_wsgi_admin.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_admin.conf]/ensure: defined content as '{md5}b959efb43272a365fd4d18e5082da204'[0m >2018-06-26 09:34:15,981 INFO: [mNotice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/Package[swift-account]/ensure: created[0m >2018-06-26 09:34:24,647 INFO: [mNotice: /Stage[main]/Swift::Storage::Container/Swift::Storage::Generic[container]/Package[swift-container]/ensure: created[0m >2018-06-26 09:34:33,276 INFO: [mNotice: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/Package[swift-object]/ensure: created[0m >2018-06-26 09:34:33,277 INFO: [mNotice: /Stage[main]/Swift::Deps/Anchor[swift::install::end]: Triggered 'refresh' from 5 events[0m >2018-06-26 09:34:33,285 INFO: [mNotice: /Stage[main]/Swift/File[/var/lib/swift]/group: group changed 'root' to 'swift'[0m >2018-06-26 09:34:33,287 INFO: [mNotice: /Stage[main]/Swift/File[/etc/swift/swift.conf]/owner: owner changed 'root' to 'swift'[0m >2018-06-26 09:34:33,290 INFO: [mNotice: /Stage[main]/Swift/Swift_config[swift-hash/swift_hash_path_suffix]/value: value changed '%SWIFT_HASH_PATH_SUFFIX%' to '78dd74404f502708b1c6ec550569fbda3a84ff56'[0m >2018-06-26 09:34:33,291 INFO: [mNotice: /Stage[main]/Swift/Swift_config[swift-constraints/max_header_size]/ensure: created[0m >2018-06-26 09:34:33,293 INFO: [mNotice: /Stage[main]/Cinder::Deps/Anchor[cinder::service::end]: Dependency Package[neutron] has failures: true[0m >2018-06-26 09:34:33,293 INFO: [1;33mWarning: /Stage[main]/Cinder::Deps/Anchor[cinder::service::end]: Skipping because of failed dependencies[0m >2018-06-26 09:34:33,295 INFO: 
[mNotice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/bind_ip]/ensure: created[0m >2018-06-26 09:34:33,297 INFO: [mNotice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/workers]/value: value changed '8' to '12'[0m >2018-06-26 09:34:33,299 INFO: [mNotice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_name]/ensure: created[0m >2018-06-26 09:34:33,301 INFO: [mNotice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_facility]/ensure: created[0m >2018-06-26 09:34:33,302 INFO: [mNotice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_level]/ensure: created[0m >2018-06-26 09:34:33,303 INFO: [mNotice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_headers]/ensure: created[0m >2018-06-26 09:34:33,305 INFO: [mNotice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_address]/ensure: created[0m >2018-06-26 09:34:33,308 INFO: [mNotice: /Stage[main]/Swift::Proxy/Swift_proxy_config[pipeline:main/pipeline]/value: value changed 'catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server' to 'catch_errors healthcheck proxy-logging cache ratelimit bulk tempurl formpost authtoken keystone staticweb copy slo dlo versioned_writes proxy-logging proxy-server'[0m >2018-06-26 09:34:33,310 INFO: [mNotice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_name]/ensure: created[0m >2018-06-26 09:34:33,311 INFO: [mNotice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_facility]/ensure: created[0m >2018-06-26 09:34:33,313 INFO: [mNotice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_level]/ensure: created[0m >2018-06-26 09:34:33,314 INFO: [mNotice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_address]/ensure: created[0m >2018-06-26 09:34:33,316 INFO: [mNotice: 
/Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/log_handoffs]/ensure: created >2018-06-26 09:34:33,317 INFO: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/allow_account_management]/value: value changed 'true' to 'True' >2018-06-26 09:34:33,319 INFO: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/account_autocreate]/value: value changed 'true' to 'True' >2018-06-26 09:34:33,322 INFO: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/node_timeout]/ensure: created >2018-06-26 09:34:33,323 INFO: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/cors_allow_origin]/ensure: created >2018-06-26 09:34:33,324 INFO: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/strict_cors_mode]/ensure: created >2018-06-26 09:34:33,331 INFO: Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_containers_per_extraction]/ensure: created >2018-06-26 09:34:33,728 INFO: Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_failed_extractions]/ensure: created >2018-06-26 09:34:33,729 INFO: Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_deletes_per_request]/ensure: created >2018-06-26 09:34:33,731 INFO: Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/yield_frequency]/ensure: created >2018-06-26 09:34:33,736 INFO: Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/reseller_prefix]/ensure: created >2018-06-26 09:34:33,738 INFO: Notice: /Stage[main]/Swift::Proxy::Authtoken/File[/var/cache/swift]/mode: mode changed '0755' to '0700' >2018-06-26 09:34:33,739 INFO: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/log_name]/ensure: created >2018-06-26 09:34:33,740 INFO: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/signing_dir]/value: value changed 
'/tmp/keystone-signing-swift' to '/var/cache/swift' >2018-06-26 09:34:33,743 INFO: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_uri]/ensure: created >2018-06-26 09:34:33,744 INFO: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_url]/ensure: created >2018-06-26 09:34:33,745 INFO: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_plugin]/ensure: created >2018-06-26 09:34:33,747 INFO: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_domain_id]/ensure: created >2018-06-26 09:34:33,748 INFO: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/user_domain_id]/ensure: created >2018-06-26 09:34:33,750 INFO: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_name]/ensure: created >2018-06-26 09:34:33,751 INFO: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/username]/ensure: created >2018-06-26 09:34:33,752 INFO: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/password]/ensure: created >2018-06-26 09:34:33,754 INFO: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/delay_auth_decision]/ensure: created >2018-06-26 09:34:33,755 INFO: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/cache]/ensure: created >2018-06-26 09:34:33,757 INFO: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/include_service_catalog]/ensure: created >2018-06-26 09:34:33,759 INFO: Notice: /Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/use]/ensure: created >2018-06-26 09:34:33,762 INFO: Notice: /Stage[main]/Swift::Proxy::Copy/Swift_proxy_config[filter:copy/object_post_as_copy]/value: value changed 'false' to 'True' >2018-06-26 
09:34:33,764 INFO: Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_segments]/ensure: created >2018-06-26 09:34:33,766 INFO: Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_size]/ensure: created >2018-06-26 09:34:33,767 INFO: Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/min_segment_size]/ensure: created >2018-06-26 09:34:33,769 INFO: Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_after_segment]/ensure: created >2018-06-26 09:34:33,770 INFO: Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_segments_per_sec]/ensure: created >2018-06-26 09:34:33,772 INFO: Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_get_time]/ensure: created >2018-06-26 09:34:33,774 INFO: Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_after_segment]/ensure: created >2018-06-26 09:34:33,776 INFO: Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_segments_per_sec]/ensure: created >2018-06-26 09:34:33,777 INFO: Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/max_get_time]/ensure: created >2018-06-26 09:34:33,780 INFO: Notice: /Stage[main]/Swift::Proxy::Versioned_writes/Swift_proxy_config[filter:versioned_writes/allow_versioned_writes]/ensure: created >2018-06-26 09:34:33,783 INFO: Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/clock_accuracy]/ensure: created >2018-06-26 09:34:33,784 INFO: Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/max_sleep_time_seconds]/ensure: created >2018-06-26 09:34:33,786 INFO: Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/log_sleep_time_seconds]/ensure: created >2018-06-26 09:34:33,787 INFO: Notice: 
/Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/rate_buffer_seconds]/ensure: created >2018-06-26 09:34:33,789 INFO: Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/account_ratelimit]/ensure: created >2018-06-26 09:34:33,797 INFO: Notice: /Stage[main]/Swift::Proxy::Formpost/Swift_proxy_config[filter:formpost/use]/ensure: created >2018-06-26 09:34:33,800 INFO: Notice: /Stage[main]/Main/File[/srv/node]/ensure: created >2018-06-26 09:34:33,802 INFO: Notice: /Stage[main]/Main/File[/srv/node/1]/ensure: created >2018-06-26 09:34:34,120 INFO: Notice: /Stage[main]/Swift::Ringbuilder/Swift::Ringbuilder::Create[object]/Exec[create_object]/returns: executed successfully >2018-06-26 09:34:34,385 INFO: Warning: Unexpected line: Ring file /etc/swift/object.ring.gz not found, probably it hasn't been written yet >2018-06-26 09:34:34,386 INFO: Warning: Unexpected line: Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta >2018-06-26 09:34:34,386 INFO: Warning: Unexpected line: There are no devices in this ring, or all devices have been deleted >2018-06-26 09:34:34,669 INFO: Notice: /Stage[main]/Main/Ring_object_device[192.0.3.1:6000/1]/ensure: created >2018-06-26 09:34:34,934 INFO: Notice: /Stage[main]/Swift::Ringbuilder/Swift::Ringbuilder::Create[account]/Exec[create_account]/returns: executed successfully >2018-06-26 09:34:35,206 INFO: Warning: Unexpected line: Ring file /etc/swift/account.ring.gz not found, probably it hasn't been written yet >2018-06-26 09:34:35,207 INFO: Warning: Unexpected line: Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta >2018-06-26 09:34:35,207 INFO: Warning: Unexpected line: There are no devices in this ring, or all devices have been deleted >2018-06-26 09:34:35,478 INFO: Notice: 
/Stage[main]/Main/Ring_account_device[192.0.3.1:6002/1]/ensure: created >2018-06-26 09:34:35,737 INFO: Notice: /Stage[main]/Swift::Ringbuilder/Swift::Ringbuilder::Create[container]/Exec[create_container]/returns: executed successfully >2018-06-26 09:34:36,005 INFO: Warning: Unexpected line: Ring file /etc/swift/container.ring.gz not found, probably it hasn't been written yet >2018-06-26 09:34:36,005 INFO: Warning: Unexpected line: Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta >2018-06-26 09:34:36,006 INFO: Warning: Unexpected line: There are no devices in this ring, or all devices have been deleted >2018-06-26 09:34:36,290 INFO: Notice: /Stage[main]/Main/Ring_container_device[192.0.3.1:6001/1]/ensure: created >2018-06-26 09:34:36,609 INFO: Notice: /Stage[main]/Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[object]/Exec[rebalance_object]: Triggered 'refresh' from 1 events >2018-06-26 09:34:36,921 INFO: Notice: /Stage[main]/Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[account]/Exec[rebalance_account]: Triggered 'refresh' from 1 events >2018-06-26 09:34:37,236 INFO: Notice: /Stage[main]/Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[container]/Exec[rebalance_container]: Triggered 'refresh' from 1 events >2018-06-26 09:34:37,239 INFO: Notice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/File[/etc/swift/account-server/]/owner: owner changed 'root' to 'swift' >2018-06-26 09:34:37,239 INFO: Notice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/File[/etc/swift/account-server/]/group: group changed 'root' to 'swift' >2018-06-26 09:34:37,242 INFO: Notice: /Stage[main]/Swift::Storage::Container/Swift::Storage::Generic[container]/File[/etc/swift/container-server/]/owner: owner changed 'root' to 'swift' >2018-06-26 09:34:37,243 INFO: Notice: 
/Stage[main]/Swift::Storage::Container/Swift::Storage::Generic[container]/File[/etc/swift/container-server/]/group: group changed 'root' to 'swift' >2018-06-26 09:34:37,245 INFO: Notice: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/File[/etc/swift/object-server/]/owner: owner changed 'root' to 'swift' >2018-06-26 09:34:37,246 INFO: Notice: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/File[/etc/swift/object-server/]/group: group changed 'root' to 'swift' >2018-06-26 09:34:37,262 INFO: Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/etc/swift/account-server.conf]/content: content changed '{md5}07e5a1a1e5a0ab83d745e20680eb32c1' to '{md5}ee1c0ba7301194a826ff8331d5631f4c' >2018-06-26 09:34:37,262 INFO: Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/etc/swift/account-server.conf]/mode: mode changed '0640' to '0644' >2018-06-26 09:34:37,274 INFO: Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/etc/swift/container-server.conf]/content: content changed '{md5}4998257eb89ff63e838b37686ebb1ee7' to '{md5}48c0b8a6ff19fb18732b2c212a6102fa' >2018-06-26 09:34:37,274 INFO: Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/etc/swift/container-server.conf]/mode: mode changed '0640' to '0644' >2018-06-26 09:34:37,286 INFO: Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat[/etc/swift/object-server.conf]/File[/etc/swift/object-server.conf]/content: content changed '{md5}8c3bfdea900f37c8b2cbd5d9fe5d664c' to '{md5}6fd9041f83a127e69e8e0d04f1ead2d8' >2018-06-26 09:34:37,286 INFO: Notice: 
/Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat[/etc/swift/object-server.conf]/File[/etc/swift/object-server.conf]/mode: mode changed '0640' to '0644' >2018-06-26 09:34:37,288 INFO: Notice: /Stage[main]/Swift::Deps/Anchor[swift::config::end]: Triggered 'refresh' from 62 events >2018-06-26 09:34:37,289 INFO: Notice: /Stage[main]/Swift::Deps/Anchor[swift::service::begin]: Triggered 'refresh' from 3 events >2018-06-26 09:34:37,296 INFO: Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[/var/www/cgi-bin/heat]/ensure: created >2018-06-26 09:34:37,300 INFO: Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[heat_api_wsgi]/ensure: defined content as '{md5}640891728ce5d46ae40234228561597c' >2018-06-26 09:34:37,304 INFO: Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[heat_api_cfn_wsgi]/ensure: defined content as '{md5}c3ae61ab87649c8cdfab8977da2b194b' >2018-06-26 09:34:37,694 INFO: Notice: /Stage[main]/Ironic::Pxe/Apache::Vhost[ipxe_vhost]/Concat[10-ipxe_vhost.conf]/File[/etc/httpd/conf.d/10-ipxe_vhost.conf]/ensure: defined content as '{md5}0ffa81700d1dc962149c4ec89737928f' >2018-06-26 09:34:37,774 INFO: Notice: /Stage[main]/Mistral::Db::Mysql/Openstacklib::Db::Mysql[mistral]/Openstacklib::Db::Mysql::Host_access[mistral_%]/Mysql_user[mistral@%]/ensure: created >2018-06-26 09:34:37,800 INFO: Notice: /Stage[main]/Mistral::Db::Mysql/Openstacklib::Db::Mysql[mistral]/Openstacklib::Db::Mysql::Host_access[mistral_%]/Mysql_grant[mistral@%/mistral.*]/ensure: created >2018-06-26 09:34:37,895 INFO: Notice: /Stage[main]/Mistral::Db::Mysql/Openstacklib::Db::Mysql[mistral]/Openstacklib::Db::Mysql::Host_access[mistral_192.0.3.1]/Mysql_user[mistral@192.0.3.1]/ensure: created >2018-06-26 09:34:37,925 INFO: Notice: 
/Stage[main]/Mistral::Db::Mysql/Openstacklib::Db::Mysql[mistral]/Openstacklib::Db::Mysql::Host_access[mistral_192.0.3.1]/Mysql_grant[mistral@192.0.3.1/mistral.*]/ensure: created >2018-06-26 09:34:37,951 INFO: Notice: /Stage[main]/Mistral::Deps/Anchor[mistral::db::end]: Triggered 'refresh' from 1 events >2018-06-26 09:34:37,952 INFO: Notice: /Stage[main]/Mistral::Deps/Anchor[mistral::dbsync::begin]: Dependency Package[neutron] has failures: true >2018-06-26 09:34:37,952 INFO: Warning: /Stage[main]/Mistral::Deps/Anchor[mistral::dbsync::begin]: Skipping because of failed dependencies >2018-06-26 09:34:37,953 INFO: Notice: /Stage[main]/Mistral::Db::Sync/Exec[mistral-db-sync]: Dependency Package[neutron] has failures: true >2018-06-26 09:34:37,953 INFO: Warning: /Stage[main]/Mistral::Db::Sync/Exec[mistral-db-sync]: Skipping because of failed dependencies >2018-06-26 09:34:37,968 INFO: Notice: /Stage[main]/Tripleo::Ui/Apache::Vhost[tripleo-ui]/Concat[25-tripleo-ui.conf]/File[/etc/httpd/conf.d/25-tripleo-ui.conf]/ensure: defined content as '{md5}4f6b7eba2e32e551ae435d028eaab745' >2018-06-26 09:34:38,047 INFO: Notice: /Stage[main]/Zaqar::Db::Mysql/Openstacklib::Db::Mysql[zaqar]/Openstacklib::Db::Mysql::Host_access[zaqar_%]/Mysql_user[zaqar@%]/ensure: created >2018-06-26 09:34:38,072 INFO: Notice: /Stage[main]/Zaqar::Db::Mysql/Openstacklib::Db::Mysql[zaqar]/Openstacklib::Db::Mysql::Host_access[zaqar_%]/Mysql_grant[zaqar@%/zaqar.*]/ensure: created >2018-06-26 09:34:38,169 INFO: Notice: /Stage[main]/Zaqar::Db::Mysql/Openstacklib::Db::Mysql[zaqar]/Openstacklib::Db::Mysql::Host_access[zaqar_192.0.3.1]/Mysql_user[zaqar@192.0.3.1]/ensure: created >2018-06-26 09:34:38,195 INFO: Notice: /Stage[main]/Zaqar::Db::Mysql/Openstacklib::Db::Mysql[zaqar]/Openstacklib::Db::Mysql::Host_access[zaqar_192.0.3.1]/Mysql_grant[zaqar@192.0.3.1/zaqar.*]/ensure: created >2018-06-26 09:34:38,221 INFO: Notice: 
/Stage[main]/Zaqar::Deps/Anchor[zaqar::db::end]: Triggered 'refresh' from 1 events >2018-06-26 09:34:38,221 INFO: Notice: /Stage[main]/Zaqar::Deps/Anchor[zaqar::dbsync::begin]: Dependency Package[neutron] has failures: true >2018-06-26 09:34:38,222 INFO: Warning: /Stage[main]/Zaqar::Deps/Anchor[zaqar::dbsync::begin]: Skipping because of failed dependencies >2018-06-26 09:34:38,222 INFO: Notice: /Stage[main]/Zaqar::Db::Sync/Exec[zaqar-db-sync]: Dependency Package[neutron] has failures: true >2018-06-26 09:34:38,222 INFO: Warning: /Stage[main]/Zaqar::Db::Sync/Exec[zaqar-db-sync]: Skipping because of failed dependencies >2018-06-26 09:34:38,223 INFO: Notice: /Stage[main]/Zaqar::Deps/Anchor[zaqar::dbsync::end]: Dependency Package[neutron] has failures: true >2018-06-26 09:34:38,223 INFO: Warning: /Stage[main]/Zaqar::Deps/Anchor[zaqar::dbsync::end]: Skipping because of failed dependencies >2018-06-26 09:34:38,223 INFO: Notice: /Stage[main]/Zaqar::Deps/Anchor[zaqar::service::begin]: Dependency Package[neutron] has failures: true >2018-06-26 09:34:38,223 INFO: Warning: /Stage[main]/Zaqar::Deps/Anchor[zaqar::service::begin]: Skipping because of failed dependencies >2018-06-26 09:34:38,224 INFO: Notice: /Stage[main]/Zaqar::Server/Service[openstack-zaqar]: Dependency Package[neutron] has failures: true >2018-06-26 09:34:38,224 INFO: Warning: /Stage[main]/Zaqar::Server/Service[openstack-zaqar]: Skipping because of failed dependencies >2018-06-26 09:34:38,225 INFO: Notice: /Stage[main]/Main/Zaqar::Server_instance[1]/Service[openstack-zaqar@1]: Dependency Package[neutron] has failures: true >2018-06-26 09:34:38,225 INFO: Warning: /Stage[main]/Main/Zaqar::Server_instance[1]/Service[openstack-zaqar@1]: Skipping because of failed dependencies >2018-06-26 09:34:38,225 INFO: Notice: /Stage[main]/Zaqar::Deps/Anchor[zaqar::service::end]: Dependency Package[neutron] has failures: true 
>2018-06-26 09:34:38,226 INFO: Warning: /Stage[main]/Zaqar::Deps/Anchor[zaqar::service::end]: Skipping because of failed dependencies >2018-06-26 09:34:38,248 INFO: Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/Apache::Vhost[nova_api_wsgi]/Concat[10-nova_api_wsgi.conf]/File[/etc/httpd/conf.d/10-nova_api_wsgi.conf]/ensure: defined content as '{md5}2c041bd06ac16b6ff7731b11dc905c1b' >2018-06-26 09:34:38,263 INFO: Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/Apache::Vhost[placement_wsgi]/Concat[10-placement_wsgi.conf]/File[/etc/httpd/conf.d/10-placement_wsgi.conf]/ensure: defined content as '{md5}cc41af65349be3524c45160ba4c6d8a2' >2018-06-26 09:34:38,286 INFO: Notice: /Stage[main]/Ironic::Wsgi::Apache/Openstacklib::Wsgi::Apache[ironic_wsgi]/Apache::Vhost[ironic_wsgi]/Concat[10-ironic_wsgi.conf]/File[/etc/httpd/conf.d/10-ironic_wsgi.conf]/ensure: defined content as '{md5}560f05bd28a289655baf29c5a4d47e0c' >2018-06-26 09:34:38,302 INFO: Notice: /Stage[main]/Zaqar::Wsgi::Apache/Openstacklib::Wsgi::Apache[zaqar_wsgi]/Apache::Vhost[zaqar_wsgi]/Concat[10-zaqar_wsgi.conf]/File[/etc/httpd/conf.d/10-zaqar_wsgi.conf]/ensure: defined content as '{md5}446da5be43852dd3c3427f80b519cd7d' >2018-06-26 09:34:38,319 INFO: Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/Apache::Vhost[heat_api_wsgi]/Concat[10-heat_api_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_wsgi.conf]/ensure: defined content as '{md5}4a7ec1a5a7e11ca0f268d0fcbbef166a' >2018-06-26 09:34:38,334 INFO: Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/Apache::Vhost[heat_api_cfn_wsgi]/Concat[10-heat_api_cfn_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_cfn_wsgi.conf]/ensure: defined content as '{md5}fddbf8a4bb37d078d2168b11f9a87801' >2018-06-26 09:34:38,346 INFO: 
Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]: Triggered 'refresh' from 41 events >2018-06-26 09:34:39,458 INFO: Notice: /Stage[main]/Keystone/Exec[keystone-manage fernet_setup]: Triggered 'refresh' from 2 events >2018-06-26 09:34:40,427 INFO: Notice: /Stage[main]/Keystone/Exec[keystone-manage credential_setup]: Triggered 'refresh' from 2 events >2018-06-26 09:34:40,455 INFO: Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]/ensure: created >2018-06-26 09:34:40,527 INFO: Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_user[keystone@%]/ensure: created >2018-06-26 09:34:40,883 INFO: Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_grant[keystone@%/keystone.*]/ensure: created >2018-06-26 09:34:40,981 INFO: Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_192.0.3.1]/Mysql_user[keystone@192.0.3.1]/ensure: created >2018-06-26 09:34:41,007 INFO: Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_192.0.3.1]/Mysql_grant[keystone@192.0.3.1/keystone.*]/ensure: created >2018-06-26 09:34:41,033 INFO: Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::db::end]: Triggered 'refresh' from 1 events >2018-06-26 09:34:41,035 INFO: Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::begin]: Dependency Package[neutron] has failures: true >2018-06-26 09:34:41,035 INFO: Warning: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::begin]: Skipping because of failed dependencies >2018-06-26 09:34:41,035 INFO: Notice: /Stage[main]/Keystone::Db::Sync/Exec[keystone-manage db_sync]: Dependency Package[neutron] has failures: true >2018-06-26 
09:34:41,035 INFO: Warning: /Stage[main]/Keystone::Db::Sync/Exec[keystone-manage db_sync]: Skipping because of failed dependencies >2018-06-26 09:34:41,036 INFO: Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]: Dependency Package[neutron] has failures: true >2018-06-26 09:34:41,036 INFO: Warning: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]: Skipping because of failed dependencies >2018-06-26 09:34:41,036 INFO: Notice: /Stage[main]/Keystone/Exec[keystone-manage bootstrap]: Dependency Package[neutron] has failures: true >2018-06-26 09:34:41,037 INFO: Warning: /Stage[main]/Keystone/Exec[keystone-manage bootstrap]: Skipping because of failed dependencies >2018-06-26 09:34:41,037 INFO: Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::service::begin]: Dependency Package[neutron] has failures: true >2018-06-26 09:34:41,037 INFO: Warning: /Stage[main]/Keystone::Deps/Anchor[keystone::service::begin]: Skipping because of failed dependencies >2018-06-26 09:34:41,038 INFO: Notice: /Stage[main]/Apache::Service/Service[httpd]: Dependency Package[neutron] has failures: true >2018-06-26 09:34:41,038 INFO: Warning: /Stage[main]/Apache::Service/Service[httpd]: Skipping because of failed dependencies >2018-06-26 09:34:41,039 INFO: Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::service::end]: Dependency Package[neutron] has failures: true >2018-06-26 09:34:41,039 INFO: Warning: /Stage[main]/Keystone::Deps/Anchor[keystone::service::end]: Skipping because of failed dependencies >2018-06-26 09:37:34,892 INFO: Error: Failed to apply catalog: Execution of '/bin/openstack role list --quiet --format csv' returned 1: Unable to establish connection to http://192.0.3.1:35357/v3/roles?: HTTPConnectionPool(host='192.0.3.1', port=35357): Max retries exceeded with url: /v3/roles (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 
0x7f3622b07e10>: Failed to establish a new connection: [Errno 111] Connection refused',)) (tried 45, for a total of 170 seconds) >2018-06-26 09:37:34,893 INFO: Changes: >2018-06-26 09:37:34,893 INFO: Total: 879 >2018-06-26 09:37:34,893 INFO: Events: >2018-06-26 09:37:34,893 INFO: Failure: 1 >2018-06-26 09:37:34,893 INFO: Success: 879 >2018-06-26 09:37:34,893 INFO: Total: 880 >2018-06-26 09:37:34,893 INFO: Resources: >2018-06-26 09:37:34,893 INFO: Failed: 1 >2018-06-26 09:37:34,893 INFO: Total: 2667 >2018-06-26 09:37:34,894 INFO: Skipped: 375 >2018-06-26 09:37:34,894 INFO: Restarted: 49 >2018-06-26 09:37:34,894 INFO: Changed: 868 >2018-06-26 09:37:34,894 INFO: Out of sync: 869 >2018-06-26 09:37:34,894 INFO: Time: >2018-06-26 09:37:34,894 INFO: Policy rcd: 0.00 >2018-06-26 09:37:34,894 INFO: Resources: 0.00 >2018-06-26 09:37:34,894 INFO: Swift config: 0.00 >2018-06-26 09:37:34,894 INFO: Concat file: 0.00 >2018-06-26 09:37:34,894 INFO: Glance swift config: 0.00 >2018-06-26 09:37:34,895 INFO: Nova paste api ini: 0.00 >2018-06-26 09:37:34,895 INFO: Anchor: 0.01 >2018-06-26 09:37:34,895 INFO: Concat fragment: 0.01 >2018-06-26 09:37:34,895 INFO: Sysctl: 0.01 >2018-06-26 09:37:34,895 INFO: Swift object expirer config: 0.01 >2018-06-26 09:37:34,895 INFO: Group: 0.02 >2018-06-26 09:37:34,895 INFO: Sysctl runtime: 0.02 >2018-06-26 09:37:34,895 INFO: Cron: 0.02 >2018-06-26 09:37:34,895 INFO: Archive: 0.03 >2018-06-26 09:37:34,896 INFO: Vs bridge: 0.05 >2018-06-26 09:37:34,896 INFO: User: 0.11 >2018-06-26 09:37:34,896 INFO: Mysql database: 0.24 >2018-06-26 09:37:34,896 INFO: Glance cache config: 0.29 >2018-06-26 09:37:34,896 INFO: Mistral config: 0.42 >2018-06-26 09:37:34,896 INFO: Glance registry config: 0.42 >2018-06-26 09:37:34,896 INFO: Swift proxy config: 0.48 >2018-06-26 09:37:34,896 INFO: Ring account device: 0.54 >2018-06-26 09:37:34,896 INFO: Ring object device: 0.55 >2018-06-26 09:37:34,897 INFO: Ring container device: 0.55 >2018-06-26 09:37:34,897 INFO: Ironic 
inspector config: 0.69 >2018-06-26 09:37:34,897 INFO: Zaqar config: 0.85 >2018-06-26 09:37:34,897 INFO: Augeas: 1.12 >2018-06-26 09:37:34,897 INFO: Mysql grant: 1.33 >2018-06-26 09:37:34,897 INFO: Mysql user: 1.41 >2018-06-26 09:37:34,897 INFO: Keystone config: 1.81 >2018-06-26 09:37:34,897 INFO: Mysql datadir: 10.65 >2018-06-26 09:37:34,897 INFO: Exec: 14.89 >2018-06-26 09:37:34,897 INFO: Nova config: 15.16 >2018-06-26 09:37:34,898 INFO: Last run: 1529986054 >2018-06-26 09:37:34,898 INFO: Config retrieval: 19.98 >2018-06-26 09:37:34,898 INFO: File: 2.07 >2018-06-26 09:37:34,898 INFO: Heat config: 2.21 >2018-06-26 09:37:34,898 INFO: Rabbitmq plugin: 2.23 >2018-06-26 09:37:34,898 INFO: Glance api config: 2.52 >2018-06-26 09:37:34,898 INFO: Nova cell v2: 2.96 >2018-06-26 09:37:34,898 INFO: Firewall: 29.28 >2018-06-26 09:37:34,898 INFO: Ironic config: 5.57 >2018-06-26 09:37:34,898 INFO: Package: 567.79 >2018-06-26 09:37:34,899 INFO: Total: 695.90 >2018-06-26 09:37:34,899 INFO: Service: 9.56 >2018-06-26 09:37:34,899 INFO: Version: >2018-06-26 09:37:34,899 INFO: Config: 1529985166 >2018-06-26 09:37:34,899 INFO: Puppet: 4.8.2 >2018-06-26 09:37:45,648 INFO: + rc=1 >2018-06-26 09:37:45,648 INFO: + set -e >2018-06-26 09:37:45,648 INFO: + echo 'puppet apply exited with exit code 1' >2018-06-26 09:37:45,648 INFO: puppet apply exited with exit code 1 >2018-06-26 09:37:45,648 INFO: + '[' 1 '!=' 2 -a 1 '!=' 0 ']' >2018-06-26 09:37:45,648 INFO: + exit 1 >2018-06-26 09:37:45,649 INFO: [2018-06-26 09:37:45,648] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1] >2018-06-26 09:37:45,649 INFO: >2018-06-26 09:37:45,649 INFO: [2018-06-26 09:37:45,649] (os-refresh-config) [ERROR] Aborting... 
>2018-06-26 09:37:45,658 DEBUG: An exception occurred >Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 2330, in install > _run_orc(instack_env) > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 1597, in _run_orc > _run_live_command(args, instack_env, 'os-refresh-config') > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 676, in _run_live_command > raise RuntimeError('%s failed. See log for details.' % name) >RuntimeError: os-refresh-config failed. See log for details. >2018-06-26 09:37:45,677 ERROR: >############################################################################# >Undercloud install failed. > >Reason: os-refresh-config failed. See log for details. > >See the previous output for details about what went wrong. The full install >log can be found at /home/sudheer/.instack/install-undercloud.log. > >############################################################################# > >2018-06-26 09:48:54,393 INFO: Logging to /home/sudheer/.instack/install-undercloud.log >2018-06-26 09:48:54,491 INFO: Checking for a FQDN hostname... >2018-06-26 09:48:54,512 INFO: Static hostname detected as facebook.local.com >2018-06-26 09:48:54,529 INFO: Transient hostname detected as facebook.local.com >2018-06-26 09:48:54,530 WARNING: Option "undercloud_public_vip" from group "DEFAULT" is deprecated. Use option "undercloud_public_host" from group "DEFAULT". >2018-06-26 09:48:54,530 WARNING: Option "undercloud_admin_vip" from group "DEFAULT" is deprecated. Use option "undercloud_admin_host" from group "DEFAULT". >2018-06-26 09:48:54,531 WARNING: Option "masquerade_network" from group "DEFAULT" is deprecated for removal (With support for routed networks, masquerading of the provisioning networks is moved to a boolean option for each subnet.). Its value may be silently ignored in the future. 
>2018-06-26 09:48:54,531 WARNING: Option "ipxe_deploy" from group "DEFAULT" is deprecated. Use option "ipxe_enabled" from group "DEFAULT". >2018-06-26 09:48:54,532 WARNING: Option "network_cidr" from group "DEFAULT" is deprecated. Use option "cidr" from group "ctlplane-subnet". >2018-06-26 09:48:54,532 WARNING: Option "dhcp_start" from group "DEFAULT" is deprecated. Use option "dhcp_start" from group "ctlplane-subnet". >2018-06-26 09:48:54,532 WARNING: Option "dhcp_end" from group "DEFAULT" is deprecated. Use option "dhcp_end" from group "ctlplane-subnet". >2018-06-26 09:48:54,532 WARNING: Option "inspection_iprange" from group "DEFAULT" is deprecated. Use option "inspection_iprange" from group "ctlplane-subnet". >2018-06-26 09:48:54,532 WARNING: Option "network_gateway" from group "DEFAULT" is deprecated. Use option "gateway" from group "ctlplane-subnet". >2018-06-26 09:48:54,570 INFO: Running yum clean all >2018-06-26 09:48:54,732 INFO: Loaded plugins: langpacks, product-id, search-disabled-repos, subscription- >2018-06-26 09:48:54,732 INFO: : manager >2018-06-26 09:48:58,843 INFO: Cleaning repos: rhel-7-server-extras-rpms rhel-7-server-openstack-beta-rpms >2018-06-26 09:48:58,843 INFO: : rhel-7-server-rh-common-rpms rhel-7-server-rpms >2018-06-26 09:48:58,843 INFO: : rhel-ha-for-rhel-7-server-rpms >2018-06-26 09:48:58,843 INFO: Cleaning up everything >2018-06-26 09:48:58,844 INFO: Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos >2018-06-26 09:48:58,930 INFO: yum-clean-all completed successfully >2018-06-26 09:48:58,931 INFO: Running yum update >2018-06-26 09:48:59,104 INFO: Loaded plugins: langpacks, product-id, search-disabled-repos, subscription- >2018-06-26 09:48:59,105 INFO: : manager >2018-06-26 09:49:33,837 INFO: No packages marked for update >2018-06-26 09:49:33,888 INFO: yum-update completed successfully >2018-06-26 09:49:34,047 INFO: Running instack >2018-06-26 09:49:34,197 INFO: INFO: 
2018-06-26 09:49:34,197 -- Starting run of instack >2018-06-26 09:49:34,204 INFO: INFO: 2018-06-26 09:49:34,204 -- Using json file: /usr/share/instack-undercloud/json-files/rhel-7-undercloud-packages.json >2018-06-26 09:49:34,204 INFO: INFO: 2018-06-26 09:49:34,204 -- Running Installation >2018-06-26 09:49:34,205 INFO: INFO: 2018-06-26 09:49:34,205 -- Initialized with elements path: /usr/share/tripleo-puppet-elements /usr/share/instack-undercloud /usr/share/tripleo-image-elements /usr/share/diskimage-builder/elements >2018-06-26 09:49:34,215 INFO: WARNING: 2018-06-26 09:49:34,215 -- expand_dependencies() deprecated, use get_elements >2018-06-26 09:49:34,230 INFO: INFO: 2018-06-26 09:49:34,229 -- List of all elements and dependencies: undercloud-install dib-python source-repositories install-types puppet-modules install-bin pip-manifest puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url pkg-map enable-packages-install puppet os-apply-config hiera package-installs >2018-06-26 09:49:34,230 INFO: INFO: 2018-06-26 09:49:34,229 -- Excluding element pip-and-virtualenv >2018-06-26 09:49:34,230 INFO: INFO: 2018-06-26 09:49:34,230 -- Excluding element pip-manifest >2018-06-26 09:49:34,230 INFO: INFO: 2018-06-26 09:49:34,230 -- Excluding element package-installs >2018-06-26 09:49:34,230 INFO: INFO: 2018-06-26 09:49:34,230 -- Excluding element pkg-map >2018-06-26 09:49:34,230 INFO: INFO: 2018-06-26 09:49:34,230 -- Excluding element puppet >2018-06-26 09:49:34,230 INFO: INFO: 2018-06-26 09:49:34,230 -- Excluding element cache-url >2018-06-26 09:49:34,231 INFO: INFO: 2018-06-26 09:49:34,230 -- Excluding element dib-python >2018-06-26 09:49:34,231 INFO: INFO: 2018-06-26 09:49:34,230 -- Excluding element install-bin >2018-06-26 09:49:34,231 INFO: INFO: 2018-06-26 09:49:34,230 -- List of all elements and dependencies after excludes: undercloud-install source-repositories install-types puppet-modules puppet-stack-config os-refresh-config 
element-manifest manifests enable-packages-install os-apply-config hiera >2018-06-26 09:49:34,349 INFO: INFO: 2018-06-26 09:49:34,348 -- Running hook extra-data >2018-06-26 09:49:34,349 INFO: INFO: 2018-06-26 09:49:34,349 -- ############### Begin stdout/stderr logging ############### >2018-06-26 09:49:34,360 INFO: dib-run-parts Sourcing environment file /tmp/tmpTGWWp9/extra-data.d/../environment.d/00-dib-v2-env >2018-06-26 09:49:34,361 INFO: + source /tmp/tmpTGWWp9/extra-data.d/../environment.d/00-dib-v2-env >2018-06-26 09:49:34,362 INFO: ++ export 'IMAGE_ELEMENT=undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs' >2018-06-26 09:49:34,362 INFO: ++ IMAGE_ELEMENT='undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs' >2018-06-26 09:49:34,362 INFO: ++ export 'IMAGE_ELEMENT_YAML={cache-url: /usr/share/diskimage-builder/elements/cache-url, dib-python: /usr/share/diskimage-builder/elements/dib-python, >2018-06-26 09:49:34,362 INFO: element-manifest: /usr/share/diskimage-builder/elements/element-manifest, enable-packages-install: /usr/share/tripleo-image-elements/enable-packages-install, >2018-06-26 09:49:34,362 INFO: hiera: /usr/share/tripleo-puppet-elements/hiera, install-bin: /usr/share/diskimage-builder/elements/install-bin, >2018-06-26 09:49:34,362 INFO: install-types: /usr/share/diskimage-builder/elements/install-types, manifests: /usr/share/diskimage-builder/elements/manifests, >2018-06-26 09:49:34,363 INFO: os-apply-config: /usr/share/tripleo-image-elements/os-apply-config, os-refresh-config: 
/usr/share/tripleo-image-elements/os-refresh-config, >2018-06-26 09:49:34,363 INFO: package-installs: /usr/share/diskimage-builder/elements/package-installs, pip-and-virtualenv: /usr/share/diskimage-builder/elements/pip-and-virtualenv, >2018-06-26 09:49:34,363 INFO: pip-manifest: /usr/share/tripleo-image-elements/pip-manifest, pkg-map: /usr/share/diskimage-builder/elements/pkg-map, >2018-06-26 09:49:34,363 INFO: puppet: /usr/share/tripleo-puppet-elements/puppet, puppet-modules: /usr/share/tripleo-puppet-elements/puppet-modules, >2018-06-26 09:49:34,363 INFO: puppet-stack-config: /usr/share/instack-undercloud/puppet-stack-config, source-repositories: /usr/share/diskimage-builder/elements/source-repositories, >2018-06-26 09:49:34,363 INFO: undercloud-install: /usr/share/instack-undercloud/undercloud-install} >2018-06-26 09:49:34,364 INFO: ' >2018-06-26 09:49:34,364 INFO: ++ IMAGE_ELEMENT_YAML='{cache-url: /usr/share/diskimage-builder/elements/cache-url, dib-python: /usr/share/diskimage-builder/elements/dib-python, >2018-06-26 09:49:34,364 INFO: element-manifest: /usr/share/diskimage-builder/elements/element-manifest, enable-packages-install: /usr/share/tripleo-image-elements/enable-packages-install, >2018-06-26 09:49:34,364 INFO: hiera: /usr/share/tripleo-puppet-elements/hiera, install-bin: /usr/share/diskimage-builder/elements/install-bin, >2018-06-26 09:49:34,364 INFO: install-types: /usr/share/diskimage-builder/elements/install-types, manifests: /usr/share/diskimage-builder/elements/manifests, >2018-06-26 09:49:34,364 INFO: os-apply-config: /usr/share/tripleo-image-elements/os-apply-config, os-refresh-config: /usr/share/tripleo-image-elements/os-refresh-config, >2018-06-26 09:49:34,364 INFO: package-installs: /usr/share/diskimage-builder/elements/package-installs, pip-and-virtualenv: /usr/share/diskimage-builder/elements/pip-and-virtualenv, >2018-06-26 09:49:34,365 INFO: pip-manifest: /usr/share/tripleo-image-elements/pip-manifest, pkg-map: 
/usr/share/diskimage-builder/elements/pkg-map, >2018-06-26 09:49:34,365 INFO: puppet: /usr/share/tripleo-puppet-elements/puppet, puppet-modules: /usr/share/tripleo-puppet-elements/puppet-modules, >2018-06-26 09:49:34,365 INFO: puppet-stack-config: /usr/share/instack-undercloud/puppet-stack-config, source-repositories: /usr/share/diskimage-builder/elements/source-repositories, >2018-06-26 09:49:34,365 INFO: undercloud-install: /usr/share/instack-undercloud/undercloud-install} >2018-06-26 09:49:34,365 INFO: ' >2018-06-26 09:49:34,365 INFO: ++ export -f get_image_element_array >2018-06-26 09:49:34,365 INFO: + set +o xtrace >2018-06-26 09:49:34,365 INFO: dib-run-parts Sourcing environment file /tmp/tmpTGWWp9/extra-data.d/../environment.d/01-export-install-types.bash >2018-06-26 09:49:34,366 INFO: + source /tmp/tmpTGWWp9/extra-data.d/../environment.d/01-export-install-types.bash >2018-06-26 09:49:34,366 INFO: ++ export DIB_DEFAULT_INSTALLTYPE=package >2018-06-26 09:49:34,366 INFO: ++ DIB_DEFAULT_INSTALLTYPE=package >2018-06-26 09:49:34,366 INFO: + set +o xtrace >2018-06-26 09:49:34,366 INFO: dib-run-parts Sourcing environment file /tmp/tmpTGWWp9/extra-data.d/../environment.d/01-puppet-module-pins.sh >2018-06-26 09:49:34,366 INFO: + source /tmp/tmpTGWWp9/extra-data.d/../environment.d/01-puppet-module-pins.sh >2018-06-26 09:49:34,366 INFO: ++ export DIB_REPOREF_puppetlabs_ntp=4.2.x >2018-06-26 09:49:34,366 INFO: ++ DIB_REPOREF_puppetlabs_ntp=4.2.x >2018-06-26 09:49:34,366 INFO: + set +o xtrace >2018-06-26 09:49:34,367 INFO: dib-run-parts Sourcing environment file /tmp/tmpTGWWp9/extra-data.d/../environment.d/02-puppet-modules-install-types.sh >2018-06-26 09:49:34,368 INFO: + source /tmp/tmpTGWWp9/extra-data.d/../environment.d/02-puppet-modules-install-types.sh >2018-06-26 09:49:34,368 INFO: ++ DIB_DEFAULT_INSTALLTYPE=package >2018-06-26 09:49:34,368 INFO: ++ DIB_INSTALLTYPE_puppet_modules=package >2018-06-26 09:49:34,368 INFO: ++ '[' package = source ']' >2018-06-26 
09:49:34,368 INFO: + set +o xtrace >2018-06-26 09:49:34,369 INFO: dib-run-parts Sourcing environment file /tmp/tmpTGWWp9/extra-data.d/../environment.d/10-os-apply-config-venv-dir.bash >2018-06-26 09:49:34,370 INFO: + source /tmp/tmpTGWWp9/extra-data.d/../environment.d/10-os-apply-config-venv-dir.bash >2018-06-26 09:49:34,370 INFO: ++ '[' -z '' ']' >2018-06-26 09:49:34,371 INFO: ++ export OS_APPLY_CONFIG_VENV_DIR=/opt/stack/venvs/os-apply-config >2018-06-26 09:49:34,371 INFO: ++ OS_APPLY_CONFIG_VENV_DIR=/opt/stack/venvs/os-apply-config >2018-06-26 09:49:34,371 INFO: + set +o xtrace >2018-06-26 09:49:34,371 INFO: dib-run-parts Sourcing environment file /tmp/tmpTGWWp9/extra-data.d/../environment.d/14-manifests >2018-06-26 09:49:34,373 INFO: + source /tmp/tmpTGWWp9/extra-data.d/../environment.d/14-manifests >2018-06-26 09:49:34,373 INFO: ++ export DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests >2018-06-26 09:49:34,373 INFO: ++ DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests >2018-06-26 09:49:34,373 INFO: ++ export DIB_MANIFEST_SAVE_DIR=instack.d/ >2018-06-26 09:49:34,373 INFO: ++ DIB_MANIFEST_SAVE_DIR=instack.d/ >2018-06-26 09:49:34,374 INFO: + set +o xtrace >2018-06-26 09:49:34,374 INFO: dib-run-parts Running /tmp/tmpTGWWp9/extra-data.d/10-install-git >2018-06-26 09:49:34,376 INFO: + yum -y install git >2018-06-26 09:49:34,525 INFO: Loaded plugins: langpacks, product-id, search-disabled-repos, subscription- >2018-06-26 09:49:34,525 INFO: : manager >2018-06-26 09:49:38,964 INFO: Package git-1.8.3.1-14.el7_5.x86_64 already installed and latest version >2018-06-26 09:49:38,964 INFO: Nothing to do >2018-06-26 09:49:39,016 INFO: dib-run-parts 10-install-git completed >2018-06-26 09:49:39,016 INFO: dib-run-parts Running /tmp/tmpTGWWp9/extra-data.d/20-manifest-dir >2018-06-26 09:49:39,019 INFO: + set -eu >2018-06-26 09:49:39,020 INFO: + set -o pipefail >2018-06-26 09:49:39,020 INFO: + sudo mkdir -p /tmp/instack.wahe4V/mnt//etc/dib-manifests >2018-06-26 09:49:39,036 INFO: dib-run-parts 
20-manifest-dir completed >2018-06-26 09:49:39,036 INFO: dib-run-parts Running /tmp/tmpTGWWp9/extra-data.d/75-inject-element-manifest >2018-06-26 09:49:39,039 INFO: + set -eu >2018-06-26 09:49:39,039 INFO: + set -o pipefail >2018-06-26 09:49:39,039 INFO: + DIB_ELEMENT_MANIFEST_PATH=/etc/dib-manifests/dib-element-manifest >2018-06-26 09:49:39,039 INFO: ++ dirname /etc/dib-manifests/dib-element-manifest >2018-06-26 09:49:39,040 INFO: + sudo mkdir -p /tmp/instack.wahe4V/mnt//etc/dib-manifests >2018-06-26 09:49:39,052 INFO: + sudo /bin/bash -c 'echo undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs | tr '\'' '\'' '\''\n'\'' > /tmp/instack.wahe4V/mnt//etc/dib-manifests/dib-element-manifest' >2018-06-26 09:49:39,068 INFO: dib-run-parts 75-inject-element-manifest completed >2018-06-26 09:49:39,069 INFO: dib-run-parts Running /tmp/tmpTGWWp9/extra-data.d/98-source-repositories >2018-06-26 09:49:39,080 INFO: Getting /root/.cache/image-create/source-repositories/repositories_flock: Tue Jun 26 09:49:39 IST 2018 for /tmp/tmpTGWWp9/source-repository-puppet-modules >2018-06-26 09:49:39,084 INFO: (0001 / 0081) >2018-06-26 09:49:39,088 INFO: puppetlabs-apache install type not set to source >2018-06-26 09:49:39,089 INFO: (0002 / 0081) >2018-06-26 09:49:39,093 INFO: puppet-aodh install type not set to source >2018-06-26 09:49:39,094 INFO: (0003 / 0081) >2018-06-26 09:49:39,098 INFO: puppet-auditd install type not set to source >2018-06-26 09:49:39,098 INFO: (0004 / 0081) >2018-06-26 09:49:39,102 INFO: puppet-barbican install type not set to source >2018-06-26 09:49:39,103 INFO: (0005 / 0081) >2018-06-26 09:49:39,107 INFO: puppet-cassandra install type not set to source >2018-06-26 09:49:39,108 INFO: (0006 / 0081) >2018-06-26 09:49:39,111 INFO: puppet-ceph 
install type not set to source >2018-06-26 09:49:39,112 INFO: (0007 / 0081) >2018-06-26 09:49:39,116 INFO: puppet-ceilometer install type not set to source >2018-06-26 09:49:39,117 INFO: (0008 / 0081) >2018-06-26 09:49:39,121 INFO: puppet-congress install type not set to source >2018-06-26 09:49:39,122 INFO: (0009 / 0081) >2018-06-26 09:49:39,125 INFO: puppet-gnocchi install type not set to source >2018-06-26 09:49:39,126 INFO: (0010 / 0081) >2018-06-26 09:49:39,130 INFO: puppet-certmonger install type not set to source >2018-06-26 09:49:39,131 INFO: (0011 / 0081) >2018-06-26 09:49:39,134 INFO: puppet-cinder install type not set to source >2018-06-26 09:49:39,135 INFO: (0012 / 0081) >2018-06-26 09:49:39,139 INFO: puppet-common install type not set to source >2018-06-26 09:49:39,140 INFO: (0013 / 0081) >2018-06-26 09:49:39,144 INFO: puppet-contrail install type not set to source >2018-06-26 09:49:39,144 INFO: (0014 / 0081) >2018-06-26 09:49:39,148 INFO: puppetlabs-concat install type not set to source >2018-06-26 09:49:39,149 INFO: (0015 / 0081) >2018-06-26 09:49:39,153 INFO: puppetlabs-firewall install type not set to source >2018-06-26 09:49:39,154 INFO: (0016 / 0081) >2018-06-26 09:49:39,157 INFO: puppet-glance install type not set to source >2018-06-26 09:49:39,158 INFO: (0017 / 0081) >2018-06-26 09:49:39,162 INFO: puppet-gluster install type not set to source >2018-06-26 09:49:39,163 INFO: (0018 / 0081) >2018-06-26 09:49:39,166 INFO: puppetlabs-haproxy install type not set to source >2018-06-26 09:49:39,167 INFO: (0019 / 0081) >2018-06-26 09:49:39,171 INFO: puppet-heat install type not set to source >2018-06-26 09:49:39,172 INFO: (0020 / 0081) >2018-06-26 09:49:39,175 INFO: puppet-healthcheck install type not set to source >2018-06-26 09:49:39,176 INFO: (0021 / 0081) >2018-06-26 09:49:39,180 INFO: puppet-horizon install type not set to source >2018-06-26 09:49:39,181 INFO: (0022 / 0081) >2018-06-26 09:49:39,184 INFO: puppetlabs-inifile install type not set to 
source >2018-06-26 09:49:39,185 INFO: (0023 / 0081) >2018-06-26 09:49:39,189 INFO: puppet-kafka install type not set to source >2018-06-26 09:49:39,189 INFO: (0024 / 0081) >2018-06-26 09:49:39,193 INFO: puppet-keystone install type not set to source >2018-06-26 09:49:39,194 INFO: (0025 / 0081) >2018-06-26 09:49:39,198 INFO: puppet-manila install type not set to source >2018-06-26 09:49:39,198 INFO: (0026 / 0081) >2018-06-26 09:49:39,202 INFO: puppet-memcached install type not set to source >2018-06-26 09:49:39,203 INFO: (0027 / 0081) >2018-06-26 09:49:39,207 INFO: puppet-mistral install type not set to source >2018-06-26 09:49:39,208 INFO: (0028 / 0081) >2018-06-26 09:49:39,211 INFO: puppetlabs-mongodb install type not set to source >2018-06-26 09:49:39,212 INFO: (0029 / 0081) >2018-06-26 09:49:39,215 INFO: puppetlabs-mysql install type not set to source >2018-06-26 09:49:39,216 INFO: (0030 / 0081) >2018-06-26 09:49:39,220 INFO: puppet-neutron install type not set to source >2018-06-26 09:49:39,220 INFO: (0031 / 0081) >2018-06-26 09:49:39,224 INFO: puppet-nova install type not set to source >2018-06-26 09:49:39,225 INFO: (0032 / 0081) >2018-06-26 09:49:39,229 INFO: puppet-octavia install type not set to source >2018-06-26 09:49:39,229 INFO: (0033 / 0081) >2018-06-26 09:49:39,233 INFO: puppet-oslo install type not set to source >2018-06-26 09:49:39,234 INFO: (0034 / 0081) >2018-06-26 09:49:39,237 INFO: puppet-nssdb install type not set to source >2018-06-26 09:49:39,238 INFO: (0035 / 0081) >2018-06-26 09:49:39,242 INFO: puppet-opendaylight install type not set to source >2018-06-26 09:49:39,242 INFO: (0036 / 0081) >2018-06-26 09:49:39,246 INFO: puppet-ovn install type not set to source >2018-06-26 09:49:39,247 INFO: (0037 / 0081) >2018-06-26 09:49:39,250 INFO: puppet-panko install type not set to source >2018-06-26 09:49:39,251 INFO: (0038 / 0081) >2018-06-26 09:49:39,254 INFO: puppet-puppet install type not set to source >2018-06-26 09:49:39,255 INFO: (0039 / 0081) 
>2018-06-26 09:49:39,259 INFO: puppetlabs-rabbitmq install type not set to source >2018-06-26 09:49:39,259 INFO: (0040 / 0081) >2018-06-26 09:49:39,263 INFO: puppet-redis install type not set to source >2018-06-26 09:49:39,264 INFO: (0041 / 0081) >2018-06-26 09:49:39,267 INFO: puppetlabs-rsync install type not set to source >2018-06-26 09:49:39,268 INFO: (0042 / 0081) >2018-06-26 09:49:39,271 INFO: puppet-sahara install type not set to source >2018-06-26 09:49:39,272 INFO: (0043 / 0081) >2018-06-26 09:49:39,276 INFO: sensu-puppet install type not set to source >2018-06-26 09:49:39,277 INFO: (0044 / 0081) >2018-06-26 09:49:39,280 INFO: puppet-tacker install type not set to source >2018-06-26 09:49:39,281 INFO: (0045 / 0081) >2018-06-26 09:49:39,284 INFO: puppet-trove install type not set to source >2018-06-26 09:49:39,285 INFO: (0046 / 0081) >2018-06-26 09:49:39,288 INFO: puppet-ssh install type not set to source >2018-06-26 09:49:39,289 INFO: (0047 / 0081) >2018-06-26 09:49:39,293 INFO: puppet-staging install type not set to source >2018-06-26 09:49:39,293 INFO: (0048 / 0081) >2018-06-26 09:49:39,297 INFO: puppetlabs-stdlib install type not set to source >2018-06-26 09:49:39,297 INFO: (0049 / 0081) >2018-06-26 09:49:39,301 INFO: puppet-swift install type not set to source >2018-06-26 09:49:39,302 INFO: (0050 / 0081) >2018-06-26 09:49:39,305 INFO: puppetlabs-sysctl install type not set to source >2018-06-26 09:49:39,306 INFO: (0051 / 0081) >2018-06-26 09:49:39,310 INFO: puppet-timezone install type not set to source >2018-06-26 09:49:39,310 INFO: (0052 / 0081) >2018-06-26 09:49:39,314 INFO: puppet-uchiwa install type not set to source >2018-06-26 09:49:39,314 INFO: (0053 / 0081) >2018-06-26 09:49:39,318 INFO: puppetlabs-vcsrepo install type not set to source >2018-06-26 09:49:39,319 INFO: (0054 / 0081) >2018-06-26 09:49:39,322 INFO: puppet-vlan install type not set to source >2018-06-26 09:49:39,323 INFO: (0055 / 0081) >2018-06-26 09:49:39,327 INFO: puppet-vswitch 
install type not set to source >2018-06-26 09:49:39,327 INFO: (0056 / 0081) >2018-06-26 09:49:39,331 INFO: puppetlabs-xinetd install type not set to source >2018-06-26 09:49:39,331 INFO: (0057 / 0081) >2018-06-26 09:49:39,335 INFO: puppet-zookeeper install type not set to source >2018-06-26 09:49:39,336 INFO: (0058 / 0081) >2018-06-26 09:49:39,339 INFO: puppet-openstacklib install type not set to source >2018-06-26 09:49:39,340 INFO: (0059 / 0081) >2018-06-26 09:49:39,344 INFO: puppet-module-keepalived install type not set to source >2018-06-26 09:49:39,344 INFO: (0060 / 0081) >2018-06-26 09:49:39,348 INFO: puppetlabs-ntp install type not set to source >2018-06-26 09:49:39,349 INFO: (0061 / 0081) >2018-06-26 09:49:39,352 INFO: puppet-snmp install type not set to source >2018-06-26 09:49:39,353 INFO: (0062 / 0081) >2018-06-26 09:49:39,357 INFO: puppet-tripleo install type not set to source >2018-06-26 09:49:39,358 INFO: (0063 / 0081) >2018-06-26 09:49:39,361 INFO: puppet-ironic install type not set to source >2018-06-26 09:49:39,362 INFO: (0064 / 0081) >2018-06-26 09:49:39,365 INFO: puppet-ipaclient install type not set to source >2018-06-26 09:49:39,366 INFO: (0065 / 0081) >2018-06-26 09:49:39,369 INFO: puppetlabs-corosync install type not set to source >2018-06-26 09:49:39,370 INFO: (0066 / 0081) >2018-06-26 09:49:39,373 INFO: puppet-pacemaker install type not set to source >2018-06-26 09:49:39,374 INFO: (0067 / 0081) >2018-06-26 09:49:39,378 INFO: puppet_aviator install type not set to source >2018-06-26 09:49:39,378 INFO: (0068 / 0081) >2018-06-26 09:49:39,382 INFO: puppet-openstack_extras install type not set to source >2018-06-26 09:49:39,382 INFO: (0069 / 0081) >2018-06-26 09:49:39,386 INFO: konstantin-fluentd install type not set to source >2018-06-26 09:49:39,387 INFO: (0070 / 0081) >2018-06-26 09:49:39,390 INFO: puppet-elasticsearch install type not set to source >2018-06-26 09:49:39,391 INFO: (0071 / 0081) >2018-06-26 09:49:39,394 INFO: puppet-kibana3 
install type not set to source >2018-06-26 09:49:39,395 INFO: (0072 / 0081) >2018-06-26 09:49:39,398 INFO: puppetlabs-git install type not set to source >2018-06-26 09:49:39,399 INFO: (0073 / 0081) >2018-06-26 09:49:39,402 INFO: puppet-datacat install type not set to source >2018-06-26 09:49:39,403 INFO: (0074 / 0081) >2018-06-26 09:49:39,407 INFO: puppet-kmod install type not set to source >2018-06-26 09:49:39,408 INFO: (0075 / 0081) >2018-06-26 09:49:39,411 INFO: puppet-zaqar install type not set to source >2018-06-26 09:49:39,412 INFO: (0076 / 0081) >2018-06-26 09:49:39,415 INFO: puppet-ec2api install type not set to source >2018-06-26 09:49:39,416 INFO: (0077 / 0081) >2018-06-26 09:49:39,419 INFO: puppet-qdr install type not set to source >2018-06-26 09:49:39,420 INFO: (0078 / 0081) >2018-06-26 09:49:39,423 INFO: puppet-systemd install type not set to source >2018-06-26 09:49:39,424 INFO: (0079 / 0081) >2018-06-26 09:49:39,428 INFO: puppet-etcd install type not set to source >2018-06-26 09:49:39,428 INFO: (0080 / 0081) >2018-06-26 09:49:39,432 INFO: puppet-veritas_hyperscale install type not set to source >2018-06-26 09:49:39,433 INFO: (0081 / 0081) >2018-06-26 09:49:39,436 INFO: puppet-ptp install type not set to source >2018-06-26 09:49:39,438 INFO: dib-run-parts 98-source-repositories completed >2018-06-26 09:49:39,438 INFO: dib-run-parts Running /tmp/tmpTGWWp9/extra-data.d/99-enable-install-types >2018-06-26 09:49:39,440 INFO: + set -eu >2018-06-26 09:49:39,441 INFO: + set -o pipefail >2018-06-26 09:49:39,441 INFO: + declare -a SPECIFIED_ELEMS >2018-06-26 09:49:39,441 INFO: + SPECIFIED_ELEMS[0]= >2018-06-26 09:49:39,441 INFO: + PREFIX=DIB_INSTALLTYPE_ >2018-06-26 09:49:39,441 INFO: ++ env >2018-06-26 09:49:39,441 INFO: ++ grep '^DIB_INSTALLTYPE_' >2018-06-26 09:49:39,442 INFO: ++ cut -d= -f1 >2018-06-26 09:49:39,443 INFO: ++ echo '' >2018-06-26 09:49:39,443 INFO: + INSTALL_TYPE_VARS= >2018-06-26 09:49:39,443 INFO: ++ find /tmp/tmpTGWWp9/install.d -maxdepth 
1 -name '*-package-install' -type d >2018-06-26 09:49:39,445 INFO: + default_install_type_dirs=/tmp/tmpTGWWp9/install.d/puppet-modules-package-install >2018-06-26 09:49:39,445 INFO: + for _install_dir in '$default_install_type_dirs' >2018-06-26 09:49:39,445 INFO: + SUFFIX=-package-install >2018-06-26 09:49:39,445 INFO: ++ basename /tmp/tmpTGWWp9/install.d/puppet-modules-package-install >2018-06-26 09:49:39,446 INFO: + _install_dir=puppet-modules-package-install >2018-06-26 09:49:39,446 INFO: + INSTALLDIRPREFIX=puppet-modules >2018-06-26 09:49:39,446 INFO: + found=0 >2018-06-26 09:49:39,446 INFO: + '[' 0 = 0 ']' >2018-06-26 09:49:39,446 INFO: + pushd /tmp/tmpTGWWp9/install.d >2018-06-26 09:49:39,446 INFO: /tmp/tmpTGWWp9/install.d /home/sudheer >2018-06-26 09:49:39,447 INFO: + ln -sf puppet-modules-package-install/75-puppet-modules-package . >2018-06-26 09:49:39,447 INFO: + popd >2018-06-26 09:49:39,447 INFO: /home/sudheer >2018-06-26 09:49:39,448 INFO: dib-run-parts 99-enable-install-types completed >2018-06-26 09:49:39,448 INFO: dib-run-parts ----------------------- PROFILING ----------------------- >2018-06-26 09:49:39,449 INFO: dib-run-parts >2018-06-26 09:49:39,450 INFO: dib-run-parts Target: extra-data.d >2018-06-26 09:49:39,450 INFO: dib-run-parts >2018-06-26 09:49:39,450 INFO: dib-run-parts Script Seconds >2018-06-26 09:49:39,450 INFO: dib-run-parts --------------------------------------- ---------- >2018-06-26 09:49:39,450 INFO: dib-run-parts >2018-06-26 09:49:39,456 INFO: dib-run-parts 10-install-git 4.641 >2018-06-26 09:49:39,460 INFO: dib-run-parts 20-manifest-dir 0.018 >2018-06-26 09:49:39,464 INFO: dib-run-parts 75-inject-element-manifest 0.032 >2018-06-26 09:49:39,468 INFO: dib-run-parts 98-source-repositories 0.368 >2018-06-26 09:49:39,472 INFO: dib-run-parts 99-enable-install-types 0.009 >2018-06-26 09:49:39,473 INFO: dib-run-parts >2018-06-26 09:49:39,473 INFO: dib-run-parts --------------------- END PROFILING --------------------- >2018-06-26 
09:49:39,474 INFO: INFO: 2018-06-26 09:49:39,473 -- ############### End stdout/stderr logging ############### >2018-06-26 09:49:39,474 INFO: INFO: 2018-06-26 09:49:39,474 -- Running hook pre-install >2018-06-26 09:49:39,474 INFO: INFO: 2018-06-26 09:49:39,474 -- Skipping hook pre-install, the hook directory doesn't exist at /tmp/tmpTGWWp9/pre-install.d >2018-06-26 09:49:39,474 INFO: INFO: 2018-06-26 09:49:39,474 -- Running hook install >2018-06-26 09:49:39,474 INFO: INFO: 2018-06-26 09:49:39,474 -- ############### Begin stdout/stderr logging ############### >2018-06-26 09:49:39,484 INFO: dib-run-parts Sourcing environment file /tmp/tmpTGWWp9/install.d/../environment.d/00-dib-v2-env >2018-06-26 09:49:39,486 INFO: + source /tmp/tmpTGWWp9/install.d/../environment.d/00-dib-v2-env >2018-06-26 09:49:39,486 INFO: ++ export 'IMAGE_ELEMENT=undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs' >2018-06-26 09:49:39,486 INFO: ++ IMAGE_ELEMENT='undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs' >2018-06-26 09:49:39,486 INFO: ++ export 'IMAGE_ELEMENT_YAML={cache-url: /usr/share/diskimage-builder/elements/cache-url, dib-python: /usr/share/diskimage-builder/elements/dib-python, >2018-06-26 09:49:39,487 INFO: element-manifest: /usr/share/diskimage-builder/elements/element-manifest, enable-packages-install: /usr/share/tripleo-image-elements/enable-packages-install, >2018-06-26 09:49:39,487 INFO: hiera: /usr/share/tripleo-puppet-elements/hiera, install-bin: /usr/share/diskimage-builder/elements/install-bin, >2018-06-26 09:49:39,487 INFO: 
install-types: /usr/share/diskimage-builder/elements/install-types, manifests: /usr/share/diskimage-builder/elements/manifests, >2018-06-26 09:49:39,487 INFO: os-apply-config: /usr/share/tripleo-image-elements/os-apply-config, os-refresh-config: /usr/share/tripleo-image-elements/os-refresh-config, >2018-06-26 09:49:39,487 INFO: package-installs: /usr/share/diskimage-builder/elements/package-installs, pip-and-virtualenv: /usr/share/diskimage-builder/elements/pip-and-virtualenv, >2018-06-26 09:49:39,487 INFO: pip-manifest: /usr/share/tripleo-image-elements/pip-manifest, pkg-map: /usr/share/diskimage-builder/elements/pkg-map, >2018-06-26 09:49:39,488 INFO: puppet: /usr/share/tripleo-puppet-elements/puppet, puppet-modules: /usr/share/tripleo-puppet-elements/puppet-modules, >2018-06-26 09:49:39,488 INFO: puppet-stack-config: /usr/share/instack-undercloud/puppet-stack-config, source-repositories: /usr/share/diskimage-builder/elements/source-repositories, >2018-06-26 09:49:39,488 INFO: undercloud-install: /usr/share/instack-undercloud/undercloud-install} >2018-06-26 09:49:39,488 INFO: ' >2018-06-26 09:49:39,488 INFO: ++ IMAGE_ELEMENT_YAML='{cache-url: /usr/share/diskimage-builder/elements/cache-url, dib-python: /usr/share/diskimage-builder/elements/dib-python, >2018-06-26 09:49:39,488 INFO: element-manifest: /usr/share/diskimage-builder/elements/element-manifest, enable-packages-install: /usr/share/tripleo-image-elements/enable-packages-install, >2018-06-26 09:49:39,488 INFO: hiera: /usr/share/tripleo-puppet-elements/hiera, install-bin: /usr/share/diskimage-builder/elements/install-bin, >2018-06-26 09:49:39,489 INFO: install-types: /usr/share/diskimage-builder/elements/install-types, manifests: /usr/share/diskimage-builder/elements/manifests, >2018-06-26 09:49:39,489 INFO: os-apply-config: /usr/share/tripleo-image-elements/os-apply-config, os-refresh-config: /usr/share/tripleo-image-elements/os-refresh-config, >2018-06-26 09:49:39,489 INFO: package-installs: 
/usr/share/diskimage-builder/elements/package-installs, pip-and-virtualenv: /usr/share/diskimage-builder/elements/pip-and-virtualenv, >2018-06-26 09:49:39,489 INFO: pip-manifest: /usr/share/tripleo-image-elements/pip-manifest, pkg-map: /usr/share/diskimage-builder/elements/pkg-map, >2018-06-26 09:49:39,489 INFO: puppet: /usr/share/tripleo-puppet-elements/puppet, puppet-modules: /usr/share/tripleo-puppet-elements/puppet-modules, >2018-06-26 09:49:39,489 INFO: puppet-stack-config: /usr/share/instack-undercloud/puppet-stack-config, source-repositories: /usr/share/diskimage-builder/elements/source-repositories, >2018-06-26 09:49:39,490 INFO: undercloud-install: /usr/share/instack-undercloud/undercloud-install} >2018-06-26 09:49:39,490 INFO: ' >2018-06-26 09:49:39,490 INFO: ++ export -f get_image_element_array >2018-06-26 09:49:39,490 INFO: + set +o xtrace >2018-06-26 09:49:39,490 INFO: dib-run-parts Sourcing environment file /tmp/tmpTGWWp9/install.d/../environment.d/01-export-install-types.bash >2018-06-26 09:49:39,490 INFO: + source /tmp/tmpTGWWp9/install.d/../environment.d/01-export-install-types.bash >2018-06-26 09:49:39,490 INFO: ++ export DIB_DEFAULT_INSTALLTYPE=package >2018-06-26 09:49:39,490 INFO: ++ DIB_DEFAULT_INSTALLTYPE=package >2018-06-26 09:49:39,490 INFO: + set +o xtrace >2018-06-26 09:49:39,491 INFO: dib-run-parts Sourcing environment file /tmp/tmpTGWWp9/install.d/../environment.d/01-puppet-module-pins.sh >2018-06-26 09:49:39,491 INFO: + source /tmp/tmpTGWWp9/install.d/../environment.d/01-puppet-module-pins.sh >2018-06-26 09:49:39,491 INFO: ++ export DIB_REPOREF_puppetlabs_ntp=4.2.x >2018-06-26 09:49:39,491 INFO: ++ DIB_REPOREF_puppetlabs_ntp=4.2.x >2018-06-26 09:49:39,491 INFO: + set +o xtrace >2018-06-26 09:49:39,491 INFO: dib-run-parts Sourcing environment file /tmp/tmpTGWWp9/install.d/../environment.d/02-puppet-modules-install-types.sh >2018-06-26 09:49:39,492 INFO: + source 
/tmp/tmpTGWWp9/install.d/../environment.d/02-puppet-modules-install-types.sh >2018-06-26 09:49:39,492 INFO: ++ DIB_DEFAULT_INSTALLTYPE=package >2018-06-26 09:49:39,492 INFO: ++ DIB_INSTALLTYPE_puppet_modules=package >2018-06-26 09:49:39,492 INFO: ++ '[' package = source ']' >2018-06-26 09:49:39,492 INFO: + set +o xtrace >2018-06-26 09:49:39,493 INFO: dib-run-parts Sourcing environment file /tmp/tmpTGWWp9/install.d/../environment.d/10-os-apply-config-venv-dir.bash >2018-06-26 09:49:39,494 INFO: + source /tmp/tmpTGWWp9/install.d/../environment.d/10-os-apply-config-venv-dir.bash >2018-06-26 09:49:39,494 INFO: ++ '[' -z '' ']' >2018-06-26 09:49:39,494 INFO: ++ export OS_APPLY_CONFIG_VENV_DIR=/opt/stack/venvs/os-apply-config >2018-06-26 09:49:39,494 INFO: ++ OS_APPLY_CONFIG_VENV_DIR=/opt/stack/venvs/os-apply-config >2018-06-26 09:49:39,494 INFO: + set +o xtrace >2018-06-26 09:49:39,495 INFO: dib-run-parts Sourcing environment file /tmp/tmpTGWWp9/install.d/../environment.d/14-manifests >2018-06-26 09:49:39,496 INFO: + source /tmp/tmpTGWWp9/install.d/../environment.d/14-manifests >2018-06-26 09:49:39,496 INFO: ++ export DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests >2018-06-26 09:49:39,496 INFO: ++ DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests >2018-06-26 09:49:39,496 INFO: ++ export DIB_MANIFEST_SAVE_DIR=instack.d/ >2018-06-26 09:49:39,496 INFO: ++ DIB_MANIFEST_SAVE_DIR=instack.d/ >2018-06-26 09:49:39,496 INFO: + set +o xtrace >2018-06-26 09:49:39,497 INFO: dib-run-parts Running /tmp/tmpTGWWp9/install.d/02-puppet-stack-config >2018-06-26 09:49:40,216 INFO: dib-run-parts 02-puppet-stack-config completed >2018-06-26 09:49:40,217 INFO: dib-run-parts Running /tmp/tmpTGWWp9/install.d/10-hiera-yaml-symlink >2018-06-26 09:49:40,220 INFO: + set -o pipefail >2018-06-26 09:49:40,220 INFO: + ln -f -s /etc/puppet/hiera.yaml /etc/hiera.yaml >2018-06-26 09:49:40,222 INFO: dib-run-parts 10-hiera-yaml-symlink completed >2018-06-26 09:49:40,223 INFO: dib-run-parts Running 
/tmp/tmpTGWWp9/install.d/10-puppet-stack-config-puppet-module >2018-06-26 09:49:40,225 INFO: + set -o pipefail >2018-06-26 09:49:40,225 INFO: + mkdir -p /etc/puppet/manifests >2018-06-26 09:49:40,227 INFO: ++ dirname /tmp/tmpTGWWp9/install.d/10-puppet-stack-config-puppet-module >2018-06-26 09:49:40,228 INFO: + cp /tmp/tmpTGWWp9/install.d/../puppet-stack-config.pp /etc/puppet/manifests/puppet-stack-config.pp >2018-06-26 09:49:40,231 INFO: dib-run-parts 10-puppet-stack-config-puppet-module completed >2018-06-26 09:49:40,231 INFO: dib-run-parts Running /tmp/tmpTGWWp9/install.d/11-create-template-root >2018-06-26 09:49:40,234 INFO: ++ os-apply-config --print-templates >2018-06-26 09:49:40,391 INFO: + TEMPLATE_ROOT=/usr/libexec/os-apply-config/templates >2018-06-26 09:49:40,392 INFO: + mkdir -p /usr/libexec/os-apply-config/templates >2018-06-26 09:49:40,394 INFO: dib-run-parts 11-create-template-root completed >2018-06-26 09:49:40,394 INFO: dib-run-parts Running /tmp/tmpTGWWp9/install.d/11-hiera-orc-install >2018-06-26 09:49:40,397 INFO: + set -o pipefail >2018-06-26 09:49:40,397 INFO: + mkdir -p /usr/libexec/os-refresh-config/configure.d/ >2018-06-26 09:49:40,399 INFO: ++ dirname /tmp/tmpTGWWp9/install.d/11-hiera-orc-install >2018-06-26 09:49:40,400 INFO: + install -m 0755 -o root -g root /tmp/tmpTGWWp9/install.d/../10-hiera-disable /usr/libexec/os-refresh-config/configure.d/10-hiera-disable >2018-06-26 09:49:40,407 INFO: ++ dirname /tmp/tmpTGWWp9/install.d/11-hiera-orc-install >2018-06-26 09:49:40,408 INFO: + install -m 0755 -o root -g root /tmp/tmpTGWWp9/install.d/../40-hiera-datafiles /usr/libexec/os-refresh-config/configure.d/40-hiera-datafiles >2018-06-26 09:49:40,413 INFO: dib-run-parts 11-hiera-orc-install completed >2018-06-26 09:49:40,413 INFO: dib-run-parts Running /tmp/tmpTGWWp9/install.d/75-puppet-modules-package >2018-06-26 09:49:40,416 INFO: + find /opt/stack/puppet-modules/ -mindepth 1 >2018-06-26 09:49:40,416 INFO: + read >2018-06-26 09:49:40,420 INFO: 
+ ln -f -s /usr/share/openstack-puppet/modules/aodh /usr/share/openstack-puppet/modules/apache /usr/share/openstack-puppet/modules/archive /usr/share/openstack-puppet/modules/auditd /usr/share/openstack-puppet/modules/barbican /usr/share/openstack-puppet/modules/cassandra /usr/share/openstack-puppet/modules/ceilometer /usr/share/openstack-puppet/modules/ceph /usr/share/openstack-puppet/modules/certmonger /usr/share/openstack-puppet/modules/cinder /usr/share/openstack-puppet/modules/collectd /usr/share/openstack-puppet/modules/concat /usr/share/openstack-puppet/modules/contrail /usr/share/openstack-puppet/modules/corosync /usr/share/openstack-puppet/modules/datacat /usr/share/openstack-puppet/modules/designate /usr/share/openstack-puppet/modules/dns /usr/share/openstack-puppet/modules/ec2api /usr/share/openstack-puppet/modules/elasticsearch /usr/share/openstack-puppet/modules/fdio /usr/share/openstack-puppet/modules/firewall /usr/share/openstack-puppet/modules/fluentd /usr/share/openstack-puppet/modules/git /usr/share/openstack-puppet/modules/glance /usr/share/openstack-puppet/modules/gnocchi /usr/share/openstack-puppet/modules/haproxy /usr/share/openstack-puppet/modules/heat /usr/share/openstack-puppet/modules/horizon /usr/share/openstack-puppet/modules/inifile /usr/share/openstack-puppet/modules/ipaclient /usr/share/openstack-puppet/modules/ironic /usr/share/openstack-puppet/modules/java /usr/share/openstack-puppet/modules/kafka /usr/share/openstack-puppet/modules/keepalived /usr/share/openstack-puppet/modules/keystone /usr/share/openstack-puppet/modules/kibana3 /usr/share/openstack-puppet/modules/kmod /usr/share/openstack-puppet/modules/manila /usr/share/openstack-puppet/modules/memcached /usr/share/openstack-puppet/modules/midonet /usr/share/openstack-puppet/modules/mistral /usr/share/openstack-puppet/modules/module-data /usr/share/openstack-puppet/modules/mysql /usr/share/openstack-puppet/modules/n1k_vsm /usr/share/openstack-puppet/modules/neutron 
/usr/share/openstack-puppet/modules/nova /usr/share/openstack-puppet/modules/nssdb /usr/share/openstack-puppet/modules/ntp /usr/share/openstack-puppet/modules/octavia /usr/share/openstack-puppet/modules/opendaylight /usr/share/openstack-puppet/modules/openstack_extras /usr/share/openstack-puppet/modules/openstacklib /usr/share/openstack-puppet/modules/oslo /usr/share/openstack-puppet/modules/ovn /usr/share/openstack-puppet/modules/pacemaker /usr/share/openstack-puppet/modules/panko /usr/share/openstack-puppet/modules/rabbitmq /usr/share/openstack-puppet/modules/redis /usr/share/openstack-puppet/modules/remote /usr/share/openstack-puppet/modules/rsync /usr/share/openstack-puppet/modules/sahara /usr/share/openstack-puppet/modules/sensu /usr/share/openstack-puppet/modules/snmp /usr/share/openstack-puppet/modules/ssh /usr/share/openstack-puppet/modules/staging /usr/share/openstack-puppet/modules/stdlib /usr/share/openstack-puppet/modules/swift /usr/share/openstack-puppet/modules/sysctl /usr/share/openstack-puppet/modules/systemd /usr/share/openstack-puppet/modules/timezone /usr/share/openstack-puppet/modules/tomcat /usr/share/openstack-puppet/modules/tripleo /usr/share/openstack-puppet/modules/trove /usr/share/openstack-puppet/modules/uchiwa /usr/share/openstack-puppet/modules/vcsrepo /usr/share/openstack-puppet/modules/veritas_hyperscale /usr/share/openstack-puppet/modules/vswitch /usr/share/openstack-puppet/modules/xinetd /usr/share/openstack-puppet/modules/zaqar /usr/share/openstack-puppet/modules/zookeeper /etc/puppet/modules/ >2018-06-26 09:49:40,423 INFO: dib-run-parts 75-puppet-modules-package completed >2018-06-26 09:49:40,424 INFO: dib-run-parts Running /tmp/tmpTGWWp9/install.d/99-install-config-templates >2018-06-26 09:49:40,427 INFO: ++ os-apply-config --print-templates >2018-06-26 09:49:40,579 INFO: + TEMPLATE_ROOT=/usr/libexec/os-apply-config/templates >2018-06-26 09:49:40,579 INFO: ++ dirname /tmp/tmpTGWWp9/install.d/99-install-config-templates 
>2018-06-26 09:49:40,580 INFO: + TEMPLATE_SOURCE=/tmp/tmpTGWWp9/install.d/../os-apply-config >2018-06-26 09:49:40,580 INFO: + mkdir -p /usr/libexec/os-apply-config/templates >2018-06-26 09:49:40,581 INFO: + '[' -d /tmp/tmpTGWWp9/install.d/../os-apply-config ']' >2018-06-26 09:49:40,581 INFO: + rsync '--exclude=.*.swp' -Cr /tmp/tmpTGWWp9/install.d/../os-apply-config/ /usr/libexec/os-apply-config/templates/ >2018-06-26 09:49:40,587 INFO: dib-run-parts 99-install-config-templates completed >2018-06-26 09:49:40,587 INFO: dib-run-parts Running /tmp/tmpTGWWp9/install.d/99-os-refresh-config-install-scripts >2018-06-26 09:49:40,590 INFO: ++ os-refresh-config --print-base >2018-06-26 09:49:40,640 INFO: + SCRIPT_BASE=/usr/libexec/os-refresh-config >2018-06-26 09:49:40,641 INFO: ++ dirname /tmp/tmpTGWWp9/install.d/99-os-refresh-config-install-scripts >2018-06-26 09:49:40,642 INFO: + SCRIPT_SOURCE=/tmp/tmpTGWWp9/install.d/../os-refresh-config >2018-06-26 09:49:40,642 INFO: + rsync -r /tmp/tmpTGWWp9/install.d/../os-refresh-config/ /usr/libexec/os-refresh-config/ >2018-06-26 09:49:40,647 INFO: dib-run-parts 99-os-refresh-config-install-scripts completed >2018-06-26 09:49:40,647 INFO: dib-run-parts ----------------------- PROFILING ----------------------- >2018-06-26 09:49:40,647 INFO: dib-run-parts >2018-06-26 09:49:40,648 INFO: dib-run-parts Target: install.d >2018-06-26 09:49:40,649 INFO: dib-run-parts >2018-06-26 09:49:40,649 INFO: dib-run-parts Script Seconds >2018-06-26 09:49:40,649 INFO: dib-run-parts --------------------------------------- ---------- >2018-06-26 09:49:40,649 INFO: dib-run-parts >2018-06-26 09:49:40,656 INFO: dib-run-parts 02-puppet-stack-config 0.719 >2018-06-26 09:49:40,660 INFO: dib-run-parts 10-hiera-yaml-symlink 0.005 >2018-06-26 09:49:40,665 INFO: dib-run-parts 10-puppet-stack-config-puppet-module 0.007 >2018-06-26 09:49:40,669 INFO: dib-run-parts 11-create-template-root 0.162 >2018-06-26 09:49:40,673 INFO: dib-run-parts 11-hiera-orc-install 0.018 
>2018-06-26 09:49:40,677 INFO: dib-run-parts 75-puppet-modules-package 0.009 >2018-06-26 09:49:40,682 INFO: dib-run-parts 99-install-config-templates 0.162 >2018-06-26 09:49:40,686 INFO: dib-run-parts 99-os-refresh-config-install-scripts 0.059 >2018-06-26 09:49:40,687 INFO: dib-run-parts >2018-06-26 09:49:40,688 INFO: dib-run-parts --------------------- END PROFILING --------------------- >2018-06-26 09:49:40,688 INFO: INFO: 2018-06-26 09:49:40,688 -- ############### End stdout/stderr logging ############### >2018-06-26 09:49:40,688 INFO: INFO: 2018-06-26 09:49:40,688 -- Running hook post-install >2018-06-26 09:49:40,688 INFO: INFO: 2018-06-26 09:49:40,688 -- Skipping hook post-install, the hook directory doesn't exist at /tmp/tmpTGWWp9/post-install.d >2018-06-26 09:49:40,690 INFO: INFO: 2018-06-26 09:49:40,690 -- Ending run of instack. >2018-06-26 09:49:40,702 INFO: Instack completed successfully >2018-06-26 09:49:40,702 INFO: Running os-refresh-config >2018-06-26 09:49:40,764 INFO: [2018-06-26 09:49:40,764] (os-refresh-config) [INFO] Starting phase configure >2018-06-26 09:49:40,772 INFO: dib-run-parts Tue Jun 26 09:49:40 IST 2018 Running /usr/libexec/os-refresh-config/configure.d/10-hiera-disable >2018-06-26 09:49:40,774 INFO: + '[' -f /etc/puppet/hiera.yaml ']' >2018-06-26 09:49:40,774 INFO: + grep yaml /etc/puppet/hiera.yaml >2018-06-26 09:49:40,778 INFO: dib-run-parts Tue Jun 26 09:49:40 IST 2018 10-hiera-disable completed >2018-06-26 09:49:40,779 INFO: dib-run-parts Tue Jun 26 09:49:40 IST 2018 Running /usr/libexec/os-refresh-config/configure.d/20-os-apply-config >2018-06-26 09:49:40,924 INFO: [2018/06/26 09:49:40 AM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json >2018-06-26 09:49:40,928 INFO: [2018/06/26 09:49:40 AM] [INFO] writing /etc/os-net-config/config.json >2018-06-26 09:49:40,928 INFO: [2018/06/26 09:49:40 AM] [INFO] writing /root/stackrc >2018-06-26 09:49:40,929 INFO: [2018/06/26 09:49:40 AM] [INFO] writing 
/var/run/heat-config/heat-config >2018-06-26 09:49:40,930 INFO: [2018/06/26 09:49:40 AM] [INFO] writing /etc/puppet/hiera.yaml >2018-06-26 09:49:40,930 INFO: [2018/06/26 09:49:40 AM] [INFO] writing /var/opt/undercloud-stack/masquerade >2018-06-26 09:49:40,931 INFO: [2018/06/26 09:49:40 AM] [INFO] writing /etc/puppet/hieradata/RedHat.yaml >2018-06-26 09:49:40,931 INFO: [2018/06/26 09:49:40 AM] [INFO] writing /etc/puppet/hieradata/CentOS.yaml >2018-06-26 09:49:40,931 INFO: [2018/06/26 09:49:40 AM] [INFO] writing /root/tripleo-undercloud-passwords >2018-06-26 09:49:40,932 INFO: [2018/06/26 09:49:40 AM] [INFO] writing /etc/os-collect-config.conf >2018-06-26 09:49:40,932 INFO: [2018/06/26 09:49:40 AM] [INFO] success >2018-06-26 09:49:40,938 INFO: dib-run-parts Tue Jun 26 09:49:40 IST 2018 20-os-apply-config completed >2018-06-26 09:49:40,939 INFO: dib-run-parts Tue Jun 26 09:49:40 IST 2018 Running /usr/libexec/os-refresh-config/configure.d/30-reload-keepalived >2018-06-26 09:49:40,941 INFO: + systemctl is-enabled keepalived >2018-06-26 09:49:40,949 INFO: disabled >2018-06-26 09:49:40,952 INFO: dib-run-parts Tue Jun 26 09:49:40 IST 2018 30-reload-keepalived completed >2018-06-26 09:49:40,953 INFO: dib-run-parts Tue Jun 26 09:49:40 IST 2018 Running /usr/libexec/os-refresh-config/configure.d/40-hiera-datafiles >2018-06-26 09:49:41,098 INFO: [2018/06/26 09:49:41 AM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json >2018-06-26 09:49:41,110 INFO: dib-run-parts Tue Jun 26 09:49:41 IST 2018 40-hiera-datafiles completed >2018-06-26 09:49:41,111 INFO: dib-run-parts Tue Jun 26 09:49:41 IST 2018 Running /usr/libexec/os-refresh-config/configure.d/50-puppet-stack-config >2018-06-26 09:49:41,113 INFO: + set -o pipefail >2018-06-26 09:49:41,113 INFO: + puppet_apply puppet apply --summarize --detailed-exitcodes /etc/puppet/manifests/puppet-stack-config.pp >2018-06-26 09:49:41,113 INFO: + set +e >2018-06-26 09:49:41,114 INFO: + puppet apply 
--summarize --detailed-exitcodes /etc/puppet/manifests/puppet-stack-config.pp
>2018-06-26 09:49:46,520 INFO: Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend
>2018-06-26 09:49:46,621 INFO: Warning: ModuleLoader: module 'openstacklib' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:49:46,621 INFO: (file & line not available)
>2018-06-26 09:49:46,863 INFO: Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend
>2018-06-26 09:49:46,918 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function,
>2018-06-26 09:49:46,919 INFO: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 54]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29]
>2018-06-26 09:49:46,919 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')
>2018-06-26 09:49:46,922 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function,
>2018-06-26 09:49:46,922 INFO: with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 55]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29]
>2018-06-26 09:49:46,922 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')
>2018-06-26 09:49:46,968 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function,
>2018-06-26 09:49:46,968 INFO: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 56]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29]
>2018-06-26 09:49:46,968 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')
>2018-06-26 09:49:46,981 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function,
>2018-06-26 09:49:46,981 INFO: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 66]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29]
>2018-06-26 09:49:46,982 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')
>2018-06-26 09:49:46,984 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function,
>2018-06-26 09:49:46,984 INFO: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 68]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29]
>2018-06-26 09:49:46,984 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')
>2018-06-26 09:49:46,997 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function,
>2018-06-26 09:49:46,997 INFO: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 89]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29]
>2018-06-26 09:49:46,998 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')
>2018-06-26 09:49:47,241 INFO: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/rabbitmq/manifests/install/rabbitmqadmin.pp", 37]:["/etc/puppet/modules/rabbitmq/manifests/init.pp", 316]
>2018-06-26 09:49:47,241 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')
>2018-06-26 09:49:47,372 INFO: Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.
>2018-06-26 09:49:47,560 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function,
>2018-06-26 09:49:47,560 INFO: with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp", 97]:["/etc/puppet/manifests/puppet-stack-config.pp", 91]
>2018-06-26 09:49:47,560 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')
>2018-06-26 09:49:47,596 INFO: Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:49:47,596 INFO: (file & line not available)
>2018-06-26 09:49:47,788 INFO: Warning: ModuleLoader: module 'keystone' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:49:47,788 INFO: (file & line not available)
>2018-06-26 09:49:48,174 INFO: Warning: ModuleLoader: module 'glance' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:49:48,175 INFO: (file & line not available)
>2018-06-26 09:49:48,324 INFO: Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:49:48,325 INFO: (file & line not available)
>2018-06-26 09:49:48,423 INFO: Warning: Unknown variable: '::nova::db::mysql_api::setup_cell0'. at /etc/puppet/modules/nova/manifests/db/mysql.pp:53:28
>2018-06-26 09:49:48,452 INFO: Warning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:49:48,452 INFO: (file & line not available)
>2018-06-26 09:49:48,930 INFO: Warning: ModuleLoader: module 'heat' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:49:48,930 INFO: (file & line not available)
>2018-06-26 09:49:48,972 INFO: Warning: ModuleLoader: module 'ironic' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:49:48,972 INFO: (file & line not available)
>2018-06-26 09:49:49,080 INFO: Warning: ModuleLoader: module 'swift' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:49:49,081 INFO: (file & line not available)
>2018-06-26 09:49:49,319 INFO: Warning: Scope(Class[Keystone]): keystone::rabbit_host, keystone::rabbit_hosts, keystone::rabbit_password, keystone::rabbit_port, keystone::rabbit_userid and keystone::rabbit_virtual_host are deprecated.
Please use keystone::default_transport_url instead.
>2018-06-26 09:49:50,483 INFO: Warning: Scope(Class[Glance::Notify::Rabbitmq]): glance::notify::rabbitmq::rabbit_host, glance::notify::rabbitmq::rabbit_hosts, glance::notify::rabbitmq::rabbit_password, glance::notify::rabbitmq::rabbit_port, glance::notify::rabbitmq::rabbit_userid and glance::notify::rabbitmq::rabbit_virtual_host are deprecated. Please use glance::notify::rabbitmq::default_transport_url instead.
>2018-06-26 09:49:50,554 INFO: Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release
>2018-06-26 09:49:50,554 INFO: Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release
>2018-06-26 09:49:50,791 INFO: Warning: ModuleLoader: module 'cinder' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:49:50,791 INFO: (file & line not available)
>2018-06-26 09:49:51,023 INFO: Warning: Unknown variable: 'until_complete_real'. at /etc/puppet/modules/nova/manifests/cron/archive_deleted_rows.pp:77:82
>2018-06-26 09:49:51,057 INFO: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/nova/manifests/scheduler/filter.pp", 140]:["/etc/puppet/manifests/puppet-stack-config.pp", 389]
>2018-06-26 09:49:51,058 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')
>2018-06-26 09:49:51,234 INFO: Warning: Scope(Class[Neutron]): neutron::rabbit_host, neutron::rabbit_hosts, neutron::rabbit_password, neutron::rabbit_port, neutron::rabbit_user, neutron::rabbit_virtual_host and neutron::rpc_backend are deprecated. Please use neutron::default_transport_url instead.
>2018-06-26 09:49:52,102 INFO: Warning: Unknown variable: 'methods_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:100:56
>2018-06-26 09:49:52,102 INFO: Warning: Unknown variable: 'incoming_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:101:56
>2018-06-26 09:49:52,102 INFO: Warning: Unknown variable: 'incoming_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:102:56
>2018-06-26 09:49:52,103 INFO: Warning: Unknown variable: 'outgoing_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:103:56
>2018-06-26 09:49:52,103 INFO: Warning: Unknown variable: 'outgoing_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:104:56
>2018-06-26 09:49:52,158 INFO: Warning: Scope(Class[Swift::Storage::All]): The default port for the object storage server has changed from 6000 to 6200 and will be changed in a later release
>2018-06-26 09:49:52,158 INFO: Warning: Scope(Class[Swift::Storage::All]): The default port for the container storage server has changed from 6001 to 6201 and will be changed in a later release
>2018-06-26 09:49:52,158 INFO: Warning: Scope(Class[Swift::Storage::All]): The default port for the account storage server has changed from 6002 to 6202 and will be changed in a later release
>2018-06-26 09:49:52,508 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function,
>2018-06-26 09:49:52,508 INFO: with Stdlib::Compat::Integer. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/heat/manifests/wsgi/apache_api_cfn.pp", 125]:["/etc/puppet/manifests/puppet-stack-config.pp", 510]
>2018-06-26 09:49:52,508 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')
>2018-06-26 09:49:52,822 INFO: Warning: Unknown variable: '::ironic::conductor::swift_account'. at /etc/puppet/modules/ironic/manifests/glance.pp:117:30
>2018-06-26 09:49:52,823 INFO: Warning: Unknown variable: '::ironic::conductor::swift_temp_url_key'. at /etc/puppet/modules/ironic/manifests/glance.pp:118:35
>2018-06-26 09:49:52,823 INFO: Warning: Unknown variable: '::ironic::conductor::swift_temp_url_duration'. at /etc/puppet/modules/ironic/manifests/glance.pp:119:40
>2018-06-26 09:49:52,839 INFO: Warning: Unknown variable: '::ironic::api::neutron_url'. at /etc/puppet/modules/ironic/manifests/neutron.pp:58:29
>2018-06-26 09:49:53,545 INFO: Warning: ModuleLoader: module 'mistral' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:49:53,546 INFO: (file & line not available)
>2018-06-26 09:49:53,702 INFO: Warning: Unknown variable: '::mistral::database_idle_timeout'. at /etc/puppet/modules/mistral/manifests/db.pp:57:40
>2018-06-26 09:49:53,703 INFO: Warning: Unknown variable: '::mistral::database_min_pool_size'. at /etc/puppet/modules/mistral/manifests/db.pp:58:40
>2018-06-26 09:49:53,703 INFO: Warning: Unknown variable: '::mistral::database_max_pool_size'. at /etc/puppet/modules/mistral/manifests/db.pp:59:40
>2018-06-26 09:49:53,704 INFO: Warning: Unknown variable: '::mistral::database_max_retries'. at /etc/puppet/modules/mistral/manifests/db.pp:60:40
>2018-06-26 09:49:53,704 INFO: Warning: Unknown variable: '::mistral::database_retry_interval'. at /etc/puppet/modules/mistral/manifests/db.pp:61:40
>2018-06-26 09:49:53,704 INFO: Warning: Unknown variable: '::mistral::database_max_overflow'. at /etc/puppet/modules/mistral/manifests/db.pp:62:40
>2018-06-26 09:49:53,747 INFO: Warning: Scope(Class[Mistral]): mistral::rabbit_host, mistral::rabbit_hosts, mistral::rabbit_password, mistral::rabbit_port, mistral::rabbit_userid, mistral::rabbit_virtual_host and mistral::rpc_backend are deprecated. Please use mistral::default_transport_url instead.
>2018-06-26 09:49:53,922 INFO: Warning: ModuleLoader: module 'zaqar' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:49:53,922 INFO: (file & line not available)
>2018-06-26 09:49:54,810 INFO: Warning: ModuleLoader: module 'oslo' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
>2018-06-26 09:49:54,811 INFO: (file & line not available)
>2018-06-26 09:49:54,875 INFO: Warning: Scope(Oslo::Messaging::Rabbit[keystone_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url. Please use oslo::messaging::default::transport_url instead.
>2018-06-26 09:49:55,559 INFO: Warning: Scope(Oslo::Messaging::Rabbit[glance_api_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url. Please use oslo::messaging::default::transport_url instead.
>2018-06-26 09:49:55,568 INFO: Warning: Scope(Oslo::Messaging::Rabbit[glance_registry_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url. Please use oslo::messaging::default::transport_url instead.
>2018-06-26 09:49:55,747 INFO: Warning: Scope(Oslo::Messaging::Rabbit[neutron_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url. Please use oslo::messaging::default::transport_url instead.
>2018-06-26 09:49:55,789 INFO: Warning: Scope(Neutron::Plugins::Ml2::Type_driver[local]): local type_driver is useful only for single-box, because it provides no connectivity between hosts
>2018-06-26 09:49:56,243 INFO: Warning: Scope(Oslo::Messaging::Rabbit[mistral_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url. Please use oslo::messaging::default::transport_url instead.
>2018-06-26 09:49:59,222 INFO: Notice: Compiled catalog for facebook.local.com in environment production in 12.93 seconds
>2018-06-26 09:50:05,899 INFO: Notice: /Stage[setup]/Tripleo::Network::Os_net_config/Exec[os-net-config]/returns: executed successfully
>2018-06-26 09:50:05,916 INFO: Notice: /Stage[setup]/Tripleo::Network::Os_net_config/Exec[trigger-keepalived-restart]: Triggered 'refresh' from 1 events
>2018-06-26 09:50:07,568 INFO: Notice: /Stage[main]/Main/File[/etc/systemd/system/mariadb.service.d]/seltype: seltype changed 'mysqld_unit_file_t' to 'systemd_unit_file_t'
>2018-06-26 09:50:24,601 INFO: Notice: /Stage[main]/Neutron/Package[neutron]/ensure: created
>2018-06-26 09:50:24,613 INFO: Notice: /Stage[main]/Neutron::Deps/Anchor[neutron::install::end]: Triggered 'refresh' from 1 events
>2018-06-26 09:50:24,623 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_host]/ensure: created
>2018-06-26 09:50:24,632 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created
>2018-06-26 09:50:24,637 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created
>2018-06-26 09:50:24,654 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created
>2018-06-26 09:50:24,660 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created
>2018-06-26 09:50:24,695 INFO: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created
>2018-06-26 09:50:24,710 INFO: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created
>2018-06-26 09:50:24,937 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/l3_ha]/ensure: created
>2018-06-26 09:50:24,942 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/max_l3_agents_per_router]/ensure: created
>2018-06-26 09:50:24,952 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/api_workers]/ensure: created
>2018-06-26 09:50:24,957 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/rpc_workers]/ensure: created
>2018-06-26 09:50:24,970 INFO: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_scheduler_driver]/ensure: created
>2018-06-26 09:50:25,016 INFO: Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_url]/ensure: created
>2018-06-26 09:50:25,021 INFO: Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/username]/ensure: created
>2018-06-26 09:50:25,025 INFO: Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/password]/ensure: created
>2018-06-26 09:50:25,030 INFO: Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_id]/ensure: created
>2018-06-26 09:50:25,035 INFO: Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_name]/ensure: created
>2018-06-26 09:50:25,040 INFO: Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_name]/ensure: created
>2018-06-26 09:50:25,045 INFO: Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_id]/ensure: created
>2018-06-26 09:50:25,050 INFO: Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_name]/ensure: created
>2018-06-26 09:50:25,062 INFO: Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_type]/ensure: created
>2018-06-26 09:50:25,067 INFO: Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/tenant_name]/ensure: created
>2018-06-26 09:50:25,073 INFO: Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_status_changes]/ensure: created
>2018-06-26 09:50:25,078 INFO: Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_data_changes]/ensure: created
>2018-06-26 09:50:25,301 INFO: Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_port]/ensure: created
>2018-06-26 09:50:25,332 INFO: Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_firewall_rule]/ensure: created
>2018-06-26 09:50:25,345 INFO: Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_network_gateway]/ensure: created
>2018-06-26 09:50:25,349 INFO: Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_packet_filter]/ensure: created
>2018-06-26 09:50:25,366 INFO: Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created
>2018-06-26 09:50:25,368 INFO: Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created
>2018-06-26 09:50:25,370 INFO: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created
>2018-06-26 09:50:25,372 INFO: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created
>2018-06-26 09:50:25,374 INFO: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created
>2018-06-26 09:50:25,376 INFO: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created
>2018-06-26 09:50:25,378 INFO: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created
>2018-06-26 09:50:25,384 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/auth_type]/ensure: created
>2018-06-26 09:50:25,385 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/auth_url]/ensure: created
>2018-06-26 09:50:25,385 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/username]/ensure: created
>2018-06-26 09:50:25,386 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/password]/ensure: created
>2018-06-26 09:50:25,387 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/project_domain_id]/ensure: created
>2018-06-26 09:50:25,388 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/project_domain_name]/ensure: created
>2018-06-26 09:50:25,389 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/project_name]/ensure: created
>2018-06-26 09:50:25,390 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/user_domain_id]/ensure: created
>2018-06-26 09:50:25,391 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/user_domain_name]/ensure: created
>2018-06-26 09:50:25,391 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Ironic_neutron_agent_config[ironic/region_name]/ensure: created
>2018-06-26 09:50:25,394 INFO: Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_isolated_metadata]/ensure: created
>2018-06-26 09:50:25,397 INFO: Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_metadata_network]/ensure: created
>2018-06-26 09:50:25,400 INFO: Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/state_path]/ensure: created
>2018-06-26 09:50:25,402 INFO: Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/resync_interval]/ensure: created
>2018-06-26 09:50:25,404 INFO: Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/interface_driver]/ensure: created
>2018-06-26 09:50:25,408 INFO: Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/root_helper]/ensure: created
>2018-06-26 09:50:25,411 INFO: Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_config_file]/ensure: created
>2018-06-26 09:50:25,423 INFO: Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/interface_driver]/ensure: created
>2018-06-26 09:50:25,432 INFO: Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/agent_mode]/ensure: created
>2018-06-26 09:50:25,437 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created
>2018-06-26 09:50:25,443 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created
>2018-06-26 09:50:25,448 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created
>2018-06-26 09:50:25,455 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created
>2018-06-26 09:50:25,656 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[ovs-cleanup-service]/enable: enable changed 'false' to 'true'
>2018-06-26 09:50:25,663 INFO: Notice: /Stage[main]/Main/Neutron_config[DEFAULT/notification_driver]/ensure: created
>2018-06-26 09:50:28,894 INFO: Notice: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d]/seltype: seltype changed 'container_unit_file_t' to 'systemd_unit_file_t'
>2018-06-26 09:50:48,735 INFO: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created
>2018-06-26 09:50:48,752 INFO: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created
>2018-06-26 09:50:48,823 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created
>2018-06-26 09:50:49,108 INFO: Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created
>2018-06-26 09:50:49,156 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_password]/ensure: created
>2018-06-26 09:50:49,175 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_userid]/ensure: created
>2018-06-26 09:50:49,184 INFO: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_hosts]/ensure: created
>2018-06-26 09:50:49,611 INFO: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/connection]/ensure: created
>2018-06-26 09:50:49,678 INFO: Notice: /Stage[main]/Neutron::Policy/Oslo::Policy[neutron_config]/Neutron_config[oslo_policy/policy_file]/ensure: created
>2018-06-26 09:50:49,694 INFO: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_uri]/ensure: created
>2018-06-26 09:50:49,699 INFO: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_type]/ensure: created
>2018-06-26 09:50:50,070 INFO: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_url]/ensure: created
>2018-06-26 09:50:50,075 INFO: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/username]/ensure: created
>2018-06-26 09:50:50,080 INFO: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/password]/ensure: created
>2018-06-26 09:50:50,085 INFO: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/user_domain_name]/ensure: created
>2018-06-26 09:50:50,091 INFO: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_name]/ensure: created
>2018-06-26 09:50:50,096 INFO: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_domain_name]/ensure: created
>2018-06-26 09:50:50,109 INFO: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created
>2018-06-26 09:50:50,112 INFO: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created
>2018-06-26 09:50:50,114 INFO: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created
>2018-06-26 09:50:50,116 INFO: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created
>2018-06-26 09:50:50,118 INFO: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created
>2018-06-26 09:50:50,119 INFO: Notice: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]: Triggered 'refresh' from 81
events[0m >2018-06-26 09:50:50,145 INFO: [mNotice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[neutron]/ensure: created[0m >2018-06-26 09:51:01,537 INFO: [mNotice: /Stage[main]/Glance::Db::Sync/Exec[glance-manage db_sync]/returns: executed successfully[0m >2018-06-26 09:51:01,538 INFO: [mNotice: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:51:03,279 INFO: [mNotice: /Stage[main]/Glance::Db::Metadefs/Exec[glance-manage db_load_metadefs]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:51:06,137 INFO: [mNotice: /Stage[main]/Nova::Db::Sync_api/Exec[nova-db-sync-api]/returns: executed successfully[0m >2018-06-26 09:51:06,138 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:51:06,139 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:51:06,628 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:51:09,733 INFO: [mNotice: /Stage[main]/Nova::Cell_v2::Map_cell0/Exec[nova-cell_v2-map_cell0]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:51:09,735 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:51:09,736 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]: Triggered 'refresh' from 2 events[0m >2018-06-26 09:51:21,096 INFO: [mNotice: /Stage[main]/Nova::Db::Sync/Exec[nova-db-sync]/returns: executed successfully[0m >2018-06-26 09:51:24,393 INFO: [mNotice: /Stage[main]/Nova::Db::Sync/Exec[nova-db-sync]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:51:24,395 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::end]: Triggered 'refresh' from 2 events[0m >2018-06-26 09:51:24,395 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::service::begin]: 
Triggered 'refresh' from 2 events[0m >2018-06-26 09:51:27,581 INFO: [mNotice: /Stage[main]/Nova::Cron::Archive_deleted_rows/Cron[nova-manage db archive_deleted_rows]/ensure: created[0m >2018-06-26 09:51:30,965 INFO: [mNotice: /Stage[main]/Nova::Conductor/Nova::Generic_service[conductor]/Service[nova-conductor]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:51:34,469 INFO: [mNotice: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Service[nova-scheduler]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:51:34,560 INFO: [mNotice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[neutron_%]/Mysql_user[neutron@%]/ensure: created[0m >2018-06-26 09:51:34,589 INFO: [mNotice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[neutron_%]/Mysql_grant[neutron@%/neutron.*]/ensure: created[0m >2018-06-26 09:51:34,698 INFO: [mNotice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[neutron_192.0.3.1]/Mysql_user[neutron@192.0.3.1]/ensure: created[0m >2018-06-26 09:51:34,726 INFO: [mNotice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[neutron_192.0.3.1]/Mysql_grant[neutron@192.0.3.1/neutron.*]/ensure: created[0m >2018-06-26 09:51:34,755 INFO: [mNotice: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:51:34,755 INFO: [mNotice: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::begin]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:51:37,445 INFO: [mNotice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: executed successfully[0m >2018-06-26 09:51:38,694 INFO: [mNotice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]: Triggered 'refresh' from 3 events[0m >2018-06-26 09:51:38,695 INFO: [mNotice: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]: 
Triggered 'refresh' from 2 events[0m >2018-06-26 09:51:38,696 INFO: [mNotice: /Stage[main]/Neutron::Deps/Anchor[neutron::service::begin]: Triggered 'refresh' from 3 events[0m >2018-06-26 09:51:39,134 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:51:39,589 INFO: [mNotice: /Stage[main]/Neutron::Agents::L3/Service[neutron-l3]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:51:40,107 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-service]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:51:41,776 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-destroy-patch-ports-service]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:51:42,966 INFO: [mNotice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]/returns: executed successfully[0m >2018-06-26 09:51:42,967 INFO: [mNotice: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:51:42,968 INFO: [mNotice: /Stage[main]/Heat::Deps/Anchor[heat::service::begin]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:51:43,048 INFO: [mNotice: /Stage[main]/Heat::Api/Service[heat-api]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:51:43,120 INFO: [mNotice: /Stage[main]/Heat::Api_cfn/Service[heat-api-cfn]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:51:43,556 INFO: [mNotice: /Stage[main]/Heat::Engine/Service[heat-engine]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:51:44,917 INFO: [mNotice: /Stage[main]/Ironic::Db::Sync/Exec[ironic-dbsync]/returns: executed successfully[0m >2018-06-26 09:51:44,918 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic::dbsync::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:51:44,918 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic::db_online_data_migrations::begin]: Triggered 'refresh' from 1 events[0m 
>2018-06-26 09:51:47,789 INFO: [mNotice: /Stage[main]/Ironic::Db::Online_data_migrations/Exec[ironic-db-online-data-migrations]/returns: executed successfully[0m >2018-06-26 09:51:50,574 INFO: [mNotice: /Stage[main]/Ironic::Db::Online_data_migrations/Exec[ironic-db-online-data-migrations]: Triggered 'refresh' from 2 events[0m >2018-06-26 09:51:50,575 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic::db_online_data_migrations::end]: Triggered 'refresh' from 2 events[0m >2018-06-26 09:51:50,575 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic::service::begin]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:51:50,649 INFO: [mNotice: /Stage[main]/Ironic::Api/Service[ironic-api]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:51:51,510 INFO: [mNotice: /Stage[main]/Ironic::Conductor/Service[ironic-conductor]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:51:51,512 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic::service::end]: Triggered 'refresh' from 2 events[0m >2018-06-26 09:51:51,599 INFO: [mNotice: /Stage[main]/Cinder::Deps/Anchor[cinder::service::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:51:56,099 INFO: [mNotice: /Stage[main]/Mistral::Db::Sync/Exec[mistral-db-sync]/returns: executed successfully[0m >2018-06-26 09:51:56,620 INFO: [mNotice: /Stage[main]/Main/Zaqar::Server_instance[1]/Service[openstack-zaqar@1]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:51:56,621 INFO: [mNotice: /Stage[main]/Zaqar::Deps/Anchor[zaqar::service::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:51:59,423 INFO: [mNotice: /Stage[main]/Keystone::Db::Sync/Exec[keystone-manage db_sync]/returns: executed successfully[0m >2018-06-26 09:51:59,424 INFO: [mNotice: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:52:00,900 INFO: [mNotice: /Stage[main]/Keystone/Exec[keystone-manage bootstrap]: Triggered 'refresh' from 1 events[0m >2018-06-26 
09:52:00,901 INFO: [mNotice: /Stage[main]/Keystone::Deps/Anchor[keystone::service::begin]: Triggered 'refresh' from 2 events[0m >2018-06-26 09:52:02,045 INFO: [mNotice: /Stage[main]/Apache::Service/Service[httpd]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:52:02,046 INFO: [mNotice: /Stage[main]/Keystone::Deps/Anchor[keystone::service::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:52:06,342 INFO: [mNotice: /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_user]/ensure: created[0m >2018-06-26 09:52:08,593 INFO: [mNotice: /Stage[main]/Swift::Keystone::Auth/Keystone_role[swiftoperator]/ensure: created[0m >2018-06-26 09:52:12,864 INFO: [mNotice: /Stage[main]/Heat::Keystone::Domain/Keystone_domain[heat_stack]/ensure: created[0m >2018-06-26 09:52:16,881 INFO: [mNotice: /Stage[main]/Heat::Keystone::Domain/Keystone_user[heat_admin::heat_stack]/ensure: created[0m >2018-06-26 09:52:20,699 INFO: [mNotice: /Stage[main]/Heat::Keystone::Domain/Keystone_user_role[heat_admin::heat_stack@::heat_stack]/ensure: created[0m >2018-06-26 09:52:23,656 INFO: [mNotice: /Stage[main]/Keystone::Endpoint/Keystone::Resource::Service_identity[keystone]/Keystone_service[keystone::identity]/ensure: created[0m >2018-06-26 09:52:27,851 INFO: [mNotice: /Stage[main]/Keystone::Endpoint/Keystone::Resource::Service_identity[keystone]/Keystone_endpoint[regionOne/keystone::identity]/ensure: created[0m >2018-06-26 09:52:27,864 INFO: [1;33mWarning: Puppet::Type::Keystone_tenant::ProviderOpenstack: Support for a resource without the domain set is deprecated in Liberty cycle. It will be dropped in the M-cycle. 
Currently using 'Default' as default domain name while the default domain id is 'c4a7f8d78acf4179bef63ac8f8f0c591'.[0m >2018-06-26 09:52:30,945 INFO: [mNotice: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[service]/ensure: created[0m >2018-06-26 09:52:30,947 INFO: [mNotice: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[admin]/description: description changed 'Bootstrap project for initializing the cloud.' to 'admin tenant'[0m >2018-06-26 09:52:34,625 INFO: [mNotice: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]/email: defined 'email' as 'root@localhost'[0m >2018-06-26 09:52:41,579 INFO: [mNotice: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_user[heat]/ensure: created[0m >2018-06-26 09:52:44,859 INFO: [mNotice: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_user_role[heat@service]/ensure: created[0m >2018-06-26 09:52:45,778 INFO: [mNotice: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_service[heat::orchestration]/ensure: created[0m >2018-06-26 09:52:49,003 INFO: [mNotice: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_endpoint[regionOne/heat::orchestration]/ensure: created[0m >2018-06-26 09:52:52,276 INFO: [mNotice: /Stage[main]/Heat::Keystone::Auth_cfn/Keystone::Resource::Service_identity[heat-cfn]/Keystone_user[heat-cfn]/ensure: created[0m >2018-06-26 09:52:56,059 INFO: [mNotice: /Stage[main]/Heat::Keystone::Auth_cfn/Keystone::Resource::Service_identity[heat-cfn]/Keystone_user_role[heat-cfn@service]/ensure: created[0m >2018-06-26 09:52:56,060 INFO: [mNotice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Triggered 'refresh' from 5 events[0m >2018-06-26 09:52:56,857 INFO: [mNotice: /Stage[main]/Heat::Keystone::Auth_cfn/Keystone::Resource::Service_identity[heat-cfn]/Keystone_service[heat-cfn::cloudformation]/ensure: created[0m >2018-06-26 09:53:00,277 INFO: [mNotice: 
/Stage[main]/Heat::Keystone::Auth_cfn/Keystone::Resource::Service_identity[heat-cfn]/Keystone_endpoint[regionOne/heat-cfn::cloudformation]/ensure: created[0m >2018-06-26 09:53:02,237 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user[neutron]/ensure: created[0m >2018-06-26 09:53:06,056 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user_role[neutron@service]/ensure: created[0m >2018-06-26 09:53:06,858 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_service[neutron::network]/ensure: created[0m >2018-06-26 09:53:10,171 INFO: [mNotice: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_endpoint[regionOne/neutron::network]/ensure: created[0m >2018-06-26 09:53:17,437 INFO: [mNotice: /Stage[main]/Neutron::Server/Service[neutron-server]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:53:19,381 INFO: [mNotice: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user[glance]/ensure: created[0m >2018-06-26 09:53:21,865 INFO: [mNotice: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user_role[glance@service]/ensure: created[0m >2018-06-26 09:53:22,656 INFO: [mNotice: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_service[glance::image]/ensure: created[0m >2018-06-26 09:53:25,883 INFO: [mNotice: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_endpoint[regionOne/glance::image]/ensure: created[0m >2018-06-26 09:53:25,884 INFO: [mNotice: /Stage[main]/Glance::Deps/Anchor[glance::service::begin]: Triggered 'refresh' from 3 events[0m >2018-06-26 09:53:27,858 INFO: [mNotice: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova]/Keystone_user[nova]/ensure: 
created[0m >2018-06-26 09:53:30,381 INFO: [mNotice: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova]/Keystone_user_role[nova@service]/ensure: created[0m >2018-06-26 09:53:31,327 INFO: [mNotice: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova]/Keystone_service[nova::compute]/ensure: created[0m >2018-06-26 09:53:34,524 INFO: [mNotice: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova]/Keystone_endpoint[regionOne/nova::compute]/ensure: created[0m >2018-06-26 09:53:36,517 INFO: [mNotice: /Stage[main]/Nova::Keystone::Auth_placement/Keystone::Resource::Service_identity[placement]/Keystone_user[placement]/ensure: created[0m >2018-06-26 09:53:39,024 INFO: [mNotice: /Stage[main]/Nova::Keystone::Auth_placement/Keystone::Resource::Service_identity[placement]/Keystone_user_role[placement@service]/ensure: created[0m >2018-06-26 09:53:39,859 INFO: [mNotice: /Stage[main]/Nova::Keystone::Auth_placement/Keystone::Resource::Service_identity[placement]/Keystone_service[placement::placement]/ensure: created[0m >2018-06-26 09:53:43,108 INFO: [mNotice: /Stage[main]/Nova::Keystone::Auth_placement/Keystone::Resource::Service_identity[placement]/Keystone_endpoint[regionOne/placement::placement]/ensure: created[0m >2018-06-26 09:53:45,040 INFO: [mNotice: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_user[swift]/ensure: created[0m >2018-06-26 09:53:47,617 INFO: [mNotice: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_user_role[swift@service]/ensure: created[0m >2018-06-26 09:53:48,415 INFO: [mNotice: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_service[swift::object-store]/ensure: created[0m >2018-06-26 09:53:51,708 INFO: [mNotice: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_endpoint[regionOne/swift::object-store]/ensure: created[0m >2018-06-26 
09:53:53,788 INFO: [mNotice: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_user[ironic]/ensure: created[0m >2018-06-26 09:53:56,364 INFO: [mNotice: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_user_role[ironic@service]/ensure: created[0m >2018-06-26 09:53:57,172 INFO: [mNotice: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_service[ironic::baremetal]/ensure: created[0m >2018-06-26 09:54:00,407 INFO: [mNotice: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_endpoint[regionOne/ironic::baremetal]/ensure: created[0m >2018-06-26 09:54:02,440 INFO: [mNotice: /Stage[main]/Ironic::Keystone::Auth_inspector/Keystone::Resource::Service_identity[ironic-inspector]/Keystone_user[ironic-inspector]/ensure: created[0m >2018-06-26 09:54:04,980 INFO: [mNotice: /Stage[main]/Ironic::Keystone::Auth_inspector/Keystone::Resource::Service_identity[ironic-inspector]/Keystone_user_role[ironic-inspector@service]/ensure: created[0m >2018-06-26 09:54:05,809 INFO: [mNotice: /Stage[main]/Ironic::Keystone::Auth_inspector/Keystone::Resource::Service_identity[ironic-inspector]/Keystone_service[ironic-inspector::baremetal-introspection]/ensure: created[0m >2018-06-26 09:54:09,649 INFO: [mNotice: /Stage[main]/Ironic::Keystone::Auth_inspector/Keystone::Resource::Service_identity[ironic-inspector]/Keystone_endpoint[regionOne/ironic-inspector::baremetal-introspection]/ensure: created[0m >2018-06-26 09:54:12,753 INFO: [mNotice: /Stage[main]/Nova::Api/Nova::Generic_service[api]/Service[nova-api]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:54:13,334 INFO: [mNotice: /Stage[main]/Swift::Proxy/Swift::Service[swift-proxy-server]/Service[swift-proxy-server]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:54:13,921 INFO: [mNotice: 
/Stage[main]/Swift::Objectexpirer/Swift::Service[swift-object-expirer]/Service[swift-object-expirer]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:54:15,935 INFO: [mNotice: /Stage[main]/Mistral::Keystone::Auth/Keystone::Resource::Service_identity[mistral]/Keystone_user[mistral]/ensure: created[0m >2018-06-26 09:54:18,551 INFO: [mNotice: /Stage[main]/Mistral::Keystone::Auth/Keystone::Resource::Service_identity[mistral]/Keystone_user_role[mistral@service]/ensure: created[0m >2018-06-26 09:54:19,344 INFO: [mNotice: /Stage[main]/Mistral::Keystone::Auth/Keystone::Resource::Service_identity[mistral]/Keystone_service[mistral::workflowv2]/ensure: created[0m >2018-06-26 09:54:22,539 INFO: [mNotice: /Stage[main]/Mistral::Keystone::Auth/Keystone::Resource::Service_identity[mistral]/Keystone_endpoint[regionOne/mistral::workflowv2]/ensure: created[0m >2018-06-26 09:54:24,536 INFO: [mNotice: /Stage[main]/Zaqar::Keystone::Auth/Keystone::Resource::Service_identity[zaqar]/Keystone_user[zaqar]/ensure: created[0m >2018-06-26 09:54:25,319 INFO: [mNotice: /Stage[main]/Zaqar::Keystone::Auth/Keystone::Resource::Service_identity[zaqar]/Keystone_role[ResellerAdmin]/ensure: created[0m >2018-06-26 09:54:28,735 INFO: [mNotice: /Stage[main]/Zaqar::Keystone::Auth/Keystone::Resource::Service_identity[zaqar]/Keystone_user_role[zaqar@service]/ensure: created[0m >2018-06-26 09:54:29,481 INFO: [mNotice: /Stage[main]/Zaqar::Keystone::Auth/Keystone::Resource::Service_identity[zaqar]/Keystone_service[zaqar::messaging]/ensure: created[0m >2018-06-26 09:54:32,782 INFO: [mNotice: /Stage[main]/Zaqar::Keystone::Auth/Keystone::Resource::Service_identity[zaqar]/Keystone_endpoint[regionOne/zaqar::messaging]/ensure: created[0m >2018-06-26 09:54:34,750 INFO: [mNotice: /Stage[main]/Zaqar::Keystone::Auth_websocket/Keystone::Resource::Service_identity[zaqar-websocket]/Keystone_user[zaqar-websocket]/ensure: created[0m >2018-06-26 09:54:37,313 INFO: [mNotice: 
/Stage[main]/Zaqar::Keystone::Auth_websocket/Keystone::Resource::Service_identity[zaqar-websocket]/Keystone_user_role[zaqar-websocket@service]/ensure: created[0m >2018-06-26 09:54:38,110 INFO: [mNotice: /Stage[main]/Zaqar::Keystone::Auth_websocket/Keystone::Resource::Service_identity[zaqar-websocket]/Keystone_service[zaqar-websocket::messaging-websocket]/ensure: created[0m >2018-06-26 09:54:41,354 INFO: [mNotice: /Stage[main]/Zaqar::Keystone::Auth_websocket/Keystone::Resource::Service_identity[zaqar-websocket]/Keystone_endpoint[regionOne/zaqar-websocket::messaging-websocket]/ensure: created[0m >2018-06-26 09:54:41,905 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Service[ironic-neutron-agent-service]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:54:41,907 INFO: [mNotice: /Stage[main]/Neutron::Deps/Anchor[neutron::service::end]: Triggered 'refresh' from 6 events[0m >2018-06-26 09:54:42,503 INFO: [mNotice: /Stage[main]/Ironic::Inspector/Service[ironic-inspector]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:54:43,096 INFO: [mNotice: /Stage[main]/Ironic::Inspector/Service[ironic-inspector-dnsmasq]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:54:43,098 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic-inspector::service::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:54:52,240 INFO: [mNotice: /Stage[main]/Mistral::Db::Sync/Exec[mistral-db-populate]/returns: executed successfully[0m >2018-06-26 09:55:00,906 INFO: [mNotice: /Stage[main]/Mistral::Db::Sync/Exec[mistral-db-populate]: Triggered 'refresh' from 2 events[0m >2018-06-26 09:55:00,907 INFO: [mNotice: /Stage[main]/Mistral::Deps/Anchor[mistral::dbsync::end]: Triggered 'refresh' from 3 events[0m >2018-06-26 09:55:00,908 INFO: [mNotice: /Stage[main]/Mistral::Deps/Anchor[mistral::service::begin]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:55:01,461 INFO: [mNotice: 
/Stage[main]/Mistral::Api/Service[mistral-api]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:55:02,124 INFO: [mNotice: /Stage[main]/Mistral::Engine/Service[mistral-engine]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:55:02,757 INFO: [mNotice: /Stage[main]/Mistral::Executor/Service[mistral-executor]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:55:02,758 INFO: [mNotice: /Stage[main]/Mistral::Deps/Anchor[mistral::service::end]: Triggered 'refresh' from 3 events[0m >2018-06-26 09:55:06,293 INFO: [mNotice: /Stage[main]/Nova::Compute/Nova::Generic_service[compute]/Service[nova-compute]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:55:06,295 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::service::end]: Triggered 'refresh' from 4 events[0m >2018-06-26 09:55:06,300 INFO: [mNotice: /Stage[main]/Nova::Logging/File[/var/log/nova/nova-manage.log]/seluser: seluser changed 'unconfined_u' to 'system_u'[0m >2018-06-26 09:55:09,321 INFO: [mNotice: /Stage[main]/Nova::Cell_v2::Discover_hosts/Exec[nova-cell_v2-discover_hosts]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:55:09,911 INFO: [mNotice: /Stage[main]/Swift::Storage::Account/Swift::Service[swift-account-reaper]/Service[swift-account-reaper]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:55:10,532 INFO: [mNotice: /Stage[main]/Swift::Storage::Container/Swift::Service[swift-container-updater]/Service[swift-container-updater]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:55:11,146 INFO: [mNotice: /Stage[main]/Swift::Storage::Container/Swift::Service[swift-container-sync]/Service[swift-container-sync]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:55:11,777 INFO: [mNotice: /Stage[main]/Swift::Storage::Object/Swift::Service[swift-object-updater]/Service[swift-object-updater]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:55:12,828 INFO: [mNotice: 
/Stage[main]/Swift::Storage::Object/Swift::Service[swift-object-reconstructor]/Service[swift-object-reconstructor]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:55:13,476 INFO: [mNotice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/Swift::Service[swift-account-server]/Service[swift-account-server]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:55:14,257 INFO: [mNotice: /Stage[main]/Swift::Storage::Container/Swift::Storage::Generic[container]/Swift::Service[swift-container-server]/Service[swift-container-server]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:55:15,087 INFO: [mNotice: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/Swift::Service[swift-object-server]/Service[swift-object-server]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:55:15,303 INFO: [mNotice: /Stage[main]/Swift::Deps/Anchor[swift::service::end]: Triggered 'refresh' from 10 events[0m >2018-06-26 09:55:15,884 INFO: [mNotice: /Stage[main]/Glance::Api/Service[glance-api]/ensure: ensure changed 'stopped' to 'running'[0m >2018-06-26 09:55:15,887 INFO: [mNotice: /Stage[main]/Glance::Deps/Anchor[glance::service::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 09:55:17,133 INFO: [mNotice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created[0m >2018-06-26 09:55:17,865 INFO: [mNotice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created[0m >2018-06-26 09:55:18,990 INFO: [mNotice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created[0m >2018-06-26 09:55:19,673 INFO: [mNotice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created[0m >2018-06-26 09:55:19,730 INFO: [mNotice: 
/Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'[0m >2018-06-26 09:55:19,733 INFO: [mNotice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'[0m >2018-06-26 09:55:21,411 INFO: [mNotice: Applied catalog in 318.85 seconds[0m >2018-06-26 09:55:22,005 INFO: Changes: >2018-06-26 09:55:22,005 INFO: Total: 198 >2018-06-26 09:55:22,005 INFO: Events: >2018-06-26 09:55:22,005 INFO: Success: 198 >2018-06-26 09:55:22,005 INFO: Total: 198 >2018-06-26 09:55:22,005 INFO: Resources: >2018-06-26 09:55:22,006 INFO: Changed: 198 >2018-06-26 09:55:22,006 INFO: Out of sync: 198 >2018-06-26 09:55:22,006 INFO: Total: 2768 >2018-06-26 09:55:22,006 INFO: Restarted: 48 >2018-06-26 09:55:22,006 INFO: Time: >2018-06-26 09:55:22,006 INFO: Policy rcd: 0.00 >2018-06-26 09:55:22,006 INFO: Schedule: 0.00 >2018-06-26 09:55:22,006 INFO: Mysql datadir: 0.00 >2018-06-26 09:55:22,006 INFO: Sysctl: 0.00 >2018-06-26 09:55:22,006 INFO: Sysctl runtime: 0.00 >2018-06-26 09:55:22,007 INFO: Nova cell v2: 0.00 >2018-06-26 09:55:22,007 INFO: Group: 0.00 >2018-06-26 09:55:22,007 INFO: Archive: 0.00 >2018-06-26 09:55:22,007 INFO: Neutron api config: 0.00 >2018-06-26 09:55:22,007 INFO: Swift config: 0.00 >2018-06-26 09:55:22,007 INFO: Resources: 0.00 >2018-06-26 09:55:22,007 INFO: Glance swift config: 0.00 >2018-06-26 09:55:22,007 INFO: User: 0.00 >2018-06-26 09:55:22,007 INFO: Concat file: 0.00 >2018-06-26 09:55:22,007 INFO: Swift object expirer config: 0.00 >2018-06-26 09:55:22,007 INFO: Nova paste api ini: 0.00 >2018-06-26 09:55:22,008 INFO: Ironic neutron agent config: 0.01 >2018-06-26 09:55:22,008 INFO: Concat fragment: 0.01 >2018-06-26 09:55:22,008 INFO: Anchor: 0.01 >2018-06-26 09:55:22,008 INFO: Neutron l3 agent config: 0.01 >2018-06-26 09:55:22,008 INFO: Neutron plugin ml2: 0.02 >2018-06-26 09:55:22,008 INFO: Neutron agent ovs: 0.02 >2018-06-26 09:55:22,008 INFO: 
Neutron dhcp agent config: 0.02 >2018-06-26 09:55:22,008 INFO: Mysql database: 0.03 >2018-06-26 09:55:22,008 INFO: Cron: 0.03 >2018-06-26 09:55:22,008 INFO: Vs bridge: 0.04 >2018-06-26 09:55:22,009 INFO: Swift proxy config: 0.06 >2018-06-26 09:55:22,009 INFO: Mysql grant: 0.11 >2018-06-26 09:55:22,009 INFO: Glance registry config: 0.14 >2018-06-26 09:55:22,009 INFO: Mysql user: 0.18 >2018-06-26 09:55:22,009 INFO: Glance cache config: 0.26 >2018-06-26 09:55:22,009 INFO: Ring container device: 0.27 >2018-06-26 09:55:22,009 INFO: Ring object device: 0.28 >2018-06-26 09:55:22,009 INFO: Ring account device: 0.29 >2018-06-26 09:55:22,010 INFO: Augeas: 0.54 >2018-06-26 09:55:22,010 INFO: Mistral config: 0.59 >2018-06-26 09:55:22,010 INFO: File: 0.61 >2018-06-26 09:55:22,010 INFO: Zaqar config: 0.67 >2018-06-26 09:55:22,010 INFO: Ironic inspector config: 0.75 >2018-06-26 09:55:22,010 INFO: Rabbitmq plugin: 1.26 >2018-06-26 09:55:22,010 INFO: Keystone config: 1.64 >2018-06-26 09:55:22,010 INFO: Keystone tenant: 1.68 >2018-06-26 09:55:22,010 INFO: Keystone service: 12.00 >2018-06-26 09:55:22,011 INFO: Nova config: 15.14 >2018-06-26 09:55:22,011 INFO: Package: 15.84 >2018-06-26 09:55:22,011 INFO: Last run: 1529987122 >2018-06-26 09:55:22,011 INFO: Config retrieval: 16.03 >2018-06-26 09:55:22,011 INFO: Neutron config: 2.08 >2018-06-26 09:55:22,011 INFO: Keystone domain: 2.13 >2018-06-26 09:55:22,011 INFO: Glance api config: 2.27 >2018-06-26 09:55:22,011 INFO: Heat config: 2.94 >2018-06-26 09:55:22,011 INFO: Total: 289.38 >2018-06-26 09:55:22,011 INFO: Firewall: 3.85 >2018-06-26 09:55:22,012 INFO: Keystone user: 33.12 >2018-06-26 09:55:22,012 INFO: Service: 38.85 >2018-06-26 09:55:22,012 INFO: Exec: 39.74 >2018-06-26 09:55:22,012 INFO: Keystone user role: 42.37 >2018-06-26 09:55:22,012 INFO: Keystone endpoint: 43.09 >2018-06-26 09:55:22,012 INFO: Ironic config: 5.15 >2018-06-26 09:55:22,012 INFO: Keystone role: 5.25 >2018-06-26 09:55:22,012 INFO: Filebucket: 0.00 >2018-06-26 
09:55:22,012 INFO: Version: >2018-06-26 09:55:22,012 INFO: Config: 1529986786 >2018-06-26 09:55:22,012 INFO: Puppet: 4.8.2 >2018-06-26 09:55:31,529 INFO: + rc=2 >2018-06-26 09:55:31,530 INFO: + set -e >2018-06-26 09:55:31,530 INFO: + echo 'puppet apply exited with exit code 2' >2018-06-26 09:55:31,530 INFO: puppet apply exited with exit code 2 >2018-06-26 09:55:31,530 INFO: + '[' 2 '!=' 2 -a 2 '!=' 0 ']' >2018-06-26 09:55:31,532 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 50-puppet-stack-config completed >2018-06-26 09:55:31,534 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 ----------------------- PROFILING ----------------------- >2018-06-26 09:55:31,535 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 >2018-06-26 09:55:31,537 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 Target: configure.d >2018-06-26 09:55:31,538 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 >2018-06-26 09:55:31,540 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 Script Seconds >2018-06-26 09:55:31,541 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 --------------------------------------- ---------- >2018-06-26 09:55:31,542 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 >2018-06-26 09:55:31,550 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 10-hiera-disable 0.003 >2018-06-26 09:55:31,556 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 20-os-apply-config 0.158 >2018-06-26 09:55:31,561 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 30-reload-keepalived 0.010 >2018-06-26 09:55:31,566 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 40-hiera-datafiles 0.155 >2018-06-26 09:55:31,571 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 50-puppet-stack-config 350.419 >2018-06-26 09:55:31,575 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 >2018-06-26 09:55:31,576 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 --------------------- END PROFILING --------------------- >2018-06-26 09:55:31,576 INFO: [2018-06-26 09:55:31,576] (os-refresh-config) [INFO] Completed phase configure 
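[Editor's note] The `rc=2` handling traced above is intentional, not an error: with `--detailed-exitcodes`, `puppet apply` returns 0 for "no changes" and 2 for "changes applied successfully", so the guard `'[' 2 '!=' 2 -a 2 '!=' 0 ']'` only aborts on other codes (4 and 6 indicate failures). A minimal sketch of that guard; the function name `check_puppet_rc` is ours, not from the log:

```shell
#!/bin/sh
# Sketch of the exit-code guard seen in the trace above. With
# --detailed-exitcodes, `puppet apply` exits 0 (no changes) or
# 2 (changes applied) on success; 4 and 6 are failure codes.
check_puppet_rc() {
    rc="$1"
    echo "puppet apply exited with exit code $rc"
    if [ "$rc" != 2 ] && [ "$rc" != 0 ]; then
        return 1    # mirrors the log's `[ $rc != 2 -a $rc != 0 ]` test
    fi
    return 0
}

check_puppet_rc 2   # succeeds, as in this run
```

This is why the run continues into the post-configure phase despite `puppet apply exited with exit code 2`.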
>2018-06-26 09:55:31,577 INFO: [2018-06-26 09:55:31,577] (os-refresh-config) [INFO] Starting phase post-configure >2018-06-26 09:55:31,587 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 Running /usr/libexec/os-refresh-config/post-configure.d/10-iptables >2018-06-26 09:55:31,589 INFO: + set -o pipefail >2018-06-26 09:55:31,589 INFO: + EXTERNAL_BRIDGE=br-ctlplane >2018-06-26 09:55:31,589 INFO: + iptables -w -t nat -C PREROUTING -d 169.254.169.254/32 -i br-ctlplane -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8775 >2018-06-26 09:55:31,600 INFO: iptables: No chain/target/match by that name. >2018-06-26 09:55:31,600 INFO: + iptables -w -t nat -I PREROUTING -d 169.254.169.254/32 -i br-ctlplane -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8775 >2018-06-26 09:55:31,609 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 10-iptables completed >2018-06-26 09:55:31,610 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 Running /usr/libexec/os-refresh-config/post-configure.d/80-seedstack-masquerade >2018-06-26 09:55:31,612 INFO: + RULES_SCRIPT=/var/opt/undercloud-stack/masquerade >2018-06-26 09:55:31,612 INFO: + . /var/opt/undercloud-stack/masquerade >2018-06-26 09:55:31,613 INFO: ++ IPTCOMMAND=iptables >2018-06-26 09:55:31,613 INFO: ++ [[ 192.0.3.1 =~ : ]] >2018-06-26 09:55:31,613 INFO: ++ iptables -w -t nat -F BOOTSTACK_MASQ_NEW >2018-06-26 09:55:31,614 INFO: iptables: No chain/target/match by that name. >2018-06-26 09:55:31,614 INFO: ++ true >2018-06-26 09:55:31,614 INFO: ++ iptables -w -t nat -D POSTROUTING -j BOOTSTACK_MASQ_NEW >2018-06-26 09:55:31,615 INFO: iptables v1.4.21: Couldn't load target `BOOTSTACK_MASQ_NEW':No such file or directory >2018-06-26 09:55:31,615 INFO: >2018-06-26 09:55:31,615 INFO: Try `iptables -h' or 'iptables --help' for more information. >2018-06-26 09:55:31,615 INFO: ++ true >2018-06-26 09:55:31,615 INFO: ++ iptables -w -t nat -X BOOTSTACK_MASQ_NEW >2018-06-26 09:55:31,616 INFO: iptables: No chain/target/match by that name. 
>2018-06-26 09:55:31,616 INFO: ++ true >2018-06-26 09:55:31,616 INFO: ++ iptables -w -t nat -N BOOTSTACK_MASQ_NEW >2018-06-26 09:55:31,617 INFO: ++ NETWORK=192.0.3.0/24 >2018-06-26 09:55:31,617 INFO: ++ NETWORKS=192.0.3.0/24, >2018-06-26 09:55:31,617 INFO: ++ NETWORKS=192.0.3.0/24 >2018-06-26 09:55:31,618 INFO: ++ iptables -w -t nat -A BOOTSTACK_MASQ_NEW -s 192.0.3.0/24 -d 192.0.3.0/24 -j RETURN >2018-06-26 09:55:31,618 INFO: ++ iptables -w -t nat -A BOOTSTACK_MASQ_NEW -s 192.0.3.0/24 -j MASQUERADE >2018-06-26 09:55:31,620 INFO: ++ iptables -w -t nat -I POSTROUTING -j BOOTSTACK_MASQ_NEW >2018-06-26 09:55:31,621 INFO: ++ iptables -w -t nat -F BOOTSTACK_MASQ >2018-06-26 09:55:31,622 INFO: iptables: No chain/target/match by that name. >2018-06-26 09:55:31,622 INFO: ++ true >2018-06-26 09:55:31,622 INFO: ++ iptables -w -t nat -D POSTROUTING -j BOOTSTACK_MASQ >2018-06-26 09:55:31,623 INFO: iptables v1.4.21: Couldn't load target `BOOTSTACK_MASQ':No such file or directory >2018-06-26 09:55:31,623 INFO: >2018-06-26 09:55:31,623 INFO: Try `iptables -h' or 'iptables --help' for more information. >2018-06-26 09:55:31,623 INFO: ++ true >2018-06-26 09:55:31,623 INFO: ++ iptables -w -t nat -X BOOTSTACK_MASQ >2018-06-26 09:55:31,624 INFO: iptables: No chain/target/match by that name. 
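[Editor's note] The repeated `iptables: No chain/target/match by that name.` followed by `++ true` above is a deliberate idiom, not a failure: each cleanup command (`-F`, `-D`, `-X` on the BOOTSTACK_MASQ* chains) is suffixed with `|| true` so that, under `set -e`, a chain that does not exist yet on a fresh undercloud does not abort the script; the new chain is then populated and renamed into place with `iptables -E`. A runnable sketch of the idiom, with a stub standing in for iptables (the stub's name and behavior are our assumption, since real iptables needs root):

```shell
#!/bin/sh
# Sketch of the tolerant-cleanup idiom from the masquerade script above.
# A stub stands in for iptables so the pattern can run anywhere: like
# iptables, it fails when asked to flush/delete a chain that is absent.
set -e

iptables_stub() {
    case "$1" in
        -N) return 0 ;;   # creating a chain always succeeds in this stub
        *)  return 1 ;;   # -F/-D/-X on a missing chain fails, like iptables
    esac
}

iptables_stub -F BOOTSTACK_MASQ_NEW || true   # flush: tolerated if absent
iptables_stub -X BOOTSTACK_MASQ_NEW || true   # delete: tolerated if absent
iptables_stub -N BOOTSTACK_MASQ_NEW           # recreate: must succeed
echo "chain rebuilt without tripping set -e"
```

Without the `|| true` suffixes, the very first `-F` on a fresh system would kill the whole os-refresh-config post-configure phase.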
>2018-06-26 09:55:31,624 INFO: ++ true >2018-06-26 09:55:31,624 INFO: ++ iptables -w -t nat -E BOOTSTACK_MASQ_NEW BOOTSTACK_MASQ >2018-06-26 09:55:31,625 INFO: ++ iptables -w -D FORWARD -j REJECT --reject-with icmp-host-prohibited >2018-06-26 09:55:31,627 INFO: + iptables-save >2018-06-26 09:55:31,629 INFO: + /bin/test -f /etc/sysconfig/iptables >2018-06-26 09:55:31,630 INFO: + /bin/grep -q neutron- /etc/sysconfig/iptables >2018-06-26 09:55:31,631 INFO: + /bin/test -f /etc/sysconfig/ip6tables >2018-06-26 09:55:31,632 INFO: + /bin/grep -q neutron- /etc/sysconfig/ip6tables >2018-06-26 09:55:31,633 INFO: + /bin/test -f /etc/sysconfig/iptables >2018-06-26 09:55:31,634 INFO: + /bin/grep -v '\-m comment \--comment' /etc/sysconfig/iptables >2018-06-26 09:55:31,634 INFO: + /bin/grep -q ironic-inspector >2018-06-26 09:55:31,635 INFO: + /bin/test -f /etc/sysconfig/ip6tables >2018-06-26 09:55:31,636 INFO: + /bin/grep -v '\-m comment \--comment' /etc/sysconfig/ip6tables >2018-06-26 09:55:31,636 INFO: + /bin/grep -q ironic-inspector >2018-06-26 09:55:31,639 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 80-seedstack-masquerade completed >2018-06-26 09:55:31,640 INFO: dib-run-parts Tue Jun 26 09:55:31 IST 2018 Running /usr/libexec/os-refresh-config/post-configure.d/98-undercloud-setup >2018-06-26 09:55:31,643 INFO: + source /root/tripleo-undercloud-passwords >2018-06-26 09:55:31,643 INFO: +++ sudo hiera admin_password >2018-06-26 09:55:31,724 INFO: ++ UNDERCLOUD_ADMIN_PASSWORD=password >2018-06-26 09:55:31,725 INFO: +++ sudo hiera keystone::admin_token >2018-06-26 09:55:31,800 INFO: ++ UNDERCLOUD_ADMIN_TOKEN=793411a45b5d715032738018d72e1b026ef47233 >2018-06-26 09:55:31,800 INFO: +++ sudo hiera ceilometer::metering_secret >2018-06-26 09:55:31,873 INFO: ++ UNDERCLOUD_CEILOMETER_METERING_SECRET=41b3e67e5f6dd821e4388ace4dd9bdf520440d2d >2018-06-26 09:55:31,873 INFO: +++ sudo hiera ceilometer::keystone::authtoken::password >2018-06-26 09:55:31,941 INFO: ++ 
UNDERCLOUD_CEILOMETER_PASSWORD=7acbbec3c68af1fcfeb044ff7532d21028ada2a8 >2018-06-26 09:55:31,942 INFO: +++ sudo hiera snmpd_readonly_user_password >2018-06-26 09:55:32,011 INFO: ++ UNDERCLOUD_CEILOMETER_SNMPD_PASSWORD=nil >2018-06-26 09:55:32,011 INFO: +++ sudo hiera snmpd_readonly_user_name >2018-06-26 09:55:32,084 INFO: ++ UNDERCLOUD_CEILOMETER_SNMPD_USER=nil >2018-06-26 09:55:32,084 INFO: +++ sudo hiera admin_password >2018-06-26 09:55:32,159 INFO: ++ UNDERCLOUD_DB_PASSWORD=password >2018-06-26 09:55:32,159 INFO: +++ sudo hiera glance::api::keystone_password >2018-06-26 09:55:32,230 INFO: ++ UNDERCLOUD_GLANCE_PASSWORD=nil >2018-06-26 09:55:32,231 INFO: +++ sudo hiera tripleo::haproxy::haproxy_stats_password >2018-06-26 09:55:32,305 INFO: ++ UNDERCLOUD_HAPROXY_STATS_PASSWORD=4569a85c881d756464be7c58da648cbf47525c5f >2018-06-26 09:55:32,305 INFO: +++ sudo hiera heat::engine::auth_encryption_key >2018-06-26 09:55:32,376 INFO: ++ UNDERCLOUD_HEAT_ENCRYPTION_KEY=e0c341aba57c764d8fe1f87be3bd740a >2018-06-26 09:55:32,376 INFO: +++ sudo hiera heat::keystone_password >2018-06-26 09:55:32,446 INFO: ++ UNDERCLOUD_HEAT_PASSWORD=nil >2018-06-26 09:55:32,446 INFO: +++ sudo hiera heat_stack_domain_admin_password >2018-06-26 09:55:32,515 INFO: ++ UNDERCLOUD_HEAT_STACK_DOMAIN_ADMIN_PASSWORD=64eb19a5abf28775789f9559dfe55300603ae9d2 >2018-06-26 09:55:32,515 INFO: +++ sudo hiera horizon_secret_key >2018-06-26 09:55:32,581 INFO: ++ UNDERCLOUD_HORIZON_SECRET_KEY=6db597390a5629fe004c362dbd964476dcc43bdb >2018-06-26 09:55:32,581 INFO: +++ sudo hiera ironic::api::authtoken::password >2018-06-26 09:55:32,647 INFO: ++ UNDERCLOUD_IRONIC_PASSWORD=1525d9a67d1b63f0360b92976cc2c4f999f80e98 >2018-06-26 09:55:32,647 INFO: +++ sudo hiera neutron::server::auth_password >2018-06-26 09:55:32,713 INFO: ++ UNDERCLOUD_NEUTRON_PASSWORD=nil >2018-06-26 09:55:32,714 INFO: +++ sudo hiera nova::keystone::authtoken::password >2018-06-26 09:55:32,789 INFO: ++ 
UNDERCLOUD_NOVA_PASSWORD=0dc6868d8eb5b67438581a33f6bfec9e2983a47d >2018-06-26 09:55:32,790 INFO: +++ sudo hiera rabbit_cookie >2018-06-26 09:55:32,866 INFO: ++ UNDERCLOUD_RABBIT_COOKIE=0631ad8d93548cfcad81459e26a0af979537eb83 >2018-06-26 09:55:32,866 INFO: +++ sudo hiera rabbit_password >2018-06-26 09:55:32,937 INFO: ++ UNDERCLOUD_RABBIT_PASSWORD=nil >2018-06-26 09:55:32,937 INFO: +++ sudo hiera rabbit_username >2018-06-26 09:55:33,008 INFO: ++ UNDERCLOUD_RABBIT_USERNAME=nil >2018-06-26 09:55:33,008 INFO: +++ sudo hiera swift::swift_hash_suffix >2018-06-26 09:55:33,078 INFO: ++ UNDERCLOUD_SWIFT_HASH_SUFFIX=nil >2018-06-26 09:55:33,078 INFO: +++ sudo hiera swift::proxy::authtoken::admin_password >2018-06-26 09:55:33,145 INFO: ++ UNDERCLOUD_SWIFT_PASSWORD=nil >2018-06-26 09:55:33,145 INFO: +++ sudo hiera mistral::admin_password >2018-06-26 09:55:33,211 INFO: ++ UNDERCLOUD_MISTRAL_PASSWORD=nil >2018-06-26 09:55:33,211 INFO: +++ sudo hiera zaqar::keystone::authtoken::password >2018-06-26 09:55:33,285 INFO: ++ UNDERCLOUD_ZAQAR_PASSWORD=09fcaec3a1bba72d8515c0678c5a98096f66b972 >2018-06-26 09:55:33,285 INFO: +++ sudo hiera cinder::keystone::authtoken::password >2018-06-26 09:55:33,354 INFO: ++ UNDERCLOUD_CINDER_PASSWORD=d6ce45db7dfeaaea3e3c6f1d238229cf54a3b924 >2018-06-26 09:55:33,354 INFO: + source /root/stackrc >2018-06-26 09:55:33,354 INFO: +++ set >2018-06-26 09:55:33,355 INFO: +++ awk '{FS="="} /^OS_/ {print $1}' >2018-06-26 09:55:33,356 INFO: ++ NOVA_VERSION=1.1 >2018-06-26 09:55:33,356 INFO: ++ export NOVA_VERSION >2018-06-26 09:55:33,357 INFO: ++ OS_PASSWORD=password >2018-06-26 09:55:33,357 INFO: ++ export OS_PASSWORD >2018-06-26 09:55:33,357 INFO: ++ OS_AUTH_TYPE=password >2018-06-26 09:55:33,357 INFO: ++ export OS_AUTH_TYPE >2018-06-26 09:55:33,357 INFO: ++ OS_AUTH_URL=http://192.0.3.1:5000/ >2018-06-26 09:55:33,357 INFO: ++ export OS_AUTH_URL >2018-06-26 09:55:33,357 INFO: ++ OS_USERNAME=admin >2018-06-26 09:55:33,357 INFO: ++ OS_PROJECT_NAME=admin >2018-06-26 
09:55:33,358 INFO: ++ COMPUTE_API_VERSION=1.1 >2018-06-26 09:55:33,358 INFO: ++ IRONIC_API_VERSION=1.34 >2018-06-26 09:55:33,358 INFO: ++ OS_BAREMETAL_API_VERSION=1.34 >2018-06-26 09:55:33,358 INFO: ++ OS_NO_CACHE=True >2018-06-26 09:55:33,358 INFO: ++ OS_CLOUDNAME=undercloud >2018-06-26 09:55:33,358 INFO: ++ export OS_USERNAME >2018-06-26 09:55:33,358 INFO: ++ export OS_PROJECT_NAME >2018-06-26 09:55:33,358 INFO: ++ export COMPUTE_API_VERSION >2018-06-26 09:55:33,359 INFO: ++ export IRONIC_API_VERSION >2018-06-26 09:55:33,359 INFO: ++ export OS_BAREMETAL_API_VERSION >2018-06-26 09:55:33,359 INFO: ++ export OS_NO_CACHE >2018-06-26 09:55:33,359 INFO: ++ export OS_CLOUDNAME >2018-06-26 09:55:33,359 INFO: ++ OS_IDENTITY_API_VERSION=3 >2018-06-26 09:55:33,359 INFO: ++ export OS_IDENTITY_API_VERSION >2018-06-26 09:55:33,359 INFO: ++ OS_PROJECT_DOMAIN_NAME=Default >2018-06-26 09:55:33,360 INFO: ++ export OS_PROJECT_DOMAIN_NAME >2018-06-26 09:55:33,360 INFO: ++ OS_USER_DOMAIN_NAME=Default >2018-06-26 09:55:33,360 INFO: ++ export OS_USER_DOMAIN_NAME >2018-06-26 09:55:33,360 INFO: ++ '[' -z '' ']' >2018-06-26 09:55:33,360 INFO: ++ export PS1= >2018-06-26 09:55:33,360 INFO: ++ PS1= >2018-06-26 09:55:33,360 INFO: ++ export 'PS1=${OS_CLOUDNAME:+($OS_CLOUDNAME)} ' >2018-06-26 09:55:33,360 INFO: ++ PS1='${OS_CLOUDNAME:+($OS_CLOUDNAME)} ' >2018-06-26 09:55:33,361 INFO: ++ export CLOUDPROMPT_ENABLED=1 >2018-06-26 09:55:33,361 INFO: ++ CLOUDPROMPT_ENABLED=1 >2018-06-26 09:55:33,361 INFO: + INSTACK_ROOT= >2018-06-26 09:55:33,361 INFO: + export INSTACK_ROOT >2018-06-26 09:55:33,361 INFO: + '[' -n '' ']' >2018-06-26 09:55:33,361 INFO: + '[' '!' -f /root/.ssh/authorized_keys ']' >2018-06-26 09:55:33,361 INFO: + sudo mkdir -p /root/.ssh >2018-06-26 09:55:33,370 INFO: + sudo chmod 7000 /root/.ssh/ >2018-06-26 09:55:33,381 INFO: + sudo touch /root/.ssh/authorized_keys >2018-06-26 09:55:33,392 INFO: + sudo chmod 600 /root/.ssh/authorized_keys >2018-06-26 09:55:33,402 INFO: + '[' '!' 
-f /root/.ssh/id_rsa ']' >2018-06-26 09:55:33,403 INFO: + ssh-keygen -b 1024 -N '' -f /root/.ssh/id_rsa >2018-06-26 09:55:33,432 INFO: Generating public/private rsa key pair. >2018-06-26 09:55:33,432 INFO: Your identification has been saved in /root/.ssh/id_rsa. >2018-06-26 09:55:33,433 INFO: Your public key has been saved in /root/.ssh/id_rsa.pub. >2018-06-26 09:55:33,433 INFO: The key fingerprint is: >2018-06-26 09:55:33,433 INFO: SHA256:BcdMrMd5YT5h/RyPBeyolxnDAvr4rpnhIpSm3fFz8vg root@facebook.local.com >2018-06-26 09:55:33,433 INFO: The key's randomart image is: >2018-06-26 09:55:33,433 INFO: +---[RSA 1024]----+ >2018-06-26 09:55:33,433 INFO: | .=o o.. | >2018-06-26 09:55:33,433 INFO: | .o+ = o..| >2018-06-26 09:55:33,434 INFO: | . +.* = =o| >2018-06-26 09:55:33,434 INFO: | . ..= O o +| >2018-06-26 09:55:33,434 INFO: | . oS. + * | >2018-06-26 09:55:33,434 INFO: | + . . . . + | >2018-06-26 09:55:33,434 INFO: | = . o.. . | >2018-06-26 09:55:33,434 INFO: |. o o.+=o | >2018-06-26 09:55:33,434 INFO: | . 
.*OE | >2018-06-26 09:55:33,435 INFO: +----[SHA256]-----+ >2018-06-26 09:55:33,435 INFO: + cat /root/.ssh/id_rsa.pub >2018-06-26 09:55:33,435 INFO: + '[' -e /usr/sbin/getenforce ']' >2018-06-26 09:55:33,435 INFO: ++ getenforce >2018-06-26 09:55:33,443 INFO: + '[' Enforcing == Enforcing ']' >2018-06-26 09:55:33,443 INFO: + set +e >2018-06-26 09:55:33,444 INFO: ++ find /root/.ssh/ -exec ls -lZ '{}' ';' >2018-06-26 09:55:33,444 INFO: ++ grep -v ssh_home_t >2018-06-26 09:55:33,451 INFO: + selinux_wrong_permission= >2018-06-26 09:55:33,451 INFO: + set -e >2018-06-26 09:55:33,451 INFO: + '[' -n '' ']' >2018-06-26 09:55:33,451 INFO: ++ openstack project show admin >2018-06-26 09:55:33,451 INFO: ++ awk '$2=="id" {print $4}' >2018-06-26 09:55:38,190 INFO: + openstack quota set --cores -1 --instances -1 --ram -1 13835fbb8e0947a9b3fa174b9a22cdb9 >2018-06-26 09:55:50,302 INFO: + rm -rf /root/.novaclient >2018-06-26 09:55:50,307 INFO: dib-run-parts Tue Jun 26 09:55:50 IST 2018 98-undercloud-setup completed >2018-06-26 09:55:50,309 INFO: dib-run-parts Tue Jun 26 09:55:50 IST 2018 Running /usr/libexec/os-refresh-config/post-configure.d/99-refresh-completed >2018-06-26 09:55:50,312 INFO: ++ os-apply-config --key completion-handle --type raw --key-default '' >2018-06-26 09:55:50,454 INFO: [2018/06/26 09:55:50 AM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json >2018-06-26 09:55:50,460 INFO: + HANDLE= >2018-06-26 09:55:50,461 INFO: ++ os-apply-config --key completion-signal --type raw --key-default '' >2018-06-26 09:55:50,601 INFO: [2018/06/26 09:55:50 AM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json >2018-06-26 09:55:50,607 INFO: + SIGNAL= >2018-06-26 09:55:50,607 INFO: ++ os-apply-config --key instance-id --type raw --key-default '' >2018-06-26 09:55:50,750 INFO: [2018/06/26 09:55:50 AM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json >2018-06-26 09:55:50,755 INFO: 
+ ID= >2018-06-26 09:55:50,755 INFO: + '[' -n '' ']' >2018-06-26 09:55:50,756 INFO: + exit 0 >2018-06-26 09:55:50,758 INFO: dib-run-parts Tue Jun 26 09:55:50 IST 2018 99-refresh-completed completed >2018-06-26 09:55:50,759 INFO: dib-run-parts Tue Jun 26 09:55:50 IST 2018 ----------------------- PROFILING ----------------------- >2018-06-26 09:55:50,760 INFO: dib-run-parts Tue Jun 26 09:55:50 IST 2018 >2018-06-26 09:55:50,762 INFO: dib-run-parts Tue Jun 26 09:55:50 IST 2018 Target: post-configure.d >2018-06-26 09:55:50,763 INFO: dib-run-parts Tue Jun 26 09:55:50 IST 2018 >2018-06-26 09:55:50,764 INFO: dib-run-parts Tue Jun 26 09:55:50 IST 2018 Script Seconds >2018-06-26 09:55:50,765 INFO: dib-run-parts Tue Jun 26 09:55:50 IST 2018 --------------------------------------- ---------- >2018-06-26 09:55:50,766 INFO: dib-run-parts Tue Jun 26 09:55:50 IST 2018 >2018-06-26 09:55:50,773 INFO: dib-run-parts Tue Jun 26 09:55:50 IST 2018 10-iptables 0.020 >2018-06-26 09:55:50,778 INFO: dib-run-parts Tue Jun 26 09:55:50 IST 2018 80-seedstack-masquerade 0.028 >2018-06-26 09:55:50,783 INFO: dib-run-parts Tue Jun 26 09:55:50 IST 2018 98-undercloud-setup 18.665 >2018-06-26 09:55:50,788 INFO: dib-run-parts Tue Jun 26 09:55:50 IST 2018 99-refresh-completed 0.447 >2018-06-26 09:55:50,791 INFO: dib-run-parts Tue Jun 26 09:55:50 IST 2018 >2018-06-26 09:55:50,792 INFO: dib-run-parts Tue Jun 26 09:55:50 IST 2018 --------------------- END PROFILING --------------------- >2018-06-26 09:55:50,793 INFO: [2018-06-26 09:55:50,792] (os-refresh-config) [INFO] Completed phase post-configure >2018-06-26 09:55:50,801 INFO: os-refresh-config completed successfully >2018-06-26 09:55:51,001 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:5000/ -H "Accept: application/json" -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" >2018-06-26 09:55:51,003 DEBUG: Starting new HTTP connection (1): 192.0.3.1 >2018-06-26 09:55:52,139 DEBUG: http://192.0.3.1:5000 "GET / HTTP/1.1" 300 593 
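[Editor's note] The `GET /` returning HTTP 300 above is Keystone's version-discovery document: a Multiple Choices response listing v3.10 (stable) and v2.0 (deprecated), from which keystoneauth1 selects the stable endpoint before POSTing to `/v3/auth/tokens`. A rough shell approximation of that selection, using a trimmed copy of the payload from this log (a real client parses the JSON properly; the sed pattern is ours):

```shell
#!/bin/sh
# Sketch: pick the "stable" API version out of a Keystone discovery
# document. The payload is a trimmed-down copy of the 300 response body
# in the log; keystoneauth1 does this with a JSON parser, not sed.
versions='{"versions": {"values": [{"status": "stable", "id": "v3.10"}, {"status": "deprecated", "id": "v2.0"}]}}'

stable_id=$(printf '%s' "$versions" \
    | sed -n 's/.*"status": "stable", "id": "\([^"]*\)".*/\1/p')

echo "selected identity API version: $stable_id"
```

The selected version is why the subsequent authentication request goes to `http://192.0.3.1:5000/v3/auth/tokens` rather than the deprecated v2.0 path.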
>2018-06-26 09:55:52,145 DEBUG: RESP: [300] Date: Tue, 26 Jun 2018 04:25:51 GMT Server: Apache Vary: X-Auth-Token Content-Length: 593 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.0.3.1:5000/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.0.3.1:5000/v2.0/", "rel": "self"}, {"href": "https://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}} > >2018-06-26 09:55:52,146 DEBUG: Making authentication request to http://192.0.3.1:5000/v3/auth/tokens >2018-06-26 09:55:52,553 DEBUG: http://192.0.3.1:5000 "POST /v3/auth/tokens HTTP/1.1" 201 7993 >2018-06-26 09:55:52,554 DEBUG: {"token": {"is_domain": false, "methods": ["password"], "roles": [{"id": "a19af673dce44d89bec07da60746e8e4", "name": "admin"}], "expires_at": "2018-06-26T08:25:52.000000Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "13835fbb8e0947a9b3fa174b9a22cdb9", "name": "admin"}, "catalog": [{"endpoints": [{"url": "http://192.0.3.1:5050", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "ab5c482d7d7a4a2dbe585fd722a6ca73"}, {"url": "http://192.0.3.1:5050", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "bb4e26d4adcd460eb44821e899be9ebb"}, {"url": "http://192.0.3.1:5050", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "dcf6a9debd8f4934aa384251e7613cb5"}], "type": "baremetal-introspection", "id": "084902dec7484ca0b731c2f39c33ab52", "name": "ironic-inspector"}, {"endpoints": [{"url": "ws://192.0.3.1:9000", "interface": "internal", 
"region": "regionOne", "region_id": "regionOne", "id": "418298d93a3544ddb99bd2015af10e45"}, {"url": "ws://192.0.3.1:9000", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "4413828ebe134d8bbad9babe9f81e7c5"}, {"url": "ws://192.0.3.1:9000", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "81fac1a734154da88c398e772f6e7cb3"}], "type": "messaging-websocket", "id": "0a6a1173fb884a5a82322e44a1fc0eea", "name": "zaqar-websocket"}, {"endpoints": [{"url": "http://192.0.3.1:8004/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "4a1d37b9994a45d4a6b041013673c2e9"}, {"url": "http://192.0.3.1:8004/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "8485f45bf105494a81c4d8ffcdbffc7d"}, {"url": "http://192.0.3.1:8004/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "fe9568bd34c94bba8d04dad0fda5435e"}], "type": "orchestration", "id": "115d8bc598754862b67fc9b7c3dcabc1", "name": "heat"}, {"endpoints": [{"url": "http://192.0.3.1:8080/v1/AUTH_13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "50904c3c2052433ca4e85e1f870a96ee"}, {"url": "http://192.0.3.1:8080/v1/AUTH_13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "826f9ad5da574268a3a9864df3423b8d"}, {"url": "http://192.0.3.1:8080", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "9bcb806ddd8f45c381a39fcb1612ef0a"}], "type": "object-store", "id": "158a9ec0b8e8442a91d539c94f7f3e0d", "name": "swift"}, {"endpoints": [{"url": "http://192.0.3.1:9696", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "8f27927fd8ea4ce29ff057a4f87484c6"}, {"url": "http://192.0.3.1:9696", "interface": "public", "region": 
"regionOne", "region_id": "regionOne", "id": "e2f7d421188c484c8560cfc98ba36498"}, {"url": "http://192.0.3.1:9696", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "ef58d0445d78427c991ddf1935bdecca"}], "type": "network", "id": "4413143a83434a35aacc03625951c5e6", "name": "neutron"}, {"endpoints": [{"url": "http://192.0.3.1:8989/v2", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "60120820741f409a86c4fc04675e87f5"}, {"url": "http://192.0.3.1:8989/v2", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "7f57a70539474749a8732e237cd3d047"}, {"url": "http://192.0.3.1:8989/v2", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "838632e4dad7499683622be1425ae9f9"}], "type": "workflowv2", "id": "4fd514dc06964316ac0a0ce00ec69ac3", "name": "mistral"}, {"endpoints": [{"url": "http://192.0.3.1:8000/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "29f6d67693b2422da3797af84fa584d0"}, {"url": "http://192.0.3.1:8000/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "9d974513a36f4a1cb4c1a909492870f2"}, {"url": "http://192.0.3.1:8000/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "fbb25e17c719472eb5d34cad0238d098"}], "type": "cloudformation", "id": "56cff4af5f114405a3c2f0fc77a22eb3", "name": "heat-cfn"}, {"endpoints": [{"url": "http://192.0.3.1:8888", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "5e779a349b1742aabeebb6722260c17d"}, {"url": "http://192.0.3.1:8888", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "87f59b4dfb0445bca44bf310b77be097"}, {"url": "http://192.0.3.1:8888", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "980bf5c9b80b4111b5ba19dcc5274866"}], "type": 
"messaging", "id": "6051d4397a684f3daf43f2ec39727c26", "name": "zaqar"}, {"endpoints": [{"url": "http://192.0.3.1:8774/v2.1", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "217c1916df124498a130051b0d2929b3"}, {"url": "http://192.0.3.1:8774/v2.1", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "6e0f74f28b824f979fb5f5cc30bd3c3f"}, {"url": "http://192.0.3.1:8774/v2.1", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "ef43d40f16b24c758abce9b806f3ab04"}], "type": "compute", "id": "6670f1f004934179b4e2d17ac8ac4559", "name": "nova"}, {"endpoints": [{"url": "http://192.0.3.1:9292", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "61c209b4b8f644d191bae26716309f26"}, {"url": "http://192.0.3.1:9292", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "9447a8abbe6b4a6b86bb0299666ba978"}, {"url": "http://192.0.3.1:9292", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "dd5cb9ddfe5e496a9ae10f8dc30e3596"}], "type": "image", "id": "8d4ca6bed6b14c2e9ef1634a7f86a1bf", "name": "glance"}, {"endpoints": [{"url": "http://192.0.3.1:6385", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "68862b76576e4797ae9b44e7e920a69d"}, {"url": "http://192.0.3.1:6385", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "9b6360b588564179a2ced0f5fd842e36"}, {"url": "http://192.0.3.1:6385", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "ba8e82ab1d98411f853796bbb04778d4"}], "type": "baremetal", "id": "9f9e76a976564a1e8f0941929009e0ab", "name": "ironic"}, {"endpoints": [{"url": "http://192.0.3.1:8778/placement", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "00bb90f687b4403c8d2d4e5015504ae4"}, {"url": "http://192.0.3.1:8778/placement", "interface": "public", "region": "regionOne", "region_id": "regionOne", 
"id": "227bf279774b40a8b6391b570de22a80"}, {"url": "http://192.0.3.1:8778/placement", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "ceaf819496d74a0496c09c9b7c9c0cd4"}], "type": "placement", "id": "ac1c0292ca3a42a1ad0ca09c9a2f2db5", "name": "placement"}, {"endpoints": [{"url": "http://192.0.3.1:5000", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "0716550d71d94a76bb684b55a29bda59"}, {"url": "http://192.0.3.1:35357", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "1d6b1d8c41204fe7a2099501c32b0288"}, {"url": "http://192.0.3.1:5000", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "e375868d7ee04e089d76ac8e49a498e3"}], "type": "identity", "id": "ce6de0f0b70b4955921edafe97432e27", "name": "keystone"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "admin", "id": "6e71dffd643e4c24a0efff2673fdac32"}, "audit_ids": ["g7kN-TH1TyWLlxn3Pi-EpQ"], "issued_at": "2018-06-26T04:25:52.000000Z"}} >2018-06-26 09:55:52,576 DEBUG: found extension EntryPoint.parse('gnocchi-basic = gnocchiclient.auth:GnocchiBasicLoader') >2018-06-26 09:55:52,576 DEBUG: found extension EntryPoint.parse('gnocchi-noauth = gnocchiclient.auth:GnocchiNoAuthLoader') >2018-06-26 09:55:52,576 DEBUG: found extension EntryPoint.parse('v2token = keystoneauth1.loading._plugins.identity.v2:Token') >2018-06-26 09:55:52,576 DEBUG: found extension EntryPoint.parse('none = keystoneauth1.loading._plugins.noauth:NoAuth') >2018-06-26 09:55:52,576 DEBUG: found extension EntryPoint.parse('v3oauth1 = keystoneauth1.extras.oauth1._loading:V3OAuth1') >2018-06-26 09:55:52,576 DEBUG: found extension EntryPoint.parse('admin_token = keystoneauth1.loading._plugins.admin_token:AdminToken') >2018-06-26 09:55:52,576 DEBUG: found extension EntryPoint.parse('v3oidcauthcode = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuthorizationCode') >2018-06-26 
09:55:52,576 DEBUG: found extension EntryPoint.parse('v2password = keystoneauth1.loading._plugins.identity.v2:Password') >2018-06-26 09:55:52,576 DEBUG: found extension EntryPoint.parse('v3samlpassword = keystoneauth1.extras._saml2._loading:Saml2Password') >2018-06-26 09:55:52,576 DEBUG: found extension EntryPoint.parse('v3password = keystoneauth1.loading._plugins.identity.v3:Password') >2018-06-26 09:55:52,576 DEBUG: found extension EntryPoint.parse('v3adfspassword = keystoneauth1.extras._saml2._loading:ADFSPassword') >2018-06-26 09:55:52,576 DEBUG: found extension EntryPoint.parse('v3oidcaccesstoken = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAccessToken') >2018-06-26 09:55:52,576 DEBUG: found extension EntryPoint.parse('v3oidcpassword = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword') >2018-06-26 09:55:52,576 DEBUG: found extension EntryPoint.parse('v3kerberos = keystoneauth1.extras.kerberos._loading:Kerberos') >2018-06-26 09:55:52,577 DEBUG: found extension EntryPoint.parse('token = keystoneauth1.loading._plugins.identity.generic:Token') >2018-06-26 09:55:52,577 DEBUG: found extension EntryPoint.parse('v3oidcclientcredentials = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectClientCredentials') >2018-06-26 09:55:52,577 DEBUG: found extension EntryPoint.parse('v3tokenlessauth = keystoneauth1.loading._plugins.identity.v3:TokenlessAuth') >2018-06-26 09:55:52,577 DEBUG: found extension EntryPoint.parse('v3token = keystoneauth1.loading._plugins.identity.v3:Token') >2018-06-26 09:55:52,577 DEBUG: found extension EntryPoint.parse('v3totp = keystoneauth1.loading._plugins.identity.v3:TOTP') >2018-06-26 09:55:52,577 DEBUG: found extension EntryPoint.parse('v3applicationcredential = keystoneauth1.loading._plugins.identity.v3:ApplicationCredential') >2018-06-26 09:55:52,577 DEBUG: found extension EntryPoint.parse('password = keystoneauth1.loading._plugins.identity.generic:Password') >2018-06-26 09:55:52,577 DEBUG: found extension 
EntryPoint.parse('v3fedkerb = keystoneauth1.extras.kerberos._loading:MappedKerberos') >2018-06-26 09:55:52,578 DEBUG: found extension EntryPoint.parse('token_endpoint = openstackclient.api.auth_plugin:TokenEndpoint') >2018-06-26 09:55:52,578 DEBUG: found extension EntryPoint.parse('aodh-noauth = aodhclient.noauth:AodhNoAuthLoader') >2018-06-26 09:55:52,578 DEBUG: found extension EntryPoint.parse('noauth = cinderclient.contrib.noauth:CinderNoAuthLoader') >2018-06-26 09:55:52,578 DEBUG: found extension EntryPoint.parse('v1password = swiftclient.authv1:PasswordLoader') >2018-06-26 09:55:52,608 DEBUG: Manager defaults:unknown running task network.GET.networks >2018-06-26 09:55:52,608 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:5000/ -H "Accept: application/json" -H "User-Agent: os-client-config/1.29.0 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" >2018-06-26 09:55:52,609 DEBUG: Starting new HTTP connection (1): 192.0.3.1 >2018-06-26 09:55:52,612 DEBUG: http://192.0.3.1:5000 "GET / HTTP/1.1" 300 593 >2018-06-26 09:55:52,613 DEBUG: RESP: [300] Date: Tue, 26 Jun 2018 04:25:52 GMT Server: Apache Vary: X-Auth-Token Content-Length: 593 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.0.3.1:5000/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.0.3.1:5000/v2.0/", "rel": "self"}, {"href": "https://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}} > >2018-06-26 09:55:52,613 DEBUG: Making authentication request to http://192.0.3.1:5000/v3/auth/tokens >2018-06-26 09:55:53,262 DEBUG: 
http://192.0.3.1:5000 "POST /v3/auth/tokens HTTP/1.1" 201 7993 >2018-06-26 09:55:53,264 DEBUG: {"token": {"is_domain": false, "methods": ["password"], "roles": [{"id": "a19af673dce44d89bec07da60746e8e4", "name": "admin"}], "expires_at": "2018-06-26T08:25:53.000000Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "13835fbb8e0947a9b3fa174b9a22cdb9", "name": "admin"}, "catalog": [{"endpoints": [{"url": "http://192.0.3.1:5050", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "ab5c482d7d7a4a2dbe585fd722a6ca73"}, {"url": "http://192.0.3.1:5050", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "bb4e26d4adcd460eb44821e899be9ebb"}, {"url": "http://192.0.3.1:5050", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "dcf6a9debd8f4934aa384251e7613cb5"}], "type": "baremetal-introspection", "id": "084902dec7484ca0b731c2f39c33ab52", "name": "ironic-inspector"}, {"endpoints": [{"url": "ws://192.0.3.1:9000", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "418298d93a3544ddb99bd2015af10e45"}, {"url": "ws://192.0.3.1:9000", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "4413828ebe134d8bbad9babe9f81e7c5"}, {"url": "ws://192.0.3.1:9000", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "81fac1a734154da88c398e772f6e7cb3"}], "type": "messaging-websocket", "id": "0a6a1173fb884a5a82322e44a1fc0eea", "name": "zaqar-websocket"}, {"endpoints": [{"url": "http://192.0.3.1:8004/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "4a1d37b9994a45d4a6b041013673c2e9"}, {"url": "http://192.0.3.1:8004/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "8485f45bf105494a81c4d8ffcdbffc7d"}, {"url": "http://192.0.3.1:8004/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "internal", 
"region": "regionOne", "region_id": "regionOne", "id": "fe9568bd34c94bba8d04dad0fda5435e"}], "type": "orchestration", "id": "115d8bc598754862b67fc9b7c3dcabc1", "name": "heat"}, {"endpoints": [{"url": "http://192.0.3.1:8080/v1/AUTH_13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "50904c3c2052433ca4e85e1f870a96ee"}, {"url": "http://192.0.3.1:8080/v1/AUTH_13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "826f9ad5da574268a3a9864df3423b8d"}, {"url": "http://192.0.3.1:8080", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "9bcb806ddd8f45c381a39fcb1612ef0a"}], "type": "object-store", "id": "158a9ec0b8e8442a91d539c94f7f3e0d", "name": "swift"}, {"endpoints": [{"url": "http://192.0.3.1:9696", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "8f27927fd8ea4ce29ff057a4f87484c6"}, {"url": "http://192.0.3.1:9696", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "e2f7d421188c484c8560cfc98ba36498"}, {"url": "http://192.0.3.1:9696", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "ef58d0445d78427c991ddf1935bdecca"}], "type": "network", "id": "4413143a83434a35aacc03625951c5e6", "name": "neutron"}, {"endpoints": [{"url": "http://192.0.3.1:8989/v2", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "60120820741f409a86c4fc04675e87f5"}, {"url": "http://192.0.3.1:8989/v2", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "7f57a70539474749a8732e237cd3d047"}, {"url": "http://192.0.3.1:8989/v2", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "838632e4dad7499683622be1425ae9f9"}], "type": "workflowv2", "id": "4fd514dc06964316ac0a0ce00ec69ac3", "name": "mistral"}, {"endpoints": [{"url": "http://192.0.3.1:8000/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": 
"public", "region": "regionOne", "region_id": "regionOne", "id": "29f6d67693b2422da3797af84fa584d0"}, {"url": "http://192.0.3.1:8000/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "9d974513a36f4a1cb4c1a909492870f2"}, {"url": "http://192.0.3.1:8000/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "fbb25e17c719472eb5d34cad0238d098"}], "type": "cloudformation", "id": "56cff4af5f114405a3c2f0fc77a22eb3", "name": "heat-cfn"}, {"endpoints": [{"url": "http://192.0.3.1:8888", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "5e779a349b1742aabeebb6722260c17d"}, {"url": "http://192.0.3.1:8888", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "87f59b4dfb0445bca44bf310b77be097"}, {"url": "http://192.0.3.1:8888", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "980bf5c9b80b4111b5ba19dcc5274866"}], "type": "messaging", "id": "6051d4397a684f3daf43f2ec39727c26", "name": "zaqar"}, {"endpoints": [{"url": "http://192.0.3.1:8774/v2.1", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "217c1916df124498a130051b0d2929b3"}, {"url": "http://192.0.3.1:8774/v2.1", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "6e0f74f28b824f979fb5f5cc30bd3c3f"}, {"url": "http://192.0.3.1:8774/v2.1", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "ef43d40f16b24c758abce9b806f3ab04"}], "type": "compute", "id": "6670f1f004934179b4e2d17ac8ac4559", "name": "nova"}, {"endpoints": [{"url": "http://192.0.3.1:9292", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "61c209b4b8f644d191bae26716309f26"}, {"url": "http://192.0.3.1:9292", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "9447a8abbe6b4a6b86bb0299666ba978"}, {"url": 
"http://192.0.3.1:9292", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "dd5cb9ddfe5e496a9ae10f8dc30e3596"}], "type": "image", "id": "8d4ca6bed6b14c2e9ef1634a7f86a1bf", "name": "glance"}, {"endpoints": [{"url": "http://192.0.3.1:6385", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "68862b76576e4797ae9b44e7e920a69d"}, {"url": "http://192.0.3.1:6385", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "9b6360b588564179a2ced0f5fd842e36"}, {"url": "http://192.0.3.1:6385", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "ba8e82ab1d98411f853796bbb04778d4"}], "type": "baremetal", "id": "9f9e76a976564a1e8f0941929009e0ab", "name": "ironic"}, {"endpoints": [{"url": "http://192.0.3.1:8778/placement", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "00bb90f687b4403c8d2d4e5015504ae4"}, {"url": "http://192.0.3.1:8778/placement", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "227bf279774b40a8b6391b570de22a80"}, {"url": "http://192.0.3.1:8778/placement", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "ceaf819496d74a0496c09c9b7c9c0cd4"}], "type": "placement", "id": "ac1c0292ca3a42a1ad0ca09c9a2f2db5", "name": "placement"}, {"endpoints": [{"url": "http://192.0.3.1:5000", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "0716550d71d94a76bb684b55a29bda59"}, {"url": "http://192.0.3.1:35357", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "1d6b1d8c41204fe7a2099501c32b0288"}, {"url": "http://192.0.3.1:5000", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "e375868d7ee04e089d76ac8e49a498e3"}], "type": "identity", "id": "ce6de0f0b70b4955921edafe97432e27", "name": "keystone"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "admin", "id": 
"6e71dffd643e4c24a0efff2673fdac32"}, "audit_ids": ["V45JON0JReCmWjNXiHCKnw"], "issued_at": "2018-06-26T04:25:53.000000Z"}} >2018-06-26 09:55:53,265 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:9696 -H "Accept: application/json" -H "User-Agent: os-client-config/1.29.0 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" >2018-06-26 09:55:53,266 DEBUG: Starting new HTTP connection (1): 192.0.3.1 >2018-06-26 09:55:53,271 DEBUG: http://192.0.3.1:9696 "GET / HTTP/1.1" 200 118 >2018-06-26 09:55:53,272 DEBUG: RESP: [200] Content-Length: 118 Content-Type: application/json Date: Tue, 26 Jun 2018 04:25:53 GMT Connection: keep-alive >RESP BODY: {"versions": [{"status": "CURRENT", "id": "v2.0", "links": [{"href": "http://192.0.3.1:9696/v2.0/", "rel": "self"}]}]} > >2018-06-26 09:55:53,272 DEBUG: REQ: curl -g -i -X GET "http://192.0.3.1:9696/v2.0/networks?name=ctlplane" -H "User-Agent: os-client-config/1.29.0 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}27b25c91cf92010849a30c0a2d34fe40c2d2f419" >2018-06-26 09:55:54,116 DEBUG: http://192.0.3.1:9696 "GET /v2.0/networks?name=ctlplane HTTP/1.1" 200 15 >2018-06-26 09:55:54,117 DEBUG: RESP: [200] Content-Type: application/json Content-Length: 15 X-Openstack-Request-Id: req-64d99dd8-85fb-4ff2-bd35-8d9d09937d99 Date: Tue, 26 Jun 2018 04:25:54 GMT Connection: keep-alive >RESP BODY: {"networks":[]} > >2018-06-26 09:55:54,117 DEBUG: GET call to network for http://192.0.3.1:9696/v2.0/networks?name=ctlplane used request id req-64d99dd8-85fb-4ff2-bd35-8d9d09937d99 >2018-06-26 09:55:54,117 DEBUG: Manager defaults:unknown ran task network.GET.networks in 1.50906586647s >2018-06-26 09:55:54,118 DEBUG: Manager defaults:unknown running task network.POST.networks >2018-06-26 09:55:54,119 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:9696/v2.0/networks -H "User-Agent: os-client-config/1.29.0 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "Content-Type: 
application/json" -H "X-Auth-Token: {SHA1}27b25c91cf92010849a30c0a2d34fe40c2d2f419" -d '{"network": {"mtu": 1500, "name": "ctlplane", "provider:physical_network": "ctlplane", "provider:network_type": "flat"}}' >2018-06-26 09:55:54,505 DEBUG: http://192.0.3.1:9696 "POST /v2.0/networks HTTP/1.1" 201 648 >2018-06-26 09:55:54,506 DEBUG: RESP: [201] Content-Type: application/json Content-Length: 648 X-Openstack-Request-Id: req-c865a0a6-2f48-4f8a-84ec-1b2d535d45f0 Date: Tue, 26 Jun 2018 04:25:54 GMT Connection: keep-alive >RESP BODY: {"network":{"provider:physical_network":"ctlplane","ipv6_address_scope":null,"revision_number":2,"port_security_enabled":true,"provider:network_type":"flat","id":"48742777-a2f8-4d43-915d-297b118c7e21","router:external":false,"availability_zone_hints":[],"availability_zones":[],"ipv4_address_scope":null,"shared":false,"project_id":"13835fbb8e0947a9b3fa174b9a22cdb9","l2_adjacency":true,"status":"ACTIVE","subnets":[],"description":"","tags":[],"updated_at":"2018-06-26T04:25:54Z","provider:segmentation_id":null,"name":"ctlplane","admin_state_up":true,"tenant_id":"13835fbb8e0947a9b3fa174b9a22cdb9","created_at":"2018-06-26T04:25:54Z","mtu":1500}} > >2018-06-26 09:55:54,506 DEBUG: POST call to network for http://192.0.3.1:9696/v2.0/networks used request id req-c865a0a6-2f48-4f8a-84ec-1b2d535d45f0 >2018-06-26 09:55:54,506 DEBUG: Manager defaults:unknown ran task network.POST.networks in 0.387829065323s >2018-06-26 09:55:54,507 INFO: Network created openstack.network.v2.network.Network(provider:physical_network=ctlplane, ipv6_address_scope=None, revision_number=2, port_security_enabled=True, provider:network_type=flat, id=48742777-a2f8-4d43-915d-297b118c7e21, router:external=False, availability_zone_hints=[], availability_zones=[], ipv4_address_scope=None, shared=False, project_id=13835fbb8e0947a9b3fa174b9a22cdb9, status=ACTIVE, subnets=[], description=, tags=[], updated_at=2018-06-26T04:25:54Z, provider:segmentation_id=None, name=ctlplane, 
admin_state_up=True, created_at=2018-06-26T04:25:54Z, mtu=1500) >2018-06-26 09:55:54,507 DEBUG: Manager defaults:unknown running task network.GET.segments >2018-06-26 09:55:54,509 DEBUG: REQ: curl -g -i -X GET "http://192.0.3.1:9696/v2.0/segments?network_id=48742777-a2f8-4d43-915d-297b118c7e21" -H "User-Agent: os-client-config/1.29.0 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}27b25c91cf92010849a30c0a2d34fe40c2d2f419" >2018-06-26 09:55:54,527 DEBUG: http://192.0.3.1:9696 "GET /v2.0/segments?network_id=48742777-a2f8-4d43-915d-297b118c7e21 HTTP/1.1" 200 232 >2018-06-26 09:55:54,527 DEBUG: RESP: [200] Content-Type: application/json Content-Length: 232 X-Openstack-Request-Id: req-b3fa5615-6492-494d-a7e1-62d36404f392 Date: Tue, 26 Jun 2018 04:25:54 GMT Connection: keep-alive >RESP BODY: {"segments": [{"name": null, "network_id": "48742777-a2f8-4d43-915d-297b118c7e21", "segmentation_id": null, "network_type": "flat", "physical_network": "ctlplane", "id": "b6d8d447-5b51-4fed-82b9-5ec124cc8450", "description": null}]} > >2018-06-26 09:55:54,527 DEBUG: GET call to network for http://192.0.3.1:9696/v2.0/segments?network_id=48742777-a2f8-4d43-915d-297b118c7e21 used request id req-b3fa5615-6492-494d-a7e1-62d36404f392 >2018-06-26 09:55:54,527 DEBUG: Manager defaults:unknown ran task network.GET.segments in 0.0198390483856s >2018-06-26 09:55:54,528 DEBUG: Manager defaults:unknown running task network.DELETE.segments >2018-06-26 09:55:54,529 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:9696/v2.0/segments/b6d8d447-5b51-4fed-82b9-5ec124cc8450 -H "User-Agent: os-client-config/1.29.0 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "Accept: " -H "X-Auth-Token: {SHA1}27b25c91cf92010849a30c0a2d34fe40c2d2f419" >2018-06-26 09:55:54,954 DEBUG: http://192.0.3.1:9696 "DELETE /v2.0/segments/b6d8d447-5b51-4fed-82b9-5ec124cc8450 HTTP/1.1" 204 0 >2018-06-26 09:55:54,955 DEBUG: RESP: [204] 
X-Openstack-Request-Id: req-67e80a73-d195-4d46-9dda-c0496c3a24a0 Content-Length: 0 Date: Tue, 26 Jun 2018 04:25:54 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 09:55:54,955 DEBUG: DELETE call to network for http://192.0.3.1:9696/v2.0/segments/b6d8d447-5b51-4fed-82b9-5ec124cc8450 used request id req-67e80a73-d195-4d46-9dda-c0496c3a24a0 >2018-06-26 09:55:54,955 DEBUG: Manager defaults:unknown ran task network.DELETE.segments in 0.427300930023s >2018-06-26 09:55:54,956 INFO: Default segment on network ctlplane deleted. >2018-06-26 09:55:54,956 DEBUG: Manager defaults:unknown running task network.GET.subnets >2018-06-26 09:55:54,958 DEBUG: REQ: curl -g -i -X GET "http://192.0.3.1:9696/v2.0/subnets?network_id=48742777-a2f8-4d43-915d-297b118c7e21&cidr=192.0.3.0%2F24" -H "User-Agent: os-client-config/1.29.0 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}27b25c91cf92010849a30c0a2d34fe40c2d2f419" >2018-06-26 09:55:54,987 DEBUG: http://192.0.3.1:9696 "GET /v2.0/subnets?network_id=48742777-a2f8-4d43-915d-297b118c7e21&cidr=192.0.3.0%2F24 HTTP/1.1" 200 14 >2018-06-26 09:55:54,987 DEBUG: RESP: [200] Content-Type: application/json Content-Length: 14 X-Openstack-Request-Id: req-e8d81341-470b-4e83-a766-def5d4946c7f Date: Tue, 26 Jun 2018 04:25:54 GMT Connection: keep-alive >RESP BODY: {"subnets":[]} > >2018-06-26 09:55:54,987 DEBUG: GET call to network for http://192.0.3.1:9696/v2.0/subnets?network_id=48742777-a2f8-4d43-915d-297b118c7e21&cidr=192.0.3.0%2F24 used request id req-e8d81341-470b-4e83-a766-def5d4946c7f >2018-06-26 09:55:54,987 DEBUG: Manager defaults:unknown ran task network.GET.subnets in 0.030914068222s >2018-06-26 09:55:54,988 DEBUG: Manager defaults:unknown running task network.GET.subnets >2018-06-26 09:55:54,989 DEBUG: REQ: curl -g -i -X GET 
"http://192.0.3.1:9696/v2.0/subnets?network_id=48742777-a2f8-4d43-915d-297b118c7e21&cidr=192.0.3.0%2F24" -H "User-Agent: os-client-config/1.29.0 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}27b25c91cf92010849a30c0a2d34fe40c2d2f419" >2018-06-26 09:55:55,016 DEBUG: http://192.0.3.1:9696 "GET /v2.0/subnets?network_id=48742777-a2f8-4d43-915d-297b118c7e21&cidr=192.0.3.0%2F24 HTTP/1.1" 200 14 >2018-06-26 09:55:55,016 DEBUG: RESP: [200] Content-Type: application/json Content-Length: 14 X-Openstack-Request-Id: req-db1460aa-500d-46c3-bb15-66652d6012f2 Date: Tue, 26 Jun 2018 04:25:55 GMT Connection: keep-alive >RESP BODY: {"subnets":[]} > >2018-06-26 09:55:55,016 DEBUG: GET call to network for http://192.0.3.1:9696/v2.0/subnets?network_id=48742777-a2f8-4d43-915d-297b118c7e21&cidr=192.0.3.0%2F24 used request id req-db1460aa-500d-46c3-bb15-66652d6012f2 >2018-06-26 09:55:55,017 DEBUG: Manager defaults:unknown ran task network.GET.subnets in 0.0283598899841s >2018-06-26 09:55:55,017 DEBUG: Manager defaults:unknown running task network.GET.segments >2018-06-26 09:55:55,018 DEBUG: REQ: curl -g -i -X GET "http://192.0.3.1:9696/v2.0/segments?network_id=48742777-a2f8-4d43-915d-297b118c7e21&physical_network=ctlplane" -H "User-Agent: os-client-config/1.29.0 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}27b25c91cf92010849a30c0a2d34fe40c2d2f419" >2018-06-26 09:55:55,033 DEBUG: http://192.0.3.1:9696 "GET /v2.0/segments?network_id=48742777-a2f8-4d43-915d-297b118c7e21&physical_network=ctlplane HTTP/1.1" 200 16 >2018-06-26 09:55:55,034 DEBUG: RESP: [200] Content-Type: application/json Content-Length: 16 X-Openstack-Request-Id: req-acb418d5-b1a3-4d98-b16c-23f5fc03cf07 Date: Tue, 26 Jun 2018 04:25:55 GMT Connection: keep-alive >RESP BODY: {"segments": []} > >2018-06-26 09:55:55,034 DEBUG: GET call to network for 
http://192.0.3.1:9696/v2.0/segments?network_id=48742777-a2f8-4d43-915d-297b118c7e21&physical_network=ctlplane used request id req-acb418d5-b1a3-4d98-b16c-23f5fc03cf07 >2018-06-26 09:55:55,034 DEBUG: Manager defaults:unknown ran task network.GET.segments in 0.0169010162354s >2018-06-26 09:55:55,034 DEBUG: Manager defaults:unknown running task network.POST.segments >2018-06-26 09:55:55,036 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:9696/v2.0/segments -H "User-Agent: os-client-config/1.29.0 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "Content-Type: application/json" -H "X-Auth-Token: {SHA1}27b25c91cf92010849a30c0a2d34fe40c2d2f419" -d '{"segment": {"network_id": "48742777-a2f8-4d43-915d-297b118c7e21", "physical_network": "ctlplane", "name": "ctlplane-subnet", "network_type": "flat"}}' >2018-06-26 09:55:55,346 DEBUG: http://192.0.3.1:9696 "POST /v2.0/segments HTTP/1.1" 201 242 >2018-06-26 09:55:55,347 DEBUG: RESP: [201] Content-Type: application/json Content-Length: 242 X-Openstack-Request-Id: req-3de01e13-7c2b-4d6f-b240-1c3f89ebe130 Date: Tue, 26 Jun 2018 04:25:55 GMT Connection: keep-alive >RESP BODY: {"segment": {"name": "ctlplane-subnet", "network_id": "48742777-a2f8-4d43-915d-297b118c7e21", "segmentation_id": null, "network_type": "flat", "physical_network": "ctlplane", "id": "3bc164e5-39c0-4f3a-9365-1499c674f635", "description": null}} > >2018-06-26 09:55:55,347 DEBUG: POST call to network for http://192.0.3.1:9696/v2.0/segments used request id req-3de01e13-7c2b-4d6f-b240-1c3f89ebe130 >2018-06-26 09:55:55,347 DEBUG: Manager defaults:unknown ran task network.POST.segments in 0.312657833099s >2018-06-26 09:55:55,347 INFO: Neutron Segment created openstack.network.v2.segment.Segment(name=ctlplane-subnet, network_id=48742777-a2f8-4d43-915d-297b118c7e21, segmentation_id=None, id=3bc164e5-39c0-4f3a-9365-1499c674f635, physical_network=ctlplane, network_type=flat, description=None) >2018-06-26 09:55:55,348 DEBUG: Manager defaults:unknown running task 
network.POST.subnets >2018-06-26 09:55:55,350 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:9696/v2.0/subnets -H "User-Agent: os-client-config/1.29.0 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "Content-Type: application/json" -H "X-Auth-Token: {SHA1}27b25c91cf92010849a30c0a2d34fe40c2d2f419" -d '{"subnet": {"name": "ctlplane-subnet", "enable_dhcp": true, "segment_id": null, "network_id": "48742777-a2f8-4d43-915d-297b118c7e21", "allocation_pools": [{"start": "192.0.3.5", "end": "192.0.3.24"}], "host_routes": [{"nexthop": "192.0.3.1", "destination": "169.254.169.254/32"}], "ip_version": "4", "gateway_ip": "192.0.3.1", "cidr": "192.0.3.0/24"}}' >2018-06-26 09:55:55,697 DEBUG: http://192.0.3.1:9696 "POST /v2.0/subnets HTTP/1.1" 201 689 >2018-06-26 09:55:55,698 DEBUG: RESP: [201] Content-Type: application/json Content-Length: 689 X-Openstack-Request-Id: req-2cd23bce-2d47-4769-8fb9-bd2049c10b90 Date: Tue, 26 Jun 2018 04:25:55 GMT Connection: keep-alive >RESP BODY: {"subnet":{"updated_at":"2018-06-26T04:25:55Z","ipv6_ra_mode":null,"allocation_pools":[{"start":"192.0.3.5","end":"192.0.3.24"}],"host_routes":[{"destination":"169.254.169.254/32","nexthop":"192.0.3.1"}],"revision_number":0,"ipv6_address_mode":null,"id":"332dbcc3-3d16-4e17-bcf5-1aed566bcee7","dns_nameservers":[],"gateway_ip":"192.0.3.1","project_id":"13835fbb8e0947a9b3fa174b9a22cdb9","description":"","tags":[],"cidr":"192.0.3.0/24","subnetpool_id":null,"service_types":[],"name":"ctlplane-subnet","enable_dhcp":true,"segment_id":null,"network_id":"48742777-a2f8-4d43-915d-297b118c7e21","tenant_id":"13835fbb8e0947a9b3fa174b9a22cdb9","created_at":"2018-06-26T04:25:55Z","ip_version":4}} > >2018-06-26 09:55:55,698 DEBUG: POST call to network for http://192.0.3.1:9696/v2.0/subnets used request id req-2cd23bce-2d47-4769-8fb9-bd2049c10b90 >2018-06-26 09:55:55,698 DEBUG: Manager defaults:unknown ran task network.POST.subnets in 0.349817037582s >2018-06-26 09:55:55,699 INFO: Subnet created 
openstack.network.v2.subnet.Subnet(service_types=[], description=, enable_dhcp=True, tags=[], network_id=48742777-a2f8-4d43-915d-297b118c7e21, tenant_id=13835fbb8e0947a9b3fa174b9a22cdb9, created_at=2018-06-26T04:25:55Z, segment_id=None, dns_nameservers=[], updated_at=2018-06-26T04:25:55Z, gateway_ip=192.0.3.1, ipv6_ra_mode=None, allocation_pools=[{u'start': u'192.0.3.5', u'end': u'192.0.3.24'}], host_routes=[{u'nexthop': u'192.0.3.1', u'destination': u'169.254.169.254/32'}], revision_number=0, ip_version=4, ipv6_address_mode=None, cidr=192.0.3.0/24, id=332dbcc3-3d16-4e17-bcf5-1aed566bcee7, subnetpool_id=None, name=ctlplane-subnet) >2018-06-26 09:55:56,097 INFO: Generated new ssh key in ~/.ssh/id_rsa >2018-06-26 09:55:56,099 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8774/v2.1/os-keypairs/default -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" >2018-06-26 09:55:56,100 DEBUG: Starting new HTTP connection (1): 192.0.3.1 >2018-06-26 09:56:01,806 DEBUG: http://192.0.3.1:8774 "GET /v2.1/os-keypairs/default HTTP/1.1" 404 113 >2018-06-26 09:56:01,808 DEBUG: RESP: [404] Date: Tue, 26 Jun 2018 04:25:56 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version x-openstack-request-id: req-4901e026-efaf-4503-8a83-c7ebb7670f16 x-compute-request-id: req-4901e026-efaf-4503-8a83-c7ebb7670f16 Content-Length: 113 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: application/json; charset=UTF-8 >RESP BODY: {"itemNotFound": {"message": "Keypair default not found for user 6e71dffd643e4c24a0efff2673fdac32", "code": 404}} > >2018-06-26 09:56:01,808 DEBUG: GET call to compute for http://192.0.3.1:8774/v2.1/os-keypairs/default used request id req-4901e026-efaf-4503-8a83-c7ebb7670f16 >2018-06-26 09:56:01,810 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8774/v2.1/os-keypairs -H "User-Agent: 
python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '{"keypair": {"public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAnRvrF8qTXSZberCM0HevnZssGDRXpXNMBGnB+94RdZaQWMLBWRPbCacBPwKg+gBhN+B4PfWXFI8+wtJj0ED0/nD3coxMtUUvO8aM0it7Wiof3vG09P+J6wkFeah9I/RxWqa2tHVM20aiIyv4J9i+F0xQNtaJcEOG2AaEoZzOul1zFlkOf7QskMf4RcqxJStOorTCX29zEB79NwL2cO8rMLefQkNlCVF9k2lmtgDFPBkIN6eqwVl+BcgjxRYyjZEOrZyI7ZpMmay09x9XGEzUj9JC+Bf1DZltmoPz/8lQp3QvGCSI23PnpQC8tTDCAnvV358mkCZX+l8vftPU/hSH sudheer@facebook.local.com", "name": "default"}}' >2018-06-26 09:56:07,476 DEBUG: http://192.0.3.1:8774 "POST /v2.1/os-keypairs HTTP/1.1" 200 478 >2018-06-26 09:56:07,478 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 04:26:01 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-a13b81b7-94a6-4322-8232-880165154351 x-compute-request-id: req-a13b81b7-94a6-4322-8232-880165154351 Content-Encoding: gzip Content-Length: 478 Keep-Alive: timeout=15, max=99 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"keypair": {"public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAnRvrF8qTXSZberCM0HevnZssGDRXpXNMBGnB+94RdZaQWMLBWRPbCacBPwKg+gBhN+B4PfWXFI8+wtJj0ED0/nD3coxMtUUvO8aM0it7Wiof3vG09P+J6wkFeah9I/RxWqa2tHVM20aiIyv4J9i+F0xQNtaJcEOG2AaEoZzOul1zFlkOf7QskMf4RcqxJStOorTCX29zEB79NwL2cO8rMLefQkNlCVF9k2lmtgDFPBkIN6eqwVl+BcgjxRYyjZEOrZyI7ZpMmay09x9XGEzUj9JC+Bf1DZltmoPz/8lQp3QvGCSI23PnpQC8tTDCAnvV358mkCZX+l8vftPU/hSH sudheer@facebook.local.com", "user_id": "6e71dffd643e4c24a0efff2673fdac32", "name": "default", "fingerprint": "c6:1c:5d:f7:80:25:f9:b2:e8:66:6d:da:8e:95:fc:a7"}} > >2018-06-26 09:56:07,478 DEBUG: POST call to compute for http://192.0.3.1:8774/v2.1/os-keypairs used request id req-a13b81b7-94a6-4322-8232-880165154351 >2018-06-26 09:56:07,501 DEBUG: REQ: curl -g -i -X GET 
http://192.0.3.1:8774/v2.1/flavors/detail -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" >2018-06-26 09:56:13,142 DEBUG: http://192.0.3.1:8774 "GET /v2.1/flavors/detail HTTP/1.1" 200 15 >2018-06-26 09:56:13,143 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 04:26:07 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version x-openstack-request-id: req-7e37a2d5-78f8-48b9-a069-92abcd1d4ff1 x-compute-request-id: req-7e37a2d5-78f8-48b9-a069-92abcd1d4ff1 Content-Length: 15 Keep-Alive: timeout=15, max=98 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"flavors": []} > >2018-06-26 09:56:13,143 DEBUG: GET call to compute for http://192.0.3.1:8774/v2.1/flavors/detail used request id req-7e37a2d5-78f8-48b9-a069-92abcd1d4ff1 >2018-06-26 09:56:13,144 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:6385/v1/nodes/?fields=uuid,resource_class -H "X-OpenStack-Ironic-API-Version: 1.21" -H "User-Agent: python-ironicclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" >2018-06-26 09:56:13,145 DEBUG: Starting new HTTP connection (1): 192.0.3.1 >2018-06-26 09:56:15,146 DEBUG: http://192.0.3.1:6385 "GET /v1/nodes/?fields=uuid,resource_class HTTP/1.1" 200 13 >2018-06-26 09:56:15,147 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 04:26:13 GMT Server: Apache X-OpenStack-Ironic-API-Minimum-Version: 1.1 X-OpenStack-Ironic-API-Maximum-Version: 1.38 X-OpenStack-Ironic-API-Version: 1.21 Openstack-Request-Id: req-9d50c9b7-83a1-4713-83fb-18cd6bf844a3 Content-Length: 13 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"nodes": []} > >2018-06-26 09:56:15,148 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8774/v2.1/flavors/detail -H "User-Agent: python-novaclient" -H "Accept: 
application/json" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" >2018-06-26 09:56:15,320 DEBUG: http://192.0.3.1:8774 "GET /v2.1/flavors/detail HTTP/1.1" 200 15 >2018-06-26 09:56:15,321 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 04:26:15 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version x-openstack-request-id: req-fa85f547-d345-45b3-96b8-c0a1937b22cb x-compute-request-id: req-fa85f547-d345-45b3-96b8-c0a1937b22cb Content-Length: 15 Keep-Alive: timeout=15, max=97 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"flavors": []} > >2018-06-26 09:56:15,321 DEBUG: GET call to compute for http://192.0.3.1:8774/v2.1/flavors/detail used request id req-fa85f547-d345-45b3-96b8-c0a1937b22cb >2018-06-26 09:56:15,322 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8774/v2.1/flavors -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '{"flavor": {"vcpus": 1, "disk": 40, "name": "baremetal", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "ram": 4096, "id": null, "swap": 0}}' >2018-06-26 09:56:20,961 DEBUG: http://192.0.3.1:8774 "POST /v2.1/flavors HTTP/1.1" 200 280 >2018-06-26 09:56:20,962 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 04:26:15 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-bbaffefd-bca7-4a84-920d-93d502977207 x-compute-request-id: req-bbaffefd-bca7-4a84-920d-93d502977207 Content-Encoding: gzip Content-Length: 280 Keep-Alive: timeout=15, max=96 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"flavor": {"name": "baremetal", "links": [{"href": "http://192.0.3.1:8774/v2.1/flavors/eb2b4c19-4dc3-4219-a407-921d5349dee3", "rel": 
"self"}, {"href": "http://192.0.3.1:8774/flavors/eb2b4c19-4dc3-4219-a407-921d5349dee3", "rel": "bookmark"}], "ram": 4096, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 40, "id": "eb2b4c19-4dc3-4219-a407-921d5349dee3"}} > >2018-06-26 09:56:20,962 DEBUG: POST call to compute for http://192.0.3.1:8774/v2.1/flavors used request id req-bbaffefd-bca7-4a84-920d-93d502977207 >2018-06-26 09:56:20,964 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8774/v2.1/flavors/eb2b4c19-4dc3-4219-a407-921d5349dee3/os-extra_specs -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '{"extra_specs": {"resources:CUSTOM_BAREMETAL": "1", "resources:DISK_GB": "0", "capabilities:boot_option": "local", "resources:MEMORY_MB": "0", "resources:VCPU": "0"}}' >2018-06-26 09:56:26,857 DEBUG: http://192.0.3.1:8774 "POST /v2.1/flavors/eb2b4c19-4dc3-4219-a407-921d5349dee3/os-extra_specs HTTP/1.1" 200 136 >2018-06-26 09:56:26,858 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 04:26:20 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-fe0ce0ae-1a9c-401a-b6d0-150a647c19d1 x-compute-request-id: req-fe0ce0ae-1a9c-401a-b6d0-150a647c19d1 Content-Encoding: gzip Content-Length: 136 Keep-Alive: timeout=15, max=95 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"extra_specs": {"resources:CUSTOM_BAREMETAL": "1", "resources:DISK_GB": "0", "capabilities:boot_option": "local", "resources:MEMORY_MB": "0", "resources:VCPU": "0"}} > >2018-06-26 09:56:26,858 DEBUG: POST call to compute for http://192.0.3.1:8774/v2.1/flavors/eb2b4c19-4dc3-4219-a407-921d5349dee3/os-extra_specs used request id req-fe0ce0ae-1a9c-401a-b6d0-150a647c19d1 >2018-06-26 
09:56:26,858 INFO: Created flavor "baremetal" with profile "None" >2018-06-26 09:56:26,859 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8774/v2.1/flavors -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '{"flavor": {"vcpus": 1, "disk": 40, "name": "control", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "ram": 4096, "id": null, "swap": 0}}' >2018-06-26 09:56:32,603 DEBUG: http://192.0.3.1:8774 "POST /v2.1/flavors HTTP/1.1" 200 281 >2018-06-26 09:56:32,603 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 04:26:26 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-0ee72a25-aca2-489a-86ab-134d3561b368 x-compute-request-id: req-0ee72a25-aca2-489a-86ab-134d3561b368 Content-Encoding: gzip Content-Length: 281 Keep-Alive: timeout=15, max=94 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"flavor": {"name": "control", "links": [{"href": "http://192.0.3.1:8774/v2.1/flavors/75e8eb94-aee7-482a-80b3-d97ac8e2fb47", "rel": "self"}, {"href": "http://192.0.3.1:8774/flavors/75e8eb94-aee7-482a-80b3-d97ac8e2fb47", "rel": "bookmark"}], "ram": 4096, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 40, "id": "75e8eb94-aee7-482a-80b3-d97ac8e2fb47"}} > >2018-06-26 09:56:32,603 DEBUG: POST call to compute for http://192.0.3.1:8774/v2.1/flavors used request id req-0ee72a25-aca2-489a-86ab-134d3561b368 >2018-06-26 09:56:32,605 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8774/v2.1/flavors/75e8eb94-aee7-482a-80b3-d97ac8e2fb47/os-extra_specs -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: 
{SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '{"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "control", "resources:MEMORY_MB": "0", "resources:VCPU": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:DISK_GB": "0"}}' >2018-06-26 09:56:32,665 DEBUG: http://192.0.3.1:8774 "POST /v2.1/flavors/75e8eb94-aee7-482a-80b3-d97ac8e2fb47/os-extra_specs HTTP/1.1" 200 149 >2018-06-26 09:56:32,666 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 04:26:32 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-2076485d-8d44-47ae-92f7-958a27f61e6f x-compute-request-id: req-2076485d-8d44-47ae-92f7-958a27f61e6f Content-Encoding: gzip Content-Length: 149 Keep-Alive: timeout=15, max=93 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "control", "resources:MEMORY_MB": "0", "resources:DISK_GB": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:VCPU": "0"}} > >2018-06-26 09:56:32,666 DEBUG: POST call to compute for http://192.0.3.1:8774/v2.1/flavors/75e8eb94-aee7-482a-80b3-d97ac8e2fb47/os-extra_specs used request id req-2076485d-8d44-47ae-92f7-958a27f61e6f >2018-06-26 09:56:32,667 INFO: Created flavor "control" with profile "control" >2018-06-26 09:56:32,668 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8774/v2.1/flavors -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '{"flavor": {"vcpus": 1, "disk": 40, "name": "compute", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "ram": 4096, "id": null, "swap": 0}}' >2018-06-26 09:56:32,710 DEBUG: http://192.0.3.1:8774 "POST /v2.1/flavors HTTP/1.1" 200 281 >2018-06-26 09:56:32,711 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 04:26:32 
GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-ca260e6d-cbc0-4157-820f-ebf801d7cf5f x-compute-request-id: req-ca260e6d-cbc0-4157-820f-ebf801d7cf5f Content-Encoding: gzip Content-Length: 281 Keep-Alive: timeout=15, max=92 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"flavor": {"name": "compute", "links": [{"href": "http://192.0.3.1:8774/v2.1/flavors/eca892fc-d33a-408d-9611-e9fee658ce88", "rel": "self"}, {"href": "http://192.0.3.1:8774/flavors/eca892fc-d33a-408d-9611-e9fee658ce88", "rel": "bookmark"}], "ram": 4096, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 40, "id": "eca892fc-d33a-408d-9611-e9fee658ce88"}} > >2018-06-26 09:56:32,711 DEBUG: POST call to compute for http://192.0.3.1:8774/v2.1/flavors used request id req-ca260e6d-cbc0-4157-820f-ebf801d7cf5f >2018-06-26 09:56:32,713 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8774/v2.1/flavors/eca892fc-d33a-408d-9611-e9fee658ce88/os-extra_specs -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '{"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "compute", "resources:MEMORY_MB": "0", "resources:VCPU": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:DISK_GB": "0"}}' >2018-06-26 09:56:32,752 DEBUG: http://192.0.3.1:8774 "POST /v2.1/flavors/eca892fc-d33a-408d-9611-e9fee658ce88/os-extra_specs HTTP/1.1" 200 150 >2018-06-26 09:56:32,753 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 04:26:32 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: 
req-01cec9c8-880b-4799-91e2-8d2ff985140a x-compute-request-id: req-01cec9c8-880b-4799-91e2-8d2ff985140a Content-Encoding: gzip Content-Length: 150 Keep-Alive: timeout=15, max=91 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "compute", "resources:MEMORY_MB": "0", "resources:DISK_GB": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:VCPU": "0"}} > >2018-06-26 09:56:32,753 DEBUG: POST call to compute for http://192.0.3.1:8774/v2.1/flavors/eca892fc-d33a-408d-9611-e9fee658ce88/os-extra_specs used request id req-01cec9c8-880b-4799-91e2-8d2ff985140a >2018-06-26 09:56:32,753 INFO: Created flavor "compute" with profile "compute" >2018-06-26 09:56:32,754 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8774/v2.1/flavors -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '{"flavor": {"vcpus": 1, "disk": 40, "name": "ceph-storage", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "ram": 4096, "id": null, "swap": 0}}' >2018-06-26 09:56:32,781 DEBUG: http://192.0.3.1:8774 "POST /v2.1/flavors HTTP/1.1" 200 285 >2018-06-26 09:56:32,781 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 04:26:32 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-2b5109fb-a0b5-457f-b5d1-97e4bc21a378 x-compute-request-id: req-2b5109fb-a0b5-457f-b5d1-97e4bc21a378 Content-Encoding: gzip Content-Length: 285 Keep-Alive: timeout=15, max=90 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"flavor": {"name": "ceph-storage", "links": [{"href": "http://192.0.3.1:8774/v2.1/flavors/0cfb511e-5a16-435a-9a69-6982cebe033a", "rel": "self"}, {"href": "http://192.0.3.1:8774/flavors/0cfb511e-5a16-435a-9a69-6982cebe033a", 
"rel": "bookmark"}], "ram": 4096, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 40, "id": "0cfb511e-5a16-435a-9a69-6982cebe033a"}} > >2018-06-26 09:56:32,782 DEBUG: POST call to compute for http://192.0.3.1:8774/v2.1/flavors used request id req-2b5109fb-a0b5-457f-b5d1-97e4bc21a378 >2018-06-26 09:56:32,783 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8774/v2.1/flavors/0cfb511e-5a16-435a-9a69-6982cebe033a/os-extra_specs -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '{"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "ceph-storage", "resources:MEMORY_MB": "0", "resources:VCPU": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:DISK_GB": "0"}}' >2018-06-26 09:56:32,822 DEBUG: http://192.0.3.1:8774 "POST /v2.1/flavors/0cfb511e-5a16-435a-9a69-6982cebe033a/os-extra_specs HTTP/1.1" 200 153 >2018-06-26 09:56:32,823 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 04:26:32 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-3c4969a7-cbd1-4468-85cc-e41775755810 x-compute-request-id: req-3c4969a7-cbd1-4468-85cc-e41775755810 Content-Encoding: gzip Content-Length: 153 Keep-Alive: timeout=15, max=89 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "ceph-storage", "resources:MEMORY_MB": "0", "resources:DISK_GB": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:VCPU": "0"}} > >2018-06-26 09:56:32,823 DEBUG: POST call to compute for http://192.0.3.1:8774/v2.1/flavors/0cfb511e-5a16-435a-9a69-6982cebe033a/os-extra_specs used request id req-3c4969a7-cbd1-4468-85cc-e41775755810 >2018-06-26 09:56:32,823 
INFO: Created flavor "ceph-storage" with profile "ceph-storage" >2018-06-26 09:56:32,824 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8774/v2.1/flavors -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '{"flavor": {"vcpus": 1, "disk": 40, "name": "block-storage", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "ram": 4096, "id": null, "swap": 0}}' >2018-06-26 09:56:32,870 DEBUG: http://192.0.3.1:8774 "POST /v2.1/flavors HTTP/1.1" 200 286 >2018-06-26 09:56:32,871 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 04:26:32 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-43a1dad9-3d7e-420b-b45b-cdfeab832a64 x-compute-request-id: req-43a1dad9-3d7e-420b-b45b-cdfeab832a64 Content-Encoding: gzip Content-Length: 286 Keep-Alive: timeout=15, max=88 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"flavor": {"name": "block-storage", "links": [{"href": "http://192.0.3.1:8774/v2.1/flavors/bbfe7233-396d-4aa2-b008-5b64ea0e7329", "rel": "self"}, {"href": "http://192.0.3.1:8774/flavors/bbfe7233-396d-4aa2-b008-5b64ea0e7329", "rel": "bookmark"}], "ram": 4096, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 40, "id": "bbfe7233-396d-4aa2-b008-5b64ea0e7329"}} > >2018-06-26 09:56:32,871 DEBUG: POST call to compute for http://192.0.3.1:8774/v2.1/flavors used request id req-43a1dad9-3d7e-420b-b45b-cdfeab832a64 >2018-06-26 09:56:32,872 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8774/v2.1/flavors/bbfe7233-396d-4aa2-b008-5b64ea0e7329/os-extra_specs -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: 
{SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '{"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "block-storage", "resources:MEMORY_MB": "0", "resources:VCPU": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:DISK_GB": "0"}}' >2018-06-26 09:56:32,908 DEBUG: http://192.0.3.1:8774 "POST /v2.1/flavors/bbfe7233-396d-4aa2-b008-5b64ea0e7329/os-extra_specs HTTP/1.1" 200 154 >2018-06-26 09:56:32,909 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 04:26:32 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-eb445a31-7eca-40a8-b75e-067a0025149e x-compute-request-id: req-eb445a31-7eca-40a8-b75e-067a0025149e Content-Encoding: gzip Content-Length: 154 Keep-Alive: timeout=15, max=87 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "block-storage", "resources:MEMORY_MB": "0", "resources:DISK_GB": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:VCPU": "0"}} > >2018-06-26 09:56:32,909 DEBUG: POST call to compute for http://192.0.3.1:8774/v2.1/flavors/bbfe7233-396d-4aa2-b008-5b64ea0e7329/os-extra_specs used request id req-eb445a31-7eca-40a8-b75e-067a0025149e >2018-06-26 09:56:32,909 INFO: Created flavor "block-storage" with profile "block-storage" >2018-06-26 09:56:32,910 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8774/v2.1/flavors -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '{"flavor": {"vcpus": 1, "disk": 40, "name": "swift-storage", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "ram": 4096, "id": null, "swap": 0}}' >2018-06-26 09:56:32,944 DEBUG: http://192.0.3.1:8774 "POST /v2.1/flavors HTTP/1.1" 200 284 >2018-06-26 09:56:32,945 DEBUG: RESP: [200] 
Date: Tue, 26 Jun 2018 04:26:32 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-674a69dc-d965-45c0-b41f-73ed46291e9a x-compute-request-id: req-674a69dc-d965-45c0-b41f-73ed46291e9a Content-Encoding: gzip Content-Length: 284 Keep-Alive: timeout=15, max=86 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"flavor": {"name": "swift-storage", "links": [{"href": "http://192.0.3.1:8774/v2.1/flavors/9149f7f2-27d8-46ba-b434-3115be9b3078", "rel": "self"}, {"href": "http://192.0.3.1:8774/flavors/9149f7f2-27d8-46ba-b434-3115be9b3078", "rel": "bookmark"}], "ram": 4096, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 40, "id": "9149f7f2-27d8-46ba-b434-3115be9b3078"}} > >2018-06-26 09:56:32,945 DEBUG: POST call to compute for http://192.0.3.1:8774/v2.1/flavors used request id req-674a69dc-d965-45c0-b41f-73ed46291e9a >2018-06-26 09:56:32,947 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8774/v2.1/flavors/9149f7f2-27d8-46ba-b434-3115be9b3078/os-extra_specs -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '{"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "swift-storage", "resources:MEMORY_MB": "0", "resources:VCPU": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:DISK_GB": "0"}}' >2018-06-26 09:56:32,981 DEBUG: http://192.0.3.1:8774 "POST /v2.1/flavors/9149f7f2-27d8-46ba-b434-3115be9b3078/os-extra_specs HTTP/1.1" 200 154 >2018-06-26 09:56:32,982 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 04:26:32 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding 
x-openstack-request-id: req-92c79902-0d5d-4e87-8734-f3f371374726 x-compute-request-id: req-92c79902-0d5d-4e87-8734-f3f371374726 Content-Encoding: gzip Content-Length: 154 Keep-Alive: timeout=15, max=85 Connection: Keep-Alive Content-Type: application/json
>RESP BODY: {"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "swift-storage", "resources:MEMORY_MB": "0", "resources:DISK_GB": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:VCPU": "0"}}
>
>2018-06-26 09:56:32,982 DEBUG: POST call to compute for http://192.0.3.1:8774/v2.1/flavors/9149f7f2-27d8-46ba-b434-3115be9b3078/os-extra_specs used request id req-92c79902-0d5d-4e87-8734-f3f371374726
>2018-06-26 09:56:32,982 INFO: Created flavor "swift-storage" with profile "swift-storage"
>2018-06-26 09:56:32,982 INFO: Configuring Mistral workbooks
>2018-06-26 09:56:32,983 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf"
>2018-06-26 09:56:32,984 DEBUG: Starting new HTTP connection (1): 192.0.3.1
>2018-06-26 09:56:33,628 DEBUG: http://192.0.3.1:8989 "GET /v2/workbooks HTTP/1.1" 200 17
>2018-06-26 09:56:33,628 DEBUG: RESP: [200] Content-Length: 17 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:33 GMT Connection: keep-alive
>RESP BODY: {"workbooks": []}
>
>2018-06-26 09:56:33,629 DEBUG: HTTP GET http://192.0.3.1:8989/v2/workbooks 200
>2018-06-26 09:56:33,629 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8989/v2/workflows -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf"
>2018-06-26 09:56:33,640 DEBUG: http://192.0.3.1:8989 "GET /v2/workflows HTTP/1.1" 200 3608
>2018-06-26 09:56:33,641 DEBUG: RESP: [200] Content-Length: 3608 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:33 GMT Connection: keep-alive
>RESP BODY:
{"workflows": [{"definition": "---\nversion: '2.0'\n\nstd.create_instance:\n type: direct\n\n description: |\n Creates VM and waits till VM OS is up and running.\n\n input:\n - name\n - image_id\n - flavor_id\n - ssh_username: null\n - ssh_password: null\n\n # Name of previously created keypair to inject into the instance.\n # Either ssh credentials or keypair must be provided.\n - key_name: null\n\n # Security_groups: A list of security group names\n - security_groups: null\n\n # An ordered list of nics to be added to this server, with information about connected networks, fixed IPs, port etc.\n # Example: nics: [{\"net-id\": \"27aa8c1c-d6b8-4474-b7f7-6cdcf63ac856\"}]\n - nics: null\n\n task-defaults:\n on-error:\n - delete_vm\n\n output:\n ip: <% $.vm_ip %>\n id: <% $.vm_id %>\n name: <% $.name %>\n status: <% $.status %>\n\n tasks:\n create_vm:\n description: Initial request to create a VM.\n action: nova.servers_create name=<% $.name %> image=<% $.image_id %> flavor=<% $.flavor_id %>\n input:\n key_name: <% $.key_name %>\n security_groups: <% $.security_groups %>\n nics: <% $.nics %>\n publish:\n vm_id: <% task(create_vm).result.id %>\n on-success:\n - search_for_ip\n\n search_for_ip:\n description: Gets first free ip from Nova floating IPs.\n action: nova.floating_ips_findall instance_id=null\n publish:\n vm_ip: <% task(search_for_ip).result[0].ip %>\n on-success:\n - wait_vm_active\n\n wait_vm_active:\n description: Waits till VM is ACTIVE.\n action: nova.servers_find id=<% $.vm_id %> status=\"ACTIVE\"\n retry:\n count: 10\n delay: 10\n publish:\n status: <% task(wait_vm_active).result.status %>\n on-success:\n - associate_ip\n\n associate_ip:\n description: Associate server with one of floating IPs.\n action: nova.servers_add_floating_ip server=<% $.vm_id %> address=<% $.vm_ip %>\n wait-after: 5\n on-success:\n - wait_ssh\n\n wait_ssh:\n description: Wait till operating system on the VM is up (SSH command).\n action: std.wait_ssh username=<% $.ssh_username 
%> password=<% $.ssh_password %> host=<% $.vm_ip %>\n retry:\n count: 10\n delay: 10\n\n delete_vm:\n description: Destroy VM.\n workflow: std.delete_instance instance_id=<% $.vm_id %>\n on-complete:\n - fail\n", "name": "std.create_instance", "tags": [], "created_at": "2018-06-26 04:25:00", "namespace": "", "updated_at": null, "scope": "public", "input": "name, image_id, flavor_id, ssh_username=None, ssh_password=None, key_name=None, security_groups=None, nics=None", "project_id": "<default-project>", "id": "979c1978-9c28-4006-a999-6caf1de2eca6"}, {"definition": "---\nversion: \"2.0\"\n\nstd.delete_instance:\n type: direct\n\n input:\n - instance_id\n\n description: Deletes VM.\n\n tasks:\n delete_vm:\n description: Destroy VM.\n action: nova.servers_delete server=<% $.instance_id %>\n wait-after: 10\n on-success:\n - find_given_vm\n\n find_given_vm:\n description: Checks that VM is already deleted.\n action: nova.servers_find id=<% $.instance_id %>\n on-error:\n - succeed\n\n", "name": "std.delete_instance", "tags": [], "created_at": "2018-06-26 04:25:00", "namespace": "", "updated_at": null, "scope": "public", "input": "instance_id", "project_id": "<default-project>", "id": "dd5e4509-fe27-4df5-82c2-6557eb8857c8"}]} > >2018-06-26 09:56:33,641 DEBUG: HTTP GET http://192.0.3.1:8989/v2/workflows 200 >2018-06-26 09:56:33,641 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8989/v2/cron_triggers -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" >2018-06-26 09:56:33,653 DEBUG: http://192.0.3.1:8989 "GET /v2/cron_triggers HTTP/1.1" 200 21 >2018-06-26 09:56:33,653 DEBUG: RESP: [200] Content-Length: 21 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:33 GMT Connection: keep-alive >RESP BODY: {"cron_triggers": []} > >2018-06-26 09:56:33,653 DEBUG: HTTP GET http://192.0.3.1:8989/v2/cron_triggers 200 >2018-06-26 09:56:33,672 DEBUG: REQ: curl -g -i -X POST 
http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '--- >version: '2.0' >name: tripleo.access.v1 >description: TripleO administration access workflows > >workflows: > > enable_ssh_admin: > description: >- > This workflow creates an admin user on the overcloud nodes, > which can then be used for connecting for automated > administrative or deployment tasks, e.g. via Ansible. The > workflow can be used both for Nova-managed and split-stack > deployments, assuming the correct input values are passed > in. The workflow defaults to Nova-managed approach, for which no > additional parameters need to be supplied. In case of > split-stack, temporary ssh connection details (user, key, list > of servers) need to be provided -- these are only used > temporarily to create the actual ssh admin user for use by > Mistral. > tags: > - tripleo-common-managed > input: > - ssh_private_key: null > - ssh_user: null > - ssh_servers: [] > - overcloud_admin: tripleo-admin > - queue_name: tripleo > tasks: > get_pubkey: > action: tripleo.validations.get_pubkey > on-success: generate_playbook > publish: > pubkey: <% task().result %> > > generate_playbook: > on-success: > - create_admin_via_nova: <% $.ssh_private_key = null %> > - create_admin_via_ssh: <% $.ssh_private_key != null %> > publish: > create_admin_tasks: > - name: create user <% $.overcloud_admin %> > user: > name: '<% $.overcloud_admin %>' > - name: grant admin rights to user <% $.overcloud_admin %> > copy: > dest: /etc/sudoers.d/<% $.overcloud_admin %> > content: | > <% $.overcloud_admin %> ALL=(ALL) NOPASSWD:ALL > mode: 0440 > - name: ensure .ssh dir exists for user <% $.overcloud_admin %> > file: > path: /home/<% $.overcloud_admin %>/.ssh > state: directory > owner: <% $.overcloud_admin %> > group: <% $.overcloud_admin %> > mode: 0700 > - name: ensure authorized_keys 
file exists for user <% $.overcloud_admin %> > file: > path: /home/<% $.overcloud_admin %>/.ssh/authorized_keys > state: touch > owner: <% $.overcloud_admin %> > group: <% $.overcloud_admin %> > mode: 0700 > - name: authorize TripleO Mistral key for user <% $.overcloud_admin %> > lineinfile: > path: /home/<% $.overcloud_admin %>/.ssh/authorized_keys > line: <% $.pubkey %> > regexp: "Generated by TripleO" > > # Nova variant > create_admin_via_nova: > workflow: tripleo.access.v1.create_admin_via_nova > input: > queue_name: <% $.queue_name %> > ssh_servers: <% $.ssh_servers %> > tasks: <% $.create_admin_tasks %> > overcloud_admin: <% $.overcloud_admin %> > > # SSH variant > create_admin_via_ssh: > workflow: tripleo.access.v1.create_admin_via_ssh > input: > ssh_private_key: <% $.ssh_private_key %> > ssh_user: <% $.ssh_user %> > ssh_servers: <% $.ssh_servers %> > tasks: <% $.create_admin_tasks %> > > create_admin_via_nova: > input: > - tasks > - queue_name: tripleo > - ssh_servers: [] > - overcloud_admin: tripleo-admin > - ansible_extra_env_variables: > ANSIBLE_HOST_KEY_CHECKING: 'False' > tags: > - tripleo-common-managed > tasks: > get_servers: > action: nova.servers_list > on-success: create_admin > publish: > servers: <% let(root => $) -> task().result._info.where($.addresses.ctlplane.addr.any($ in $root.ssh_servers)) %> > > create_admin: > workflow: tripleo.deployment.v1.deploy_on_server > on-success: get_privkey > with-items: server in <% $.servers %> > input: > server_name: <% $.server.name %> > server_uuid: <% $.server.id %> > queue_name: <% $.queue_name %> > config_name: create_admin > group: ansible > config: | > - hosts: localhost > connection: local > tasks: <% json_pp($.tasks) %> > > get_privkey: > action: tripleo.validations.get_privkey > on-success: wait_for_occ > publish: > privkey: <% task().result %> > > wait_for_occ: > action: tripleo.ansible-playbook > input: > inventory: > overcloud: > hosts: <% $.ssh_servers.toDict($, {}) %> > remote_user: <% 
$.overcloud_admin %> > ssh_private_key: <% $.privkey %> > extra_env_variables: <% $.ansible_extra_env_variables %> > playbook: > - hosts: overcloud > gather_facts: no > tasks: > - name: wait for connection > wait_for_connection: > sleep: 5 > timeout: 300 > > create_admin_via_ssh: > input: > - tasks > - ssh_private_key > - ssh_user > - ssh_servers > - ansible_extra_env_variables: > ANSIBLE_HOST_KEY_CHECKING: 'False' > > tags: > - tripleo-common-managed > tasks: > write_tmp_playbook: > action: tripleo.ansible-playbook > input: > inventory: > overcloud: > hosts: <% $.ssh_servers.toDict($, {}) %> > remote_user: <% $.ssh_user %> > ssh_private_key: <% $.ssh_private_key %> > extra_env_variables: <% $.ansible_extra_env_variables %> > become: true > become_user: root > playbook: > - hosts: overcloud > tasks: <% $.tasks %> >' >2018-06-26 09:56:33,863 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 6130 >2018-06-26 09:56:33,864 DEBUG: RESP: [201] Content-Length: 6130 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:33 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.access.v1\ndescription: TripleO administration access workflows\n\nworkflows:\n\n enable_ssh_admin:\n description: >-\n This workflow creates an admin user on the overcloud nodes,\n which can then be used for connecting for automated\n administrative or deployment tasks, e.g. via Ansible. The\n workflow can be used both for Nova-managed and split-stack\n deployments, assuming the correct input values are passed\n in. The workflow defaults to Nova-managed approach, for which no\n additional parameters need to be supplied. 
In case of\n split-stack, temporary ssh connection details (user, key, list\n of servers) need to be provided -- these are only used\n temporarily to create the actual ssh admin user for use by\n Mistral.\n tags:\n - tripleo-common-managed\n input:\n - ssh_private_key: null\n - ssh_user: null\n - ssh_servers: []\n - overcloud_admin: tripleo-admin\n - queue_name: tripleo\n tasks:\n get_pubkey:\n action: tripleo.validations.get_pubkey\n on-success: generate_playbook\n publish:\n pubkey: <% task().result %>\n\n generate_playbook:\n on-success:\n - create_admin_via_nova: <% $.ssh_private_key = null %>\n - create_admin_via_ssh: <% $.ssh_private_key != null %>\n publish:\n create_admin_tasks:\n - name: create user <% $.overcloud_admin %>\n user:\n name: '<% $.overcloud_admin %>'\n - name: grant admin rights to user <% $.overcloud_admin %>\n copy:\n dest: /etc/sudoers.d/<% $.overcloud_admin %>\n content: |\n <% $.overcloud_admin %> ALL=(ALL) NOPASSWD:ALL\n mode: 0440\n - name: ensure .ssh dir exists for user <% $.overcloud_admin %>\n file:\n path: /home/<% $.overcloud_admin %>/.ssh\n state: directory\n owner: <% $.overcloud_admin %>\n group: <% $.overcloud_admin %>\n mode: 0700\n - name: ensure authorized_keys file exists for user <% $.overcloud_admin %>\n file:\n path: /home/<% $.overcloud_admin %>/.ssh/authorized_keys\n state: touch\n owner: <% $.overcloud_admin %>\n group: <% $.overcloud_admin %>\n mode: 0700\n - name: authorize TripleO Mistral key for user <% $.overcloud_admin %>\n lineinfile:\n path: /home/<% $.overcloud_admin %>/.ssh/authorized_keys\n line: <% $.pubkey %>\n regexp: \"Generated by TripleO\"\n\n # Nova variant\n create_admin_via_nova:\n workflow: tripleo.access.v1.create_admin_via_nova\n input:\n queue_name: <% $.queue_name %>\n ssh_servers: <% $.ssh_servers %>\n tasks: <% $.create_admin_tasks %>\n overcloud_admin: <% $.overcloud_admin %>\n\n # SSH variant\n create_admin_via_ssh:\n workflow: tripleo.access.v1.create_admin_via_ssh\n input:\n 
ssh_private_key: <% $.ssh_private_key %>\n ssh_user: <% $.ssh_user %>\n ssh_servers: <% $.ssh_servers %>\n tasks: <% $.create_admin_tasks %>\n\n create_admin_via_nova:\n input:\n - tasks\n - queue_name: tripleo\n - ssh_servers: []\n - overcloud_admin: tripleo-admin\n - ansible_extra_env_variables:\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n tags:\n - tripleo-common-managed\n tasks:\n get_servers:\n action: nova.servers_list\n on-success: create_admin\n publish:\n servers: <% let(root => $) -> task().result._info.where($.addresses.ctlplane.addr.any($ in $root.ssh_servers)) %>\n\n create_admin:\n workflow: tripleo.deployment.v1.deploy_on_server\n on-success: get_privkey\n with-items: server in <% $.servers %>\n input:\n server_name: <% $.server.name %>\n server_uuid: <% $.server.id %>\n queue_name: <% $.queue_name %>\n config_name: create_admin\n group: ansible\n config: |\n - hosts: localhost\n connection: local\n tasks: <% json_pp($.tasks) %>\n\n get_privkey:\n action: tripleo.validations.get_privkey\n on-success: wait_for_occ\n publish:\n privkey: <% task().result %>\n\n wait_for_occ:\n action: tripleo.ansible-playbook\n input:\n inventory:\n overcloud:\n hosts: <% $.ssh_servers.toDict($, {}) %>\n remote_user: <% $.overcloud_admin %>\n ssh_private_key: <% $.privkey %>\n extra_env_variables: <% $.ansible_extra_env_variables %>\n playbook:\n - hosts: overcloud\n gather_facts: no\n tasks:\n - name: wait for connection\n wait_for_connection:\n sleep: 5\n timeout: 300\n\n create_admin_via_ssh:\n input:\n - tasks\n - ssh_private_key\n - ssh_user\n - ssh_servers\n - ansible_extra_env_variables:\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n\n tags:\n - tripleo-common-managed\n tasks:\n write_tmp_playbook:\n action: tripleo.ansible-playbook\n input:\n inventory:\n overcloud:\n hosts: <% $.ssh_servers.toDict($, {}) %>\n remote_user: <% $.ssh_user %>\n ssh_private_key: <% $.ssh_private_key %>\n extra_env_variables: <% $.ansible_extra_env_variables %>\n become: true\n become_user: root\n 
playbook:\n - hosts: overcloud\n tasks: <% $.tasks %>\n", "name": "tripleo.access.v1", "tags": [], "created_at": "2018-06-26 04:26:33", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "291b24e1-7f5e-4738-92c0-8ef3a5201974"} > >2018-06-26 09:56:33,864 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 09:56:33,865 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '--- >version: '2.0' >name: tripleo.stack.v1 >description: TripleO Stack Workflows > >workflows: > > wait_for_stack_complete_or_failed: > input: > - stack > - timeout: 14400 # 4 hours. Default timeout of stack deployment > > tags: > - tripleo-common-managed > > tasks: > > wait_for_stack_status: > action: heat.stacks_get stack_id=<% $.stack %> > timeout: <% $.timeout %> > retry: > delay: 15 > count: <% $.timeout / 15 %> > continue-on: <% task().result.stack_status in ['CREATE_IN_PROGRESS', 'UPDATE_IN_PROGRESS', 'DELETE_IN_PROGRESS'] %> > > wait_for_stack_in_progress: > input: > - stack > - timeout: 600 # 10 minutes. 
Should not take much longer for a stack to transition to IN_PROGRESS > > tags: > - tripleo-common-managed > > tasks: > > wait_for_stack_status: > action: heat.stacks_get stack_id=<% $.stack %> > timeout: <% $.timeout %> > retry: > delay: 15 > count: <% $.timeout / 15 %> > continue-on: <% task().result.stack_status in ['CREATE_COMPLETE', 'CREATE_FAILED', 'UPDATE_COMPLETE', 'UPDATE_FAILED', 'DELETE_FAILED'] %> > > wait_for_stack_does_not_exist: > input: > - stack > - timeout: 3600 > > tags: > - tripleo-common-managed > > tasks: > wait_for_stack_does_not_exist: > action: heat.stacks_list > timeout: <% $.timeout %> > retry: > delay: 15 > count: <% $.timeout / 15 %> > continue-on: <% $.stack in task(wait_for_stack_does_not_exist).result.select([$.stack_name, $.id]).flatten() %> > > delete_stack: > input: > - stack > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > delete_the_stack: > action: heat.stacks_delete stack_id=<% $.stack %> > on-success: wait_for_stack_does_not_exist > on-error: delete_the_stack_failed > > delete_the_stack_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(delete_the_stack).result %> > > wait_for_stack_does_not_exist: > workflow: tripleo.stack.v1.wait_for_stack_does_not_exist stack=<% $.stack %> > on-success: send_message > on-error: wait_for_stack_does_not_exist_failed > > wait_for_stack_does_not_exist_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(wait_for_stack_does_not_exist).result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.scale.v1.delete_stack > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> >' >2018-06-26 09:56:34,069 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 3236 >2018-06-26 09:56:34,070 
DEBUG: RESP: [201] Content-Length: 3236 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:34 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.stack.v1\ndescription: TripleO Stack Workflows\n\nworkflows:\n\n wait_for_stack_complete_or_failed:\n input:\n - stack\n - timeout: 14400 # 4 hours. Default timeout of stack deployment\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n wait_for_stack_status:\n action: heat.stacks_get stack_id=<% $.stack %>\n timeout: <% $.timeout %>\n retry:\n delay: 15\n count: <% $.timeout / 15 %>\n continue-on: <% task().result.stack_status in ['CREATE_IN_PROGRESS', 'UPDATE_IN_PROGRESS', 'DELETE_IN_PROGRESS'] %>\n\n wait_for_stack_in_progress:\n input:\n - stack\n - timeout: 600 # 10 minutes. Should not take much longer for a stack to transition to IN_PROGRESS\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n wait_for_stack_status:\n action: heat.stacks_get stack_id=<% $.stack %>\n timeout: <% $.timeout %>\n retry:\n delay: 15\n count: <% $.timeout / 15 %>\n continue-on: <% task().result.stack_status in ['CREATE_COMPLETE', 'CREATE_FAILED', 'UPDATE_COMPLETE', 'UPDATE_FAILED', 'DELETE_FAILED'] %>\n\n wait_for_stack_does_not_exist:\n input:\n - stack\n - timeout: 3600\n\n tags:\n - tripleo-common-managed\n\n tasks:\n wait_for_stack_does_not_exist:\n action: heat.stacks_list\n timeout: <% $.timeout %>\n retry:\n delay: 15\n count: <% $.timeout / 15 %>\n continue-on: <% $.stack in task(wait_for_stack_does_not_exist).result.select([$.stack_name, $.id]).flatten() %>\n\n delete_stack:\n input:\n - stack\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n delete_the_stack:\n action: heat.stacks_delete stack_id=<% $.stack %>\n on-success: wait_for_stack_does_not_exist\n on-error: delete_the_stack_failed\n\n delete_the_stack_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(delete_the_stack).result %>\n\n wait_for_stack_does_not_exist:\n 
workflow: tripleo.stack.v1.wait_for_stack_does_not_exist stack=<% $.stack %>\n on-success: send_message\n on-error: wait_for_stack_does_not_exist_failed\n\n wait_for_stack_does_not_exist_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(wait_for_stack_does_not_exist).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.scale.v1.delete_stack\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.stack.v1", "tags": [], "created_at": "2018-06-26 04:26:34", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "6bbb5f09-3cbe-4b9c-bd26-9093251781c9"} > >2018-06-26 09:56:34,070 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 09:56:34,071 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '--- >version: '2.0' >name: tripleo.validations.v1 >description: TripleO Validations Workflows v1 > >workflows: > > run_validation: > input: > - validation_name > - plan: overcloud > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > notify_running: > on-complete: run_validation > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.run_validation > payload: > validation_name: <% $.validation_name %> > plan: <% $.plan %> > status: RUNNING > execution: <% execution() %> > > run_validation: > on-success: send_message > on-error: set_status_failed > action: tripleo.validations.run_validation validation=<% $.validation_name %> plan=<% $.plan %> > publish: > status: SUCCESS 
> stdout: <% task().result.stdout %> > stderr: <% task().result.stderr %> > > set_status_failed: > on-complete: send_message > publish: > status: FAILED > stdout: <% task(run_validation).result.stdout %> > stderr: <% task(run_validation).result.stderr %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.run_validation > payload: > validation_name: <% $.validation_name %> > plan: <% $.plan %> > status: <% $.get('status', 'SUCCESS') %> > stdout: <% $.stdout %> > stderr: <% $.stderr %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > run_validations: > input: > - validation_names: [] > - plan: overcloud > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > notify_running: > on-complete: run_validations > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.run_validations > payload: > validation_names: <% $.validation_names %> > plan: <% $.plan %> > status: RUNNING > execution: <% execution() %> > > run_validations: > on-success: send_message > on-error: set_status_failed > workflow: tripleo.validations.v1.run_validation validation_name=<% $.validation %> plan=<% $.plan %> queue_name=<% $.queue_name %> > with-items: validation in <% $.validation_names %> > publish: > status: SUCCESS > > set_status_failed: > on-complete: send_message > publish: > status: FAILED > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.run_validations > payload: > validation_names: <% $.validation_names %> > plan: <% $.plan %> > status: <% $.get('status', 'SUCCESS') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > run_groups: > input: > - group_names: [] > - plan: overcloud > 
- queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > find_validations: > on-success: notify_running > action: tripleo.validations.list_validations groups=<% $.group_names %> > publish: > validations: <% task().result %> > > notify_running: > on-complete: run_validation_group > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.run_validations > payload: > group_names: <% $.group_names %> > validation_names: <% $.validations.id %> > plan: <% $.plan %> > status: RUNNING > execution: <% execution() %> > > run_validation_group: > on-success: send_message > on-error: set_status_failed > workflow: tripleo.validations.v1.run_validation validation_name=<% $.validation %> plan=<% $.plan %> queue_name=<% $.queue_name %> > with-items: validation in <% $.validations.id %> > publish: > status: SUCCESS > > set_status_failed: > on-complete: send_message > publish: > status: FAILED > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.run_groups > payload: > group_names: <% $.group_names %> > validation_names: <% $.validations.id %> > plan: <% $.plan %> > status: <% $.get('status', 'SUCCESS') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > list: > input: > - group_names: [] > tags: > - tripleo-common-managed > tasks: > find_validations: > action: tripleo.validations.list_validations groups=<% $.group_names %> > > list_groups: > tags: > - tripleo-common-managed > tasks: > find_groups: > action: tripleo.validations.list_groups > > add_validation_ssh_key_parameter: > input: > - container > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > test_validations_enabled: > action: tripleo.validations.enabled > on-success: get_pubkey > on-error: unset_validation_key_parameter > > get_pubkey: > action: 
tripleo.validations.get_pubkey > on-success: set_validation_key_parameter > publish: > pubkey: <% task().result %> > > set_validation_key_parameter: > action: tripleo.parameters.update > input: > parameters: > node_admin_extra_ssh_keys: <% $.pubkey %> > container: <% $.container %> > > # NOTE(shadower): We need to clear keys from a previous deployment > unset_validation_key_parameter: > action: tripleo.parameters.update > input: > parameters: > node_admin_extra_ssh_keys: "" > container: <% $.container %> > > copy_ssh_key: > input: > # FIXME: we should stop using heat-admin as e.g. split-stack > # environments (where Nova didn't create overcloud nodes) don't > # have it present > - overcloud_admin: heat-admin > - queue_name: tripleo > tags: > - tripleo-common-managed > tasks: > get_servers: > action: nova.servers_list > on-success: get_pubkey > publish: > servers: <% task().result._info %> > > get_pubkey: > action: tripleo.validations.get_pubkey > on-success: deploy_ssh_key > publish: > pubkey: <% task().result %> > > deploy_ssh_key: > workflow: tripleo.deployment.v1.deploy_on_server > with-items: server in <% $.servers %> > input: > server_name: <% $.server.name %> > server_uuid: <% $.server.id %> > config: | > #!/bin/bash > if ! 
grep "<% $.pubkey %>" /home/<% $.overcloud_admin %>/.ssh/authorized_keys; then > echo "<% $.pubkey %>" >> /home/<% $.overcloud_admin %>/.ssh/authorized_keys > fi > config_name: copy_ssh_key > group: script > queue_name: <% $.queue_name %> > > check_boot_images: > input: > - deploy_kernel_name: 'bm-deploy-kernel' > - deploy_ramdisk_name: 'bm-deploy-ramdisk' > - run_validations: true > - queue_name: tripleo > output: > errors: <% $.errors %> > warnings: <% $.warnings %> > kernel_id: <% $.kernel_id %> > ramdisk_id: <% $.ramdisk_id %> > tags: > - tripleo-common-managed > tasks: > check_run_validations: > on-complete: > - get_images: <% $.run_validations %> > - send_message: <% not $.run_validations %> > > get_images: > action: glance.images_list > on-success: check_images > publish: > images: <% task().result %> > > check_images: > action: tripleo.validations.check_boot_images > input: > images: <% $.images %> > deploy_kernel_name: <% $.deploy_kernel_name %> > deploy_ramdisk_name: <% $.deploy_ramdisk_name %> > on-success: send_message > publish: > kernel_id: <% task().result.kernel_id %> > ramdisk_id: <% task().result.ramdisk_id %> > warnings: <% task().result.warnings %> > errors: <% task().result.errors %> > on-error: send_message > publish-on-error: > kernel_id: <% task().result.kernel_id %> > ramdisk_id: <% task().result.ramdisk_id %> > warnings: <% task().result.warnings %> > errors: <% task().result.errors %> > status: FAILED > message: <% task().result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.check_boot_images > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > kernel_id: <% $.kernel_id %> > ramdisk_id: <% $.ramdisk_id %> > errors: <% $.errors %> > warnings: <% $.warnings %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > collect_flavors: > input: > - 
roles_info: {} > - run_validations: true > - queue_name: tripleo > output: > errors: <% $.errors %> > warnings: <% $.warnings %> > flavors: <% $.flavors %> > > tags: > - tripleo-common-managed > > tasks: > check_run_validations: > on-complete: > - check_flavors: <% $.run_validations %> > - send_message: <% not $.run_validations %> > > check_flavors: > action: tripleo.validations.check_flavors > input: > roles_info: <% $.roles_info %> > on-success: send_message > publish: > flavors: <% task().result.flavors %> > errors: <% task().result.errors %> > warnings: <% task().result.warnings %> > on-error: send_message > publish-on-error: > flavors: {} > errors: <% task().result.errors %> > warnings: <% task().result.warnings %> > status: FAILED > message: <% task().result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.collect_flavors > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > flavors: <% $.flavors %> > errors: <% $.errors %> > warnings: <% $.warnings %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > check_ironic_boot_configuration: > input: > - kernel_id: null > - ramdisk_id: null > - run_validations: true > - queue_name: tripleo > output: > errors: <% $.errors %> > warnings: <% $.warnings %> > > tags: > - tripleo-common-managed > > tasks: > check_run_validations: > on-complete: > - get_ironic_nodes: <% $.run_validations %> > - send_message: <% not $.run_validations %> > > get_ironic_nodes: > action: ironic.node_list > input: > provision_state: available > maintenance: false > detail: true > on-success: check_node_boot_configuration > publish: > nodes: <% task().result %> > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > check_node_boot_configuration: > action: tripleo.validations.check_node_boot_configuration > 
input: > node: <% $.node %> > kernel_id: <% $.kernel_id %> > ramdisk_id: <% $.ramdisk_id %> > with-items: node in <% $.nodes %> > on-success: send_message > publish: > errors: <% task().result.errors.flatten() %> > warnings: <% task().result.warnings.flatten() %> > on-error: send_message > publish-on-error: > errors: <% task().result.errors.flatten() %> > warnings: <% task().result.warnings.flatten() %> > status: FAILED > message: <% task().result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.check_ironic_boot_configuration > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > errors: <% $.errors %> > warnings: <% $.warnings %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > verify_profiles: > input: > - flavors: [] > - run_validations: true > - queue_name: tripleo > output: > errors: <% $.errors %> > warnings: <% $.warnings %> > > tags: > - tripleo-common-managed > > tasks: > check_run_validations: > on-complete: > - get_ironic_nodes: <% $.run_validations %> > - send_message: <% not $.run_validations %> > > get_ironic_nodes: > action: ironic.node_list > input: > maintenance: false > detail: true > on-success: verify_profiles > publish: > nodes: <% task().result %> > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > verify_profiles: > action: tripleo.validations.verify_profiles > input: > nodes: <% $.nodes %> > flavors: <% $.flavors %> > on-success: send_message > publish: > errors: <% task().result.errors %> > warnings: <% task().result.warnings %> > on-error: send_message > publish-on-error: > errors: <% task().result.errors %> > warnings: <% task().result.warnings %> > status: FAILED > message: <% task().result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% 
$.queue_name %> > messages: > body: > type: tripleo.validations.v1.verify_profiles > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > errors: <% $.errors %> > warnings: <% $.warnings %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > check_default_nodes_count: > input: > - stack_id: overcloud > - parameters: {} > - default_role_counts: {} > - run_validations: true > - queue_name: tripleo > output: > statistics: <% $.statistics %> > errors: <% $.errors %> > warnings: <% $.warnings %> > > tags: > - tripleo-common-managed > > tasks: > check_run_validations: > on-complete: > - get_hypervisor_statistics: <% $.run_validations %> > - send_message: <% not $.run_validations %> > > get_hypervisor_statistics: > action: nova.hypervisors_statistics > on-success: get_stack > publish: > statistics: <% task().result %> > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > errors: [] > warnings: [] > statistics: null > > get_stack: > action: heat.stacks_get > input: > stack_id: <% $.stack_id %> > on-success: get_associated_nodes > publish: > stack: <% task().result %> > on-error: get_associated_nodes > publish-on-error: > stack: null > > get_associated_nodes: > action: ironic.node_list > input: > associated: true > on-success: get_available_nodes > publish: > associated_nodes: <% task().result %> > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > errors: [] > warnings: [] > > get_available_nodes: > action: ironic.node_list > input: > provision_state: available > associated: false > maintenance: false > on-success: check_nodes_count > publish: > available_nodes: <% task().result %> > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > errors: [] > warnings: [] > > check_nodes_count: > action: tripleo.validations.check_nodes_count > input: > statistics: <% $.statistics %> > 
stack: <% $.stack %> > associated_nodes: <% $.associated_nodes %> > available_nodes: <% $.available_nodes %> > parameters: <% $.parameters %> > default_role_counts: <% $.default_role_counts %> > on-success: send_message > publish: > errors: <% task().result.errors %> > warnings: <% task().result.warnings %> > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > statistics: null > errors: <% task().result.errors %> > warnings: <% task().result.warnings %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.check_hypervisor_stats > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > statistics: <% $.statistics %> > errors: <% $.errors %> > warnings: <% $.warnings %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > check_pre_deployment_validations: > input: > - deploy_kernel_name: 'bm-deploy-kernel' > - deploy_ramdisk_name: 'bm-deploy-ramdisk' > - roles_info: {} > - stack_id: overcloud > - parameters: {} > - default_role_counts: {} > - run_validations: true > - queue_name: tripleo > > output: > errors: <% $.errors %> > warnings: <% $.warnings %> > kernel_id: <% $.kernel_id %> > ramdisk_id: <% $.ramdisk_id %> > flavors: <% $.flavors %> > statistics: <% $.statistics %> > tags: > - tripleo-common-managed > tasks: > init_messages: > on-success: check_boot_images > publish: > errors: [] > warnings: [] > > check_boot_images: > workflow: check_boot_images > input: > deploy_kernel_name: <% $.deploy_kernel_name %> > deploy_ramdisk_name: <% $.deploy_ramdisk_name %> > run_validations: <% $.run_validations %> > queue_name: <% $.queue_name %> > publish: > errors: <% $.errors + task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > kernel_id: <% task().result.get('kernel_id') %> > ramdisk_id: <% 
task().result.get('ramdisk_id') %> > publish-on-error: > errors: <% $.errors + task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > kernel_id: <% task().result.get('kernel_id') %> > ramdisk_id: <% task().result.get('ramdisk_id') %> > status: FAILED > on-success: collect_flavors > on-error: collect_flavors > > collect_flavors: > workflow: collect_flavors > input: > roles_info: <% $.roles_info %> > run_validations: <% $.run_validations %> > queue_name: <% $.queue_name %> > publish: > errors: <% $.errors + task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > flavors: <% task().result.get('flavors') %> > publish-on-error: > errors: <% $.errors + task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > flavors: <% task().result.get('flavors') %> > status: FAILED > on-success: check_ironic_boot_configuration > on-error: check_ironic_boot_configuration > > check_ironic_boot_configuration: > workflow: check_ironic_boot_configuration > input: > kernel_id: <% $.kernel_id %> > ramdisk_id: <% $.ramdisk_id %> > run_validations: <% $.run_validations %> > queue_name: <% $.queue_name %> > publish: > errors: <% $.errors + task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > publish-on-error: > errors: <% $.errors + task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > status: FAILED > on-success: check_default_nodes_count > on-error: check_default_nodes_count > > check_default_nodes_count: > workflow: check_default_nodes_count > # ironic-nova sync happens once in two minutes > retry: count=12 delay=10 > input: > stack_id: <% $.stack_id %> > parameters: <% $.parameters %> > default_role_counts: <% $.default_role_counts %> > run_validations: <% $.run_validations %> > queue_name: <% $.queue_name %> > publish: > errors: <% $.errors + 
task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > statistics: <% task().result.get('statistics') %> > publish-on-error: > errors: <% $.errors + task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > statistics: <% task().result.get('statistics') %> > status: FAILED > on-success: verify_profiles > # Do not confuse user with info about profiles if the nodes > # count is off in the first place. Skip directly to > # send_message. (bug 1703942) > on-error: send_message > > verify_profiles: > workflow: verify_profiles > input: > flavors: <% $.flavors %> > run_validations: <% $.run_validations %> > queue_name: <% $.queue_name %> > publish: > errors: <% $.errors + task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > publish-on-error: > errors: <% $.errors + task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > status: FAILED > on-success: send_message > on-error: send_message > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.check_hypervisor_stats > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > kernel_id: <% $.kernel_id %> > ramdisk_id: <% $.ramdisk_id %> > flavors: <% $.flavors %> > statistics: <% $.statistics %> > errors: <% $.errors %> > warnings: <% $.warnings %> > on-success: > - fail: <% $.get('status') = "FAILED" %> >' >2018-06-26 09:56:35,481 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 25434 >2018-06-26 09:56:35,521 DEBUG: RESP: [201] Content-Length: 25434 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:35 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.validations.v1\ndescription: TripleO Validations Workflows 
v1\n\nworkflows:\n\n run_validation:\n input:\n - validation_name\n - plan: overcloud\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n notify_running:\n on-complete: run_validation\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validation\n payload:\n validation_name: <% $.validation_name %>\n plan: <% $.plan %>\n status: RUNNING\n execution: <% execution() %>\n\n run_validation:\n on-success: send_message\n on-error: set_status_failed\n action: tripleo.validations.run_validation validation=<% $.validation_name %> plan=<% $.plan %>\n publish:\n status: SUCCESS\n stdout: <% task().result.stdout %>\n stderr: <% task().result.stderr %>\n\n set_status_failed:\n on-complete: send_message\n publish:\n status: FAILED\n stdout: <% task(run_validation).result.stdout %>\n stderr: <% task(run_validation).result.stderr %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validation\n payload:\n validation_name: <% $.validation_name %>\n plan: <% $.plan %>\n status: <% $.get('status', 'SUCCESS') %>\n stdout: <% $.stdout %>\n stderr: <% $.stderr %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n run_validations:\n input:\n - validation_names: []\n - plan: overcloud\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n notify_running:\n on-complete: run_validations\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validations\n payload:\n validation_names: <% $.validation_names %>\n plan: <% $.plan %>\n status: RUNNING\n execution: <% execution() %>\n\n run_validations:\n on-success: send_message\n on-error: set_status_failed\n workflow: tripleo.validations.v1.run_validation 
validation_name=<% $.validation %> plan=<% $.plan %> queue_name=<% $.queue_name %>\n with-items: validation in <% $.validation_names %>\n publish:\n status: SUCCESS\n\n set_status_failed:\n on-complete: send_message\n publish:\n status: FAILED\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validations\n payload:\n validation_names: <% $.validation_names %>\n plan: <% $.plan %>\n status: <% $.get('status', 'SUCCESS') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n run_groups:\n input:\n - group_names: []\n - plan: overcloud\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n find_validations:\n on-success: notify_running\n action: tripleo.validations.list_validations groups=<% $.group_names %>\n publish:\n validations: <% task().result %>\n\n notify_running:\n on-complete: run_validation_group\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validations\n payload:\n group_names: <% $.group_names %>\n validation_names: <% $.validations.id %>\n plan: <% $.plan %>\n status: RUNNING\n execution: <% execution() %>\n\n run_validation_group:\n on-success: send_message\n on-error: set_status_failed\n workflow: tripleo.validations.v1.run_validation validation_name=<% $.validation %> plan=<% $.plan %> queue_name=<% $.queue_name %>\n with-items: validation in <% $.validations.id %>\n publish:\n status: SUCCESS\n\n set_status_failed:\n on-complete: send_message\n publish:\n status: FAILED\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_groups\n payload:\n group_names: <% $.group_names %>\n validation_names: <% $.validations.id %>\n plan: <% $.plan %>\n status: <% 
$.get('status', 'SUCCESS') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n list:\n input:\n - group_names: []\n tags:\n - tripleo-common-managed\n tasks:\n find_validations:\n action: tripleo.validations.list_validations groups=<% $.group_names %>\n\n list_groups:\n tags:\n - tripleo-common-managed\n tasks:\n find_groups:\n action: tripleo.validations.list_groups\n\n add_validation_ssh_key_parameter:\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n test_validations_enabled:\n action: tripleo.validations.enabled\n on-success: get_pubkey\n on-error: unset_validation_key_parameter\n\n get_pubkey:\n action: tripleo.validations.get_pubkey\n on-success: set_validation_key_parameter\n publish:\n pubkey: <% task().result %>\n\n set_validation_key_parameter:\n action: tripleo.parameters.update\n input:\n parameters:\n node_admin_extra_ssh_keys: <% $.pubkey %>\n container: <% $.container %>\n\n # NOTE(shadower): We need to clear keys from a previous deployment\n unset_validation_key_parameter:\n action: tripleo.parameters.update\n input:\n parameters:\n node_admin_extra_ssh_keys: \"\"\n container: <% $.container %>\n\n copy_ssh_key:\n input:\n # FIXME: we should stop using heat-admin as e.g. split-stack\n # environments (where Nova didn't create overcloud nodes) don't\n # have it present\n - overcloud_admin: heat-admin\n - queue_name: tripleo\n tags:\n - tripleo-common-managed\n tasks:\n get_servers:\n action: nova.servers_list\n on-success: get_pubkey\n publish:\n servers: <% task().result._info %>\n\n get_pubkey:\n action: tripleo.validations.get_pubkey\n on-success: deploy_ssh_key\n publish:\n pubkey: <% task().result %>\n\n deploy_ssh_key:\n workflow: tripleo.deployment.v1.deploy_on_server\n with-items: server in <% $.servers %>\n input:\n server_name: <% $.server.name %>\n server_uuid: <% $.server.id %>\n config: |\n #!/bin/bash\n if ! 
grep \"<% $.pubkey %>\" /home/<% $.overcloud_admin %>/.ssh/authorized_keys; then\n echo \"<% $.pubkey %>\" >> /home/<% $.overcloud_admin %>/.ssh/authorized_keys\n fi\n config_name: copy_ssh_key\n group: script\n queue_name: <% $.queue_name %>\n\n check_boot_images:\n input:\n - deploy_kernel_name: 'bm-deploy-kernel'\n - deploy_ramdisk_name: 'bm-deploy-ramdisk'\n - run_validations: true\n - queue_name: tripleo\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n tags:\n - tripleo-common-managed\n tasks:\n check_run_validations:\n on-complete:\n - get_images: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n get_images:\n action: glance.images_list\n on-success: check_images\n publish:\n images: <% task().result %>\n\n check_images:\n action: tripleo.validations.check_boot_images\n input:\n images: <% $.images %>\n deploy_kernel_name: <% $.deploy_kernel_name %>\n deploy_ramdisk_name: <% $.deploy_ramdisk_name %>\n on-success: send_message\n publish:\n kernel_id: <% task().result.kernel_id %>\n ramdisk_id: <% task().result.ramdisk_id %>\n warnings: <% task().result.warnings %>\n errors: <% task().result.errors %>\n on-error: send_message\n publish-on-error:\n kernel_id: <% task().result.kernel_id %>\n ramdisk_id: <% task().result.ramdisk_id %>\n warnings: <% task().result.warnings %>\n errors: <% task().result.errors %>\n status: FAILED\n message: <% task().result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.check_boot_images\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n collect_flavors:\n input:\n 
- roles_info: {}\n - run_validations: true\n - queue_name: tripleo\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n flavors: <% $.flavors %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_run_validations:\n on-complete:\n - check_flavors: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n check_flavors:\n action: tripleo.validations.check_flavors\n input:\n roles_info: <% $.roles_info %>\n on-success: send_message\n publish:\n flavors: <% task().result.flavors %>\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n on-error: send_message\n publish-on-error:\n flavors: {}\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n status: FAILED\n message: <% task().result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.collect_flavors\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n flavors: <% $.flavors %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n check_ironic_boot_configuration:\n input:\n - kernel_id: null\n - ramdisk_id: null\n - run_validations: true\n - queue_name: tripleo\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_run_validations:\n on-complete:\n - get_ironic_nodes: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n get_ironic_nodes:\n action: ironic.node_list\n input:\n provision_state: available\n maintenance: false\n detail: true\n on-success: check_node_boot_configuration\n publish:\n nodes: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n check_node_boot_configuration:\n action: tripleo.validations.check_node_boot_configuration\n 
input:\n node: <% $.node %>\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n with-items: node in <% $.nodes %>\n on-success: send_message\n publish:\n errors: <% task().result.errors.flatten() %>\n warnings: <% task().result.warnings.flatten() %>\n on-error: send_message\n publish-on-error:\n errors: <% task().result.errors.flatten() %>\n warnings: <% task().result.warnings.flatten() %>\n status: FAILED\n message: <% task().result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.check_ironic_boot_configuration\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n verify_profiles:\n input:\n - flavors: []\n - run_validations: true\n - queue_name: tripleo\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_run_validations:\n on-complete:\n - get_ironic_nodes: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n get_ironic_nodes:\n action: ironic.node_list\n input:\n maintenance: false\n detail: true\n on-success: verify_profiles\n publish:\n nodes: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n verify_profiles:\n action: tripleo.validations.verify_profiles\n input:\n nodes: <% $.nodes %>\n flavors: <% $.flavors %>\n on-success: send_message\n publish:\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n on-error: send_message\n publish-on-error:\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n status: FAILED\n message: <% task().result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% 
$.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.verify_profiles\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n check_default_nodes_count:\n input:\n - stack_id: overcloud\n - parameters: {}\n - default_role_counts: {}\n - run_validations: true\n - queue_name: tripleo\n output:\n statistics: <% $.statistics %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_run_validations:\n on-complete:\n - get_hypervisor_statistics: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n get_hypervisor_statistics:\n action: nova.hypervisors_statistics\n on-success: get_stack\n publish:\n statistics: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n errors: []\n warnings: []\n statistics: null\n\n get_stack:\n action: heat.stacks_get\n input:\n stack_id: <% $.stack_id %>\n on-success: get_associated_nodes\n publish:\n stack: <% task().result %>\n on-error: get_associated_nodes\n publish-on-error:\n stack: null\n\n get_associated_nodes:\n action: ironic.node_list\n input:\n associated: true\n on-success: get_available_nodes\n publish:\n associated_nodes: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n errors: []\n warnings: []\n\n get_available_nodes:\n action: ironic.node_list\n input:\n provision_state: available\n associated: false\n maintenance: false\n on-success: check_nodes_count\n publish:\n available_nodes: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n errors: []\n warnings: []\n\n check_nodes_count:\n action: tripleo.validations.check_nodes_count\n input:\n statistics: <% $.statistics 
%>\n stack: <% $.stack %>\n associated_nodes: <% $.associated_nodes %>\n available_nodes: <% $.available_nodes %>\n parameters: <% $.parameters %>\n default_role_counts: <% $.default_role_counts %>\n on-success: send_message\n publish:\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n statistics: null\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.check_hypervisor_stats\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n statistics: <% $.statistics %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n check_pre_deployment_validations:\n input:\n - deploy_kernel_name: 'bm-deploy-kernel'\n - deploy_ramdisk_name: 'bm-deploy-ramdisk'\n - roles_info: {}\n - stack_id: overcloud\n - parameters: {}\n - default_role_counts: {}\n - run_validations: true\n - queue_name: tripleo\n\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n flavors: <% $.flavors %>\n statistics: <% $.statistics %>\n tags:\n - tripleo-common-managed\n tasks:\n init_messages:\n on-success: check_boot_images\n publish:\n errors: []\n warnings: []\n\n check_boot_images:\n workflow: check_boot_images\n input:\n deploy_kernel_name: <% $.deploy_kernel_name %>\n deploy_ramdisk_name: <% $.deploy_ramdisk_name %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n kernel_id: <% task().result.get('kernel_id') %>\n ramdisk_id: <% 
task().result.get('ramdisk_id') %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n kernel_id: <% task().result.get('kernel_id') %>\n ramdisk_id: <% task().result.get('ramdisk_id') %>\n status: FAILED\n on-success: collect_flavors\n on-error: collect_flavors\n\n collect_flavors:\n workflow: collect_flavors\n input:\n roles_info: <% $.roles_info %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n flavors: <% task().result.get('flavors') %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n flavors: <% task().result.get('flavors') %>\n status: FAILED\n on-success: check_ironic_boot_configuration\n on-error: check_ironic_boot_configuration\n\n check_ironic_boot_configuration:\n workflow: check_ironic_boot_configuration\n input:\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n status: FAILED\n on-success: check_default_nodes_count\n on-error: check_default_nodes_count\n\n check_default_nodes_count:\n workflow: check_default_nodes_count\n # ironic-nova sync happens once in two minutes\n retry: count=12 delay=10\n input:\n stack_id: <% $.stack_id %>\n parameters: <% $.parameters %>\n default_role_counts: <% $.default_role_counts %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% $.errors + 
task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n statistics: <% task().result.get('statistics') %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n statistics: <% task().result.get('statistics') %>\n status: FAILED\n on-success: verify_profiles\n # Do not confuse user with info about profiles if the nodes\n # count is off in the first place. Skip directly to\n # send_message. (bug 1703942)\n on-error: send_message\n\n verify_profiles:\n workflow: verify_profiles\n input:\n flavors: <% $.flavors %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n status: FAILED\n on-success: send_message\n on-error: send_message\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.check_hypervisor_stats\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n flavors: <% $.flavors %>\n statistics: <% $.statistics %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.validations.v1", "tags": [], "created_at": "2018-06-26 04:26:35", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "642e8c82-6af7-4f7b-bb31-061b92e88e25"} > >2018-06-26 09:56:35,521 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 09:56:35,523 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks 
-H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '--- >version: '2.0' >name: tripleo.derive_params_formulas.v1 >description: TripleO Workflows to derive deployment parameters from the introspected data > >workflows: > > > dpdk_derive_params: > description: > > Workflow to derive parameters for DPDK service. > input: > - plan > - role_name > - hw_data # introspection data > - user_inputs > - derived_parameters: {} > > output: > derived_parameters: <% $.derived_parameters.mergeWith($.get('dpdk_parameters', {})) %> > > tags: > - tripleo-common-managed > > tasks: > get_network_config: > action: tripleo.parameters.get_network_config > input: > container: <% $.plan %> > role_name: <% $.role_name %> > publish: > network_configs: <% task().result.get('network_config', []) %> > on-success: get_dpdk_nics_numa_info > on-error: set_status_failed_get_network_config > > get_dpdk_nics_numa_info: > action: tripleo.derive_params.get_dpdk_nics_numa_info > input: > network_configs: <% $.network_configs %> > inspect_data: <% $.hw_data %> > publish: > dpdk_nics_numa_info: <% task().result %> > on-success: > # TODO: Need to remove condtions here > # adding condition and throw error in action for empty check > - get_dpdk_nics_numa_nodes: <% $.dpdk_nics_numa_info %> > - set_status_failed_get_dpdk_nics_numa_info: <% not $.dpdk_nics_numa_info %> > on-error: set_status_failed_on_error_get_dpdk_nics_numa_info > > get_dpdk_nics_numa_nodes: > publish: > dpdk_nics_numa_nodes: <% $.dpdk_nics_numa_info.groupBy($.numa_node).select($[0]).orderBy($) %> > on-success: > - get_numa_nodes: <% $.dpdk_nics_numa_nodes %> > - set_status_failed_get_dpdk_nics_numa_nodes: <% not $.dpdk_nics_numa_nodes %> > > get_numa_nodes: > publish: > numa_nodes: <% $.hw_data.numa_topology.ram.select($.numa_node).orderBy($) %> > on-success: > - get_num_phy_cores_per_numa_for_pmd: <% $.numa_nodes %> > 
- set_status_failed_get_numa_nodes: <% not $.numa_nodes %> > > get_num_phy_cores_per_numa_for_pmd: > publish: > num_phy_cores_per_numa_node_for_pmd: <% $.user_inputs.get('num_phy_cores_per_numa_node_for_pmd', 0) %> > on-success: > - get_num_cores_per_numa_nodes: <% isInteger($.num_phy_cores_per_numa_node_for_pmd) and $.num_phy_cores_per_numa_node_for_pmd > 0 %> > - set_status_failed_get_num_phy_cores_per_numa_for_pmd_invalid: <% not isInteger($.num_phy_cores_per_numa_node_for_pmd) %> > - set_status_failed_get_num_phy_cores_per_numa_for_pmd_not_provided: <% $.num_phy_cores_per_numa_node_for_pmd = 0 %> > > # For NUMA node with DPDK nic, number of cores should be used from user input > # For NUMA node without DPDK nic, number of cores should be 1 > get_num_cores_per_numa_nodes: > publish: > num_cores_per_numa_nodes: <% let(dpdk_nics_nodes => $.dpdk_nics_numa_nodes, cores => $.num_phy_cores_per_numa_node_for_pmd) -> $.numa_nodes.select(switch($ in $dpdk_nics_nodes => $cores, not $ in $dpdk_nics_nodes => 1)) %> > on-success: get_pmd_cpus > > get_pmd_cpus: > action: tripleo.derive_params.get_dpdk_core_list > input: > inspect_data: <% $.hw_data %> > numa_nodes_cores_count: <% $.num_cores_per_numa_nodes %> > publish: > pmd_cpus: <% task().result %> > on-success: > - get_pmd_cpus_range_list: <% $.pmd_cpus %> > - set_status_failed_get_pmd_cpus: <% not $.pmd_cpus %> > on-error: set_status_failed_on_error_get_pmd_cpus > > get_pmd_cpus_range_list: > action: tripleo.derive_params.convert_number_to_range_list > input: > num_list: <% $.pmd_cpus %> > publish: > pmd_cpus: <% task().result %> > on-success: get_host_cpus > on-error: set_status_failed_get_pmd_cpus_range_list > > get_host_cpus: > workflow: tripleo.derive_params_formulas.v1.get_host_cpus > input: > role_name: <% $.role_name %> > hw_data: <% $.hw_data %> > publish: > host_cpus: <% task().result.get('host_cpus', '') %> > on-success: get_sock_mem > on-error: set_status_failed_get_host_cpus > > get_sock_mem: > action: 
tripleo.derive_params.get_dpdk_socket_memory > input: > dpdk_nics_numa_info: <% $.dpdk_nics_numa_info %> > numa_nodes: <% $.numa_nodes %> > overhead: <% $.user_inputs.get('overhead', 800) %> > packet_size_in_buffer: <% 4096*64 %> > publish: > sock_mem: <% task().result %> > on-success: > - get_dpdk_parameters: <% $.sock_mem %> > - set_status_failed_get_sock_mem: <% not $.sock_mem %> > on-error: set_status_failed_on_error_get_sock_mem > > get_dpdk_parameters: > publish: > dpdk_parameters: <% dict(concat($.role_name, 'Parameters') => dict('OvsPmdCoreList' => $.get('pmd_cpus', ''), 'OvsDpdkCoreList' => $.get('host_cpus', ''), 'OvsDpdkSocketMemory' => $.get('sock_mem', ''))) %> > > set_status_failed_get_network_config: > publish: > status: FAILED > message: <% task(get_network_config).result %> > on-success: fail > > set_status_failed_get_dpdk_nics_numa_info: > publish: > status: FAILED > message: "Unable to determine DPDK NIC's NUMA information" > on-success: fail > > set_status_failed_on_error_get_dpdk_nics_numa_info: > publish: > status: FAILED > message: <% task(get_dpdk_nics_numa_info).result %> > on-success: fail > > set_status_failed_get_dpdk_nics_numa_nodes: > publish: > status: FAILED > message: "Unable to determine DPDK NIC's numa nodes" > on-success: fail > > set_status_failed_get_numa_nodes: > publish: > status: FAILED > message: 'Unable to determine available NUMA nodes' > on-success: fail > > set_status_failed_get_num_phy_cores_per_numa_for_pmd_invalid: > publish: > status: FAILED > message: <% "num_phy_cores_per_numa_node_for_pmd user input '{0}' is invalid".format($.num_phy_cores_per_numa_node_for_pmd) %> > on-success: fail > > set_status_failed_get_num_phy_cores_per_numa_for_pmd_not_provided: > publish: > status: FAILED > message: 'num_phy_cores_per_numa_node_for_pmd user input is not provided' > on-success: fail > > set_status_failed_get_pmd_cpus: > publish: > status: FAILED > message: 'Unable to determine OvsPmdCoreList parameter' > on-success: fail 
> > set_status_failed_on_error_get_pmd_cpus: > publish: > status: FAILED > message: <% task(get_pmd_cpus).result %> > on-success: fail > > set_status_failed_get_pmd_cpus_range_list: > publish: > status: FAILED > message: <% task(get_pmd_cpus_range_list).result %> > on-success: fail > > set_status_failed_get_host_cpus: > publish: > status: FAILED > message: <% task(get_host_cpus).result.get('message', '') %> > on-success: fail > > set_status_failed_get_sock_mem: > publish: > status: FAILED > message: 'Unable to determine OvsDpdkSocketMemory parameter' > on-success: fail > > set_status_failed_on_error_get_sock_mem: > publish: > status: FAILED > message: <% task(get_sock_mem).result %> > on-success: fail > > > sriov_derive_params: > description: > > This workflow derives parameters for the SRIOV feature. > > input: > - role_name > - hw_data # introspection data > - derived_parameters: {} > > output: > derived_parameters: <% $.derived_parameters.mergeWith($.get('sriov_parameters', {})) %> > > tags: > - tripleo-common-managed > > tasks: > get_host_cpus: > workflow: tripleo.derive_params_formulas.v1.get_host_cpus > input: > role_name: <% $.role_name %> > hw_data: <% $.hw_data %> > publish: > host_cpus: <% task().result.get('host_cpus', '') %> > on-success: get_sriov_parameters > on-error: set_status_failed_get_host_cpus > > get_sriov_parameters: > publish: > # SriovHostCpusList parameter is added temporarily and it's removed later from derived parameters result. > sriov_parameters: <% dict(concat($.role_name, 'Parameters') => dict('SriovHostCpusList' => $.get('host_cpus', ''))) %> > > set_status_failed_get_host_cpus: > publish: > status: FAILED > message: <% task(get_host_cpus).result.get('message', '') %> > on-success: fail > > > get_host_cpus: > description: > > Fetching the host CPU list from the introspection data, and then converting the raw list into a range list. 
> > input: > - hw_data # introspection data > > output: > host_cpus: <% $.get('host_cpus', '') %> > > tags: > - tripleo-common-managed > > tasks: > get_host_cpus: > action: tripleo.derive_params.get_host_cpus_list inspect_data=<% $.hw_data %> > publish: > host_cpus: <% task().result %> > on-success: > - get_host_cpus_range_list: <% $.host_cpus %> > - set_status_failed_get_host_cpus: <% not $.host_cpus %> > on-error: set_status_failed_on_error_get_host_cpus > > get_host_cpus_range_list: > action: tripleo.derive_params.convert_number_to_range_list > input: > num_list: <% $.host_cpus %> > publish: > host_cpus: <% task().result %> > on-error: set_status_failed_get_host_cpus_range_list > > set_status_failed_get_host_cpus: > publish: > status: FAILED > message: 'Unable to determine host cpus' > on-success: fail > > set_status_failed_on_error_get_host_cpus: > publish: > status: FAILED > message: <% task(get_host_cpus).result %> > on-success: fail > > set_status_failed_get_host_cpus_range_list: > publish: > status: FAILED > message: <% task(get_host_cpus_range_list).result %> > on-success: fail > > > host_derive_params: > description: > > This workflow derives parameters for the Host process, and is mainly associated with CPU pinning and huge memory pages. > This workflow can be dependent on any feature or also can be invoked individually as well. > > input: > - role_name > - hw_data # introspection data > - user_inputs > - derived_parameters: {} > > output: > derived_parameters: <% $.derived_parameters.mergeWith($.get('host_parameters', {})) %> > > tags: > - tripleo-common-managed > > tasks: > get_cpus: > publish: > cpus: <% $.hw_data.numa_topology.cpus %> > on-success: > - get_role_derive_params: <% $.cpus %> > - set_status_failed_get_cpus: <% not $.cpus %> > > get_role_derive_params: > publish: > role_derive_params: <% $.derived_parameters.get(concat($.role_name, 'Parameters'), {}) %> > # removing the role parameters (eg. 
ComputeParameters) from the derived_parameters dictionary since they are already copied into role_derive_params. > derived_parameters: <% $.derived_parameters.delete(concat($.role_name, 'Parameters')) %> > on-success: get_host_cpus > > get_host_cpus: > publish: > host_cpus: <% $.role_derive_params.get('OvsDpdkCoreList', '') or $.role_derive_params.get('SriovHostCpusList', '') %> > # SriovHostCpusList parameter is added temporarily for host_cpus and not needed in derived_parameters result. > # SriovHostCpusList parameter is deleted from the derived_parameters list and the updated role parameters are added > # back into derived_parameters. > derived_parameters: <% $.derived_parameters + dict(concat($.role_name, 'Parameters') => $.role_derive_params.delete('SriovHostCpusList')) %> > on-success: get_host_dpdk_combined_cpus > > get_host_dpdk_combined_cpus: > publish: > host_dpdk_combined_cpus: <% let(pmd_cpus => $.role_derive_params.get('OvsPmdCoreList', '')) -> switch($pmd_cpus => concat($pmd_cpus, ',', $.host_cpus), not $pmd_cpus => $.host_cpus) %> > reserved_cpus: [] > on-success: > - get_host_dpdk_combined_cpus_num_list: <% $.host_dpdk_combined_cpus %> > - set_status_failed_get_host_dpdk_combined_cpus: <% not $.host_dpdk_combined_cpus %> > > get_host_dpdk_combined_cpus_num_list: > action: tripleo.derive_params.convert_range_to_number_list > input: > range_list: <% $.host_dpdk_combined_cpus %> > publish: > host_dpdk_combined_cpus: <% task().result %> > reserved_cpus: <% task().result.split(',') %> > on-success: get_nova_cpus > on-error: set_status_failed_get_host_dpdk_combined_cpus_num_list > > get_nova_cpus: > publish: > nova_cpus: <% let(reserved_cpus => $.reserved_cpus) -> $.cpus.select($.thread_siblings).flatten().where(not (str($) in $reserved_cpus)).join(',') %> > on-success: > - get_isol_cpus: <% $.nova_cpus %> > - set_status_failed_get_nova_cpus: <% not $.nova_cpus %> > > # concatenates OvsPmdCoreList range format and NovaVcpuPinSet in range format. 
It may not be in perfect range format. > # example: concatenates '12-15,19' and '16-18' into '12-15,19,16-18' > get_isol_cpus: > publish: > isol_cpus: <% let(pmd_cpus => $.role_derive_params.get('OvsPmdCoreList','')) -> switch($pmd_cpus => concat($pmd_cpus, ',', $.nova_cpus), not $pmd_cpus => $.nova_cpus) %> > on-success: get_isol_cpus_num_list > > # Gets the isol_cpus as a number list > # example: '12-15,19,16-18' into '12,13,14,15,16,17,18,19' > get_isol_cpus_num_list: > action: tripleo.derive_params.convert_range_to_number_list > input: > range_list: <% $.isol_cpus %> > publish: > isol_cpus: <% task().result %> > on-success: get_nova_cpus_range_list > on-error: set_status_failed_get_isol_cpus_num_list > > get_nova_cpus_range_list: > action: tripleo.derive_params.convert_number_to_range_list > input: > num_list: <% $.nova_cpus %> > publish: > nova_cpus: <% task().result %> > on-success: get_isol_cpus_range_list > on-error: set_status_failed_get_nova_cpus_range_list > > # converts number format isol_cpus into range format > # example: '12,13,14,15,16,17,18,19' into '12-19' > get_isol_cpus_range_list: > action: tripleo.derive_params.convert_number_to_range_list > input: > num_list: <% $.isol_cpus %> > publish: > isol_cpus: <% task().result %> > on-success: get_host_mem > on-error: set_status_failed_get_isol_cpus_range_list > > get_host_mem: > publish: > host_mem: <% $.user_inputs.get('host_mem_default', 4096) %> > on-success: check_default_hugepage_supported > > check_default_hugepage_supported: > publish: > default_hugepage_supported: <% $.hw_data.get('inventory', {}).get('cpu', {}).get('flags', []).contains('pdpe1gb') %> > on-success: > - get_total_memory: <% $.default_hugepage_supported %> > - set_status_failed_check_default_hugepage_supported: <% not $.default_hugepage_supported %> > > get_total_memory: > publish: > total_memory: <% $.hw_data.get('inventory', {}).get('memory', {}).get('physical_mb', 0) %> > on-success: > - get_hugepage_allocation_percentage: 
<% $.total_memory %> > - set_status_failed_get_total_memory: <% not $.total_memory %> > > get_hugepage_allocation_percentage: > publish: > huge_page_allocation_percentage: <% $.user_inputs.get('huge_page_allocation_percentage', 0) %> > on-success: > - get_hugepages: <% isInteger($.huge_page_allocation_percentage) and $.huge_page_allocation_percentage > 0 %> > - set_status_failed_get_hugepage_allocation_percentage_invalid: <% not isInteger($.huge_page_allocation_percentage) %> > - set_status_failed_get_hugepage_allocation_percentage_not_provided: <% $.huge_page_allocation_percentage = 0 %> > > get_hugepages: > publish: > hugepages: <% let(huge_page_perc => float($.huge_page_allocation_percentage)/100)-> int((($.total_memory/1024)-4) * $huge_page_perc) %> > on-success: > - get_cpu_model: <% $.hugepages %> > - set_status_failed_get_hugepages: <% not $.hugepages %> > > get_cpu_model: > publish: > intel_cpu_model: <% $.hw_data.get('inventory', {}).get('cpu', {}).get('model_name', '').startsWith('Intel') %> > on-success: get_iommu_info > > get_iommu_info: > publish: > iommu_info: <% switch($.intel_cpu_model => 'intel_iommu=on iommu=pt', not $.intel_cpu_model => '') %> > on-success: get_kernel_args > > get_kernel_args: > publish: > kernel_args: <% concat('default_hugepagesz=1GB hugepagesz=1G ', 'hugepages=', str($.hugepages), ' ', $.iommu_info, ' isolcpus=', $.isol_cpus) %> > on-success: get_host_parameters > > get_host_parameters: > publish: > host_parameters: <% dict(concat($.role_name, 'Parameters') => dict('NovaVcpuPinSet' => $.get('nova_cpus', ''), 'NovaReservedHostMemory' => $.get('host_mem', ''), 'KernelArgs' => $.get('kernel_args', ''), 'IsolCpusList' => $.get('isol_cpus', ''))) %> > > set_status_failed_get_cpus: > publish: > status: FAILED > message: "Unable to determine CPU's on NUMA nodes" > on-success: fail > > set_status_failed_get_host_dpdk_combined_cpus: > publish: > status: FAILED > message: 'Unable to combine host and dpdk cpus list' > on-success: fail > 
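The get_hugepages and get_kernel_args tasks above boil down to two small computations: reserve a percentage of (total RAM minus a 4 GB host allowance) as 1 GB huge pages, and assemble the kernel boot arguments, adding IOMMU flags only on Intel CPUs. A minimal Python sketch of the same arithmetic (function names are illustrative, not part of tripleo-common):

```python
MB_PER_GB = 1024

def derive_hugepages(total_memory_mb, allocation_percentage):
    # Mirrors the get_hugepages YAQL expression: a percentage of
    # (total RAM in GB minus a 4 GB host allowance), as 1 GB pages.
    huge_page_perc = float(allocation_percentage) / 100
    return int(((total_memory_mb / float(MB_PER_GB)) - 4) * huge_page_perc)

def derive_kernel_args(hugepages, intel_cpu_model, isol_cpus):
    # Mirrors get_iommu_info + get_kernel_args: only Intel CPUs get the
    # IOMMU passthrough flags; isol_cpus is the derived range list.
    iommu_info = 'intel_iommu=on iommu=pt' if intel_cpu_model else ''
    return ('default_hugepagesz=1GB hugepagesz=1G hugepages=%s %s isolcpus=%s'
            % (hugepages, iommu_info, isol_cpus))
```

For example, a 192 GB node (196608 MB) with huge_page_allocation_percentage=50 yields 94 huge pages, since int((192 - 4) * 0.5) = 94.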
> set_status_failed_get_host_dpdk_combined_cpus_num_list: > publish: > status: FAILED > message: <% task(get_host_dpdk_combined_cpus_num_list).result %> > on-success: fail > > set_status_failed_get_nova_cpus: > publish: > status: FAILED > message: 'Unable to determine nova vcpu pin set' > on-success: fail > > set_status_failed_get_nova_cpus_range_list: > publish: > status: FAILED > message: <% task(get_nova_cpus_range_list).result %> > on-success: fail > > set_status_failed_get_isol_cpus_num_list: > publish: > status: FAILED > message: <% task(get_isol_cpus_num_list).result %> > on-success: fail > > set_status_failed_get_isol_cpus_range_list: > publish: > status: FAILED > message: <% task(get_isol_cpus_range_list).result %> > on-success: fail > > set_status_failed_check_default_hugepage_supported: > publish: > status: FAILED > message: 'default huge page size 1GB is not supported' > on-success: fail > > set_status_failed_get_total_memory: > publish: > status: FAILED > message: 'Unable to determine total memory' > on-success: fail > > set_status_failed_get_hugepage_allocation_percentage_invalid: > publish: > status: FAILED > message: <% "huge_page_allocation_percentage user input '{0}' is invalid".format($.huge_page_allocation_percentage) %> > on-success: fail > > set_status_failed_get_hugepage_allocation_percentage_not_provided: > publish: > status: FAILED > message: 'huge_page_allocation_percentage user input is not provided' > on-success: fail > > set_status_failed_get_hugepages: > publish: > status: FAILED > message: 'Unable to determine huge pages' > on-success: fail > > > hci_derive_params: > description: Derive the deployment parameters for HCI > input: > - role_name > - environment_parameters > - heat_resource_tree > - introspection_data > - user_inputs > - derived_parameters: {} > > output: > derived_parameters: <% $.derived_parameters.mergeWith($.get('hci_parameters', {})) %> > > tags: > - tripleo-common-managed > > tasks: > get_hci_inputs: > publish: > 
hci_profile: <% $.user_inputs.get('hci_profile', '') %> > hci_profile_config: <% $.user_inputs.get('hci_profile_config', {}) %> > MB_PER_GB: 1024 > on-success: > - get_average_guest_memory_size_in_mb: <% $.hci_profile and $.hci_profile_config.get($.hci_profile, {}) %> > - set_failed_invalid_hci_profile: <% $.hci_profile and not $.hci_profile_config.get($.hci_profile, {}) %> > # When no hci_profile is specified, the workflow terminates without deriving any HCI parameters. > > get_average_guest_memory_size_in_mb: > publish: > average_guest_memory_size_in_mb: <% $.hci_profile_config.get($.hci_profile, {}).get('average_guest_memory_size_in_mb', 0) %> > on-success: > - get_average_guest_cpu_utilization_percentage: <% isInteger($.average_guest_memory_size_in_mb) %> > - set_failed_invalid_average_guest_memory_size_in_mb: <% not isInteger($.average_guest_memory_size_in_mb) %> > > get_average_guest_cpu_utilization_percentage: > publish: > average_guest_cpu_utilization_percentage: <% $.hci_profile_config.get($.hci_profile, {}).get('average_guest_cpu_utilization_percentage', 0) %> > on-success: > - get_gb_overhead_per_guest: <% isInteger($.average_guest_cpu_utilization_percentage) %> > - set_failed_invalid_average_guest_cpu_utilization_percentage: <% not isInteger($.average_guest_cpu_utilization_percentage) %> > > get_gb_overhead_per_guest: > publish: > gb_overhead_per_guest: <% $.user_inputs.get('gb_overhead_per_guest', 0.5) %> > on-success: > - get_gb_per_osd: <% isNumber($.gb_overhead_per_guest) %> > - set_failed_invalid_gb_overhead_per_guest: <% not isNumber($.gb_overhead_per_guest) %> > > get_gb_per_osd: > publish: > gb_per_osd: <% $.user_inputs.get('gb_per_osd', 3) %> > on-success: > - get_cores_per_osd: <% isNumber($.gb_per_osd) %> > - set_failed_invalid_gb_per_osd: <% not isNumber($.gb_per_osd) %> > > get_cores_per_osd: > publish: > cores_per_osd: <% $.user_inputs.get('cores_per_osd', 1.0) %> > on-success: > - get_extra_configs: <% isNumber($.cores_per_osd) %> > - 
set_failed_invalid_cores_per_osd: <% not isNumber($.cores_per_osd) %> > > get_extra_configs: > publish: > extra_config: <% $.environment_parameters.get('ExtraConfig', {}) %> > role_extra_config: <% $.environment_parameters.get(concat($.role_name, 'ExtraConfig'), {}) %> > role_env_params: <% $.environment_parameters.get(concat($.role_name, 'Parameters'), {}) %> > role_derive_params: <% $.derived_parameters.get(concat($.role_name, 'Parameters'), {}) %> > on-success: get_num_osds > > get_num_osds: > publish: > num_osds: <% $.heat_resource_tree.parameters.get('CephAnsibleDisksConfig', {}).get('default', {}).get('devices', []).count() %> > on-success: > - get_memory_mb: <% $.num_osds %> > # If there's no CephAnsibleDisksConfig then look for OSD configuration in hiera data > - get_num_osds_from_hiera: <% not $.num_osds %> > > get_num_osds_from_hiera: > publish: > num_osds: <% $.role_extra_config.get('ceph::profile::params::osds', $.extra_config.get('ceph::profile::params::osds', {})).keys().count() %> > on-success: > - get_memory_mb: <% $.num_osds %> > - set_failed_no_osds: <% not $.num_osds %> > > get_memory_mb: > publish: > memory_mb: <% $.introspection_data.get('memory_mb', 0) %> > on-success: > - get_nova_vcpu_pin_set: <% $.memory_mb %> > - set_failed_get_memory_mb: <% not $.memory_mb %> > > # Determine the number of CPU cores available to Nova and Ceph. If > # NovaVcpuPinSet is defined then use the number of vCPUs in the set, > # otherwise use all of the cores identified in the introspection data. 
> > get_nova_vcpu_pin_set: > publish: > # NovaVcpuPinSet can be defined in multiple locations, and it's > # important to select the value in order of precedence: > # > # 1) User specified value for this role > # 2) User specified default value for all roles > # 3) Value derived by another derived parameters workflow > nova_vcpu_pin_set: <% $.role_env_params.get('NovaVcpuPinSet', $.environment_parameters.get('NovaVcpuPinSet', $.role_derive_params.get('NovaVcpuPinSet', ''))) %> > on-success: > - get_nova_vcpu_count: <% $.nova_vcpu_pin_set %> > - get_num_cores: <% not $.nova_vcpu_pin_set %> > > get_nova_vcpu_count: > action: tripleo.derive_params.convert_range_to_number_list > input: > range_list: <% $.nova_vcpu_pin_set %> > publish: > num_cores: <% task().result.split(',').count() %> > on-success: calculate_nova_parameters > on-error: set_failed_get_nova_vcpu_count > > get_num_cores: > publish: > num_cores: <% $.introspection_data.get('cpus', 0) %> > on-success: > - calculate_nova_parameters: <% $.num_cores %> > - set_failed_get_num_cores: <% not $.num_cores %> > > # HCI calculations are broken into multiple steps. This is necessary > # because variables published by a Mistral task are not available > # for use by that same task. Variables computed and published in a task > # are only available in subsequent tasks. > # > # The HCI calculations compute two Nova parameters: > # - reserved_host_memory > # - cpu_allocation_ratio > # > # The reserved_host_memory calculation computes the amount of memory > # that needs to be reserved for Ceph and the total amount of "guest > # overhead" memory that is based on the anticipated number of guests. 
> # Pseudo-code for the calculation (disregarding MB and GB units) is > # as follows: > # > # ceph_memory = mem_per_osd * num_osds > # nova_memory = total_memory - ceph_memory > # num_guests = nova_memory / > # (average_guest_memory_size + overhead_per_guest) > # reserved_memory = ceph_memory + (num_guests * overhead_per_guest) > # > # The cpu_allocation_ratio calculation is similar in that it takes into > # account the number of cores that must be reserved for Ceph. > # > # ceph_cores = cores_per_osd * num_osds > # guest_cores = num_cores - ceph_cores > # guest_vcpus = guest_cores / average_guest_utilization > # cpu_allocation_ratio = guest_vcpus / num_cores > > calculate_nova_parameters: > publish: > avg_guest_util: <% $.average_guest_cpu_utilization_percentage / 100.0 %> > avg_guest_size_gb: <% $.average_guest_memory_size_in_mb / float($.MB_PER_GB) %> > memory_gb: <% $.memory_mb / float($.MB_PER_GB) %> > ceph_mem_gb: <% $.gb_per_osd * $.num_osds %> > nonceph_cores: <% $.num_cores - int($.cores_per_osd * $.num_osds) %> > on-success: calc_step_2 > > calc_step_2: > publish: > num_guests: <% int(($.memory_gb - $.ceph_mem_gb) / ($.avg_guest_size_gb + $.gb_overhead_per_guest)) %> > guest_vcpus: <% $.nonceph_cores / $.avg_guest_util %> > on-success: calc_step_3 > > calc_step_3: > publish: > reserved_host_memory: <% $.MB_PER_GB * int($.ceph_mem_gb + ($.num_guests * $.gb_overhead_per_guest)) %> > cpu_allocation_ratio: <% $.guest_vcpus / $.num_cores %> > on-success: validate_results > > validate_results: > publish: > # Verify whether HCI is viable: > # - At least 80% of the memory is reserved for Ceph and guest overhead > # - At least half of the CPU cores must be available to Nova > mem_ok: <% $.reserved_host_memory <= ($.memory_mb * 0.8) %> > cpu_ok: <% $.cpu_allocation_ratio >= 0.5 %> > on-success: > - set_failed_insufficient_mem: <% not $.mem_ok %> > - set_failed_insufficient_cpu: <% not $.cpu_ok %> > - publish_hci_parameters: <% $.mem_ok and $.cpu_ok %> > > 
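The three-step HCI calculation logged above (calculate_nova_parameters, calc_step_2, calc_step_3) can be checked numerically. A minimal Python sketch of the same arithmetic; the hardware values (256 GB of RAM, 56 cores, 12 OSDs) and the default profile values are illustrative assumptions, not numbers taken from this log:

```python
MB_PER_GB = 1024

def derive_hci_params(memory_mb, num_cores, num_osds,
                      gb_per_osd=3, cores_per_osd=1.0,
                      avg_guest_size_mb=2048,
                      avg_guest_util_pct=50,
                      gb_overhead_per_guest=0.5):
    """Mirror the workbook's three calculation steps (sketch only)."""
    # calculate_nova_parameters: convert units, reserve Ceph resources
    memory_gb = memory_mb / float(MB_PER_GB)
    ceph_mem_gb = gb_per_osd * num_osds
    nonceph_cores = num_cores - int(cores_per_osd * num_osds)
    avg_guest_size_gb = avg_guest_size_mb / float(MB_PER_GB)
    avg_guest_util = avg_guest_util_pct / 100.0
    # calc_step_2: anticipated guest count and guest vCPUs
    num_guests = int((memory_gb - ceph_mem_gb) /
                     (avg_guest_size_gb + gb_overhead_per_guest))
    guest_vcpus = nonceph_cores / avg_guest_util
    # calc_step_3: the two derived Nova parameters
    reserved_host_memory = MB_PER_GB * int(ceph_mem_gb +
                                           num_guests * gb_overhead_per_guest)
    cpu_allocation_ratio = guest_vcpus / num_cores
    return reserved_host_memory, cpu_allocation_ratio

# Illustrative node: 256 GB of RAM, 56 cores, 12 OSDs
mem, ratio = derive_hci_params(memory_mb=262144, num_cores=56, num_osds=12)
```

With these inputs the sketch yields NovaReservedHostMemory = 81920 MB and cpu_allocation_ratio ≈ 1.57, which also passes the validate_results checks (reserved memory at most 80% of total, ratio at least 0.5).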
publish_hci_parameters: > publish: > # TODO(abishop): Update this when the cpu_allocation_ratio can be set > # via a THT parameter (no such parameter currently exists). Until a > # THT parameter exists, use hiera data to set the cpu_allocation_ratio. > hci_parameters: <% dict(concat($.role_name, 'Parameters') => dict('NovaReservedHostMemory' => $.reserved_host_memory)) + dict(concat($.role_name, 'ExtraConfig') => dict('nova::cpu_allocation_ratio' => $.cpu_allocation_ratio)) %> > > set_failed_invalid_hci_profile: > publish: > message: "'<% $.hci_profile %>' is not a valid HCI profile." > on-success: fail > > set_failed_invalid_average_guest_memory_size_in_mb: > publish: > message: "'<% $.average_guest_memory_size_in_mb %>' is not a valid average_guest_memory_size_in_mb value." > on-success: fail > > set_failed_invalid_gb_overhead_per_guest: > publish: > message: "'<% $.gb_overhead_per_guest %>' is not a valid gb_overhead_per_guest value." > on-success: fail > > set_failed_invalid_gb_per_osd: > publish: > message: "'<% $.gb_per_osd %>' is not a valid gb_per_osd value." > on-success: fail > > set_failed_invalid_cores_per_osd: > publish: > message: "'<% $.cores_per_osd %>' is not a valid cores_per_osd value." > on-success: fail > > set_failed_invalid_average_guest_cpu_utilization_percentage: > publish: > message: "'<% $.average_guest_cpu_utilization_percentage %>' is not a valid average_guest_cpu_utilization_percentage value." > on-success: fail > > set_failed_no_osds: > publish: > message: "No Ceph OSDs found in the overcloud definition ('ceph::profile::params::osds')." > on-success: fail > > set_failed_get_memory_mb: > publish: > message: "Unable to determine the amount of physical memory (no 'memory_mb' found in introspection_data)." 
> on-success: fail > > set_failed_get_nova_vcpu_count: > publish: > message: <% task(get_nova_vcpu_count).result %> > on-success: fail > > set_failed_get_num_cores: > publish: > message: "Unable to determine the number of CPU cores (no 'cpus' found in introspection_data)." > on-success: fail > > set_failed_insufficient_mem: > publish: > message: "<% $.memory_mb %> MB is not enough memory to run hyperconverged." > on-success: fail > > set_failed_insufficient_cpu: > publish: > message: "<% $.num_cores %> CPU cores are not enough to run hyperconverged." > on-success: fail >' >2018-06-26 09:56:37,188 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 32010 >2018-06-26 09:56:37,229 DEBUG: RESP: [201] Content-Length: 32010 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:37 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.derive_params_formulas.v1\ndescription: TripleO Workflows to derive deployment parameters from the introspected data\n\nworkflows:\n\n\n dpdk_derive_params:\n description: >\n Workflow to derive parameters for DPDK service.\n input:\n - plan\n - role_name\n - hw_data # introspection data\n - user_inputs\n - derived_parameters: {}\n\n output:\n derived_parameters: <% $.derived_parameters.mergeWith($.get('dpdk_parameters', {})) %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_network_config:\n action: tripleo.parameters.get_network_config\n input:\n container: <% $.plan %>\n role_name: <% $.role_name %>\n publish:\n network_configs: <% task().result.get('network_config', []) %>\n on-success: get_dpdk_nics_numa_info\n on-error: set_status_failed_get_network_config\n\n get_dpdk_nics_numa_info:\n action: tripleo.derive_params.get_dpdk_nics_numa_info\n input:\n network_configs: <% $.network_configs %>\n inspect_data: <% $.hw_data %>\n publish:\n dpdk_nics_numa_info: <% task().result %>\n on-success:\n # TODO: Need to remove condtions here\n # adding condition and throw error in action for 
empty check\n - get_dpdk_nics_numa_nodes: <% $.dpdk_nics_numa_info %>\n - set_status_failed_get_dpdk_nics_numa_info: <% not $.dpdk_nics_numa_info %>\n on-error: set_status_failed_on_error_get_dpdk_nics_numa_info\n\n get_dpdk_nics_numa_nodes:\n publish:\n dpdk_nics_numa_nodes: <% $.dpdk_nics_numa_info.groupBy($.numa_node).select($[0]).orderBy($) %>\n on-success:\n - get_numa_nodes: <% $.dpdk_nics_numa_nodes %>\n - set_status_failed_get_dpdk_nics_numa_nodes: <% not $.dpdk_nics_numa_nodes %>\n\n get_numa_nodes:\n publish:\n numa_nodes: <% $.hw_data.numa_topology.ram.select($.numa_node).orderBy($) %>\n on-success:\n - get_num_phy_cores_per_numa_for_pmd: <% $.numa_nodes %>\n - set_status_failed_get_numa_nodes: <% not $.numa_nodes %>\n\n get_num_phy_cores_per_numa_for_pmd:\n publish:\n num_phy_cores_per_numa_node_for_pmd: <% $.user_inputs.get('num_phy_cores_per_numa_node_for_pmd', 0) %>\n on-success:\n - get_num_cores_per_numa_nodes: <% isInteger($.num_phy_cores_per_numa_node_for_pmd) and $.num_phy_cores_per_numa_node_for_pmd > 0 %>\n - set_status_failed_get_num_phy_cores_per_numa_for_pmd_invalid: <% not isInteger($.num_phy_cores_per_numa_node_for_pmd) %>\n - set_status_failed_get_num_phy_cores_per_numa_for_pmd_not_provided: <% $.num_phy_cores_per_numa_node_for_pmd = 0 %>\n\n # For NUMA node with DPDK nic, number of cores should be used from user input\n # For NUMA node without DPDK nic, number of cores should be 1\n get_num_cores_per_numa_nodes:\n publish:\n num_cores_per_numa_nodes: <% let(dpdk_nics_nodes => $.dpdk_nics_numa_nodes, cores => $.num_phy_cores_per_numa_node_for_pmd) -> $.numa_nodes.select(switch($ in $dpdk_nics_nodes => $cores, not $ in $dpdk_nics_nodes => 1)) %>\n on-success: get_pmd_cpus\n\n get_pmd_cpus:\n action: tripleo.derive_params.get_dpdk_core_list\n input:\n inspect_data: <% $.hw_data %>\n numa_nodes_cores_count: <% $.num_cores_per_numa_nodes %>\n publish:\n pmd_cpus: <% task().result %>\n on-success:\n - get_pmd_cpus_range_list: <% $.pmd_cpus 
%>\n - set_status_failed_get_pmd_cpus: <% not $.pmd_cpus %>\n on-error: set_status_failed_on_error_get_pmd_cpus\n\n get_pmd_cpus_range_list:\n action: tripleo.derive_params.convert_number_to_range_list\n input:\n num_list: <% $.pmd_cpus %>\n publish:\n pmd_cpus: <% task().result %>\n on-success: get_host_cpus\n on-error: set_status_failed_get_pmd_cpus_range_list\n\n get_host_cpus:\n workflow: tripleo.derive_params_formulas.v1.get_host_cpus\n input:\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n publish:\n host_cpus: <% task().result.get('host_cpus', '') %>\n on-success: get_sock_mem\n on-error: set_status_failed_get_host_cpus\n\n get_sock_mem:\n action: tripleo.derive_params.get_dpdk_socket_memory\n input:\n dpdk_nics_numa_info: <% $.dpdk_nics_numa_info %>\n numa_nodes: <% $.numa_nodes %>\n overhead: <% $.user_inputs.get('overhead', 800) %>\n packet_size_in_buffer: <% 4096*64 %>\n publish:\n sock_mem: <% task().result %>\n on-success:\n - get_dpdk_parameters: <% $.sock_mem %>\n - set_status_failed_get_sock_mem: <% not $.sock_mem %>\n on-error: set_status_failed_on_error_get_sock_mem\n\n get_dpdk_parameters:\n publish:\n dpdk_parameters: <% dict(concat($.role_name, 'Parameters') => dict('OvsPmdCoreList' => $.get('pmd_cpus', ''), 'OvsDpdkCoreList' => $.get('host_cpus', ''), 'OvsDpdkSocketMemory' => $.get('sock_mem', ''))) %>\n\n set_status_failed_get_network_config:\n publish:\n status: FAILED\n message: <% task(get_network_config).result %>\n on-success: fail\n\n set_status_failed_get_dpdk_nics_numa_info:\n publish:\n status: FAILED\n message: \"Unable to determine DPDK NIC's NUMA information\"\n on-success: fail\n\n set_status_failed_on_error_get_dpdk_nics_numa_info:\n publish:\n status: FAILED\n message: <% task(get_dpdk_nics_numa_info).result %>\n on-success: fail\n\n set_status_failed_get_dpdk_nics_numa_nodes:\n publish:\n status: FAILED\n message: \"Unable to determine DPDK NIC's numa nodes\"\n on-success: fail\n\n 
set_status_failed_get_numa_nodes:\n publish:\n status: FAILED\n message: 'Unable to determine available NUMA nodes'\n on-success: fail\n\n set_status_failed_get_num_phy_cores_per_numa_for_pmd_invalid:\n publish:\n status: FAILED\n message: <% \"num_phy_cores_per_numa_node_for_pmd user input '{0}' is invalid\".format($.num_phy_cores_per_numa_node_for_pmd) %>\n on-success: fail\n\n set_status_failed_get_num_phy_cores_per_numa_for_pmd_not_provided:\n publish:\n status: FAILED\n message: 'num_phy_cores_per_numa_node_for_pmd user input is not provided'\n on-success: fail\n\n set_status_failed_get_pmd_cpus:\n publish:\n status: FAILED\n message: 'Unable to determine OvsPmdCoreList parameter'\n on-success: fail\n\n set_status_failed_on_error_get_pmd_cpus:\n publish:\n status: FAILED\n message: <% task(get_pmd_cpus).result %>\n on-success: fail\n\n set_status_failed_get_pmd_cpus_range_list:\n publish:\n status: FAILED\n message: <% task(get_pmd_cpus_range_list).result %>\n on-success: fail\n\n set_status_failed_get_host_cpus:\n publish:\n status: FAILED\n message: <% task(get_host_cpus).result.get('message', '') %>\n on-success: fail\n\n set_status_failed_get_sock_mem:\n publish:\n status: FAILED\n message: 'Unable to determine OvsDpdkSocketMemory parameter'\n on-success: fail\n\n set_status_failed_on_error_get_sock_mem:\n publish:\n status: FAILED\n message: <% task(get_sock_mem).result %>\n on-success: fail\n\n\n sriov_derive_params:\n description: >\n This workflow derives parameters for the SRIOV feature.\n\n input:\n - role_name\n - hw_data # introspection data\n - derived_parameters: {}\n\n output:\n derived_parameters: <% $.derived_parameters.mergeWith($.get('sriov_parameters', {})) %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_host_cpus:\n workflow: tripleo.derive_params_formulas.v1.get_host_cpus\n input:\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n publish:\n host_cpus: <% task().result.get('host_cpus', '') %>\n on-success: 
get_sriov_parameters\n on-error: set_status_failed_get_host_cpus\n\n get_sriov_parameters:\n publish:\n # SriovHostCpusList parameter is added temporarily and it's removed later from derived parameters result.\n sriov_parameters: <% dict(concat($.role_name, 'Parameters') => dict('SriovHostCpusList' => $.get('host_cpus', ''))) %>\n\n set_status_failed_get_host_cpus:\n publish:\n status: FAILED\n message: <% task(get_host_cpus).result.get('message', '') %>\n on-success: fail\n\n\n get_host_cpus:\n description: >\n Fetching the host CPU list from the introspection data, and then converting the raw list into a range list.\n\n input:\n - hw_data # introspection data\n\n output:\n host_cpus: <% $.get('host_cpus', '') %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_host_cpus:\n action: tripleo.derive_params.get_host_cpus_list inspect_data=<% $.hw_data %>\n publish:\n host_cpus: <% task().result %>\n on-success:\n - get_host_cpus_range_list: <% $.host_cpus %>\n - set_status_failed_get_host_cpus: <% not $.host_cpus %>\n on-error: set_status_failed_on_error_get_host_cpus\n\n get_host_cpus_range_list:\n action: tripleo.derive_params.convert_number_to_range_list\n input:\n num_list: <% $.host_cpus %>\n publish:\n host_cpus: <% task().result %>\n on-error: set_status_failed_get_host_cpus_range_list\n\n set_status_failed_get_host_cpus:\n publish:\n status: FAILED\n message: 'Unable to determine host cpus'\n on-success: fail\n\n set_status_failed_on_error_get_host_cpus:\n publish:\n status: FAILED\n message: <% task(get_host_cpus).result %>\n on-success: fail\n\n set_status_failed_get_host_cpus_range_list:\n publish:\n status: FAILED\n message: <% task(get_host_cpus_range_list).result %>\n on-success: fail\n\n\n host_derive_params:\n description: >\n This workflow derives parameters for the Host process, and is mainly associated with CPU pinning and huge memory pages.\n This workflow can be dependent on any feature or also can be invoked individually as well.\n\n 
input:\n - role_name\n - hw_data # introspection data\n - user_inputs\n - derived_parameters: {}\n\n output:\n derived_parameters: <% $.derived_parameters.mergeWith($.get('host_parameters', {})) %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_cpus:\n publish:\n cpus: <% $.hw_data.numa_topology.cpus %>\n on-success:\n - get_role_derive_params: <% $.cpus %>\n - set_status_failed_get_cpus: <% not $.cpus %>\n\n get_role_derive_params:\n publish:\n role_derive_params: <% $.derived_parameters.get(concat($.role_name, 'Parameters'), {}) %>\n # removing the role parameters (eg. ComputeParameters) in derived_parameters dictionary since already copied in role_derive_params.\n derived_parameters: <% $.derived_parameters.delete(concat($.role_name, 'Parameters')) %>\n on-success: get_host_cpus\n\n get_host_cpus:\n publish:\n host_cpus: <% $.role_derive_params.get('OvsDpdkCoreList', '') or $.role_derive_params.get('SriovHostCpusList', '') %>\n # SriovHostCpusList parameter is added temporarily for host_cpus and not needed in derived_parameters result.\n # SriovHostCpusList parameter is deleted in derived_parameters list and adding the updated role parameters\n # back in the derived_parameters.\n derived_parameters: <% $.derived_parameters + dict(concat($.role_name, 'Parameters') => $.role_derive_params.delete('SriovHostCpusList')) %>\n on-success: get_host_dpdk_combined_cpus\n\n get_host_dpdk_combined_cpus:\n publish:\n host_dpdk_combined_cpus: <% let(pmd_cpus => $.role_derive_params.get('OvsPmdCoreList', '')) -> switch($pmd_cpus => concat($pmd_cpus, ',', $.host_cpus), not $pmd_cpus => $.host_cpus) %>\n reserved_cpus: []\n on-success:\n - get_host_dpdk_combined_cpus_num_list: <% $.host_dpdk_combined_cpus %>\n - set_status_failed_get_host_dpdk_combined_cpus: <% not $.host_dpdk_combined_cpus %>\n\n get_host_dpdk_combined_cpus_num_list:\n action: tripleo.derive_params.convert_range_to_number_list\n input:\n range_list: <% $.host_dpdk_combined_cpus %>\n publish:\n 
host_dpdk_combined_cpus: <% task().result %>\n reserved_cpus: <% task().result.split(',') %>\n on-success: get_nova_cpus\n on-error: set_status_failed_get_host_dpdk_combined_cpus_num_list\n\n get_nova_cpus:\n publish:\n nova_cpus: <% let(reserved_cpus => $.reserved_cpus) -> $.cpus.select($.thread_siblings).flatten().where(not (str($) in $reserved_cpus)).join(',') %>\n on-success:\n - get_isol_cpus: <% $.nova_cpus %>\n - set_status_failed_get_nova_cpus: <% not $.nova_cpus %>\n\n # concatenates OvsPmdCoreList range format and NovaVcpuPinSet in range format. It may not be in perfect range format.\n # example: concatenates '12-15,19' and '16-18' ranges into '12-15,19,16-18'\n get_isol_cpus:\n publish:\n isol_cpus: <% let(pmd_cpus => $.role_derive_params.get('OvsPmdCoreList','')) -> switch($pmd_cpus => concat($pmd_cpus, ',', $.nova_cpus), not $pmd_cpus => $.nova_cpus) %>\n on-success: get_isol_cpus_num_list\n\n # Gets the isol_cpus in the number list\n # example: '12-15,19,16-18' into '12,13,14,15,16,17,18,19'\n get_isol_cpus_num_list:\n action: tripleo.derive_params.convert_range_to_number_list\n input:\n range_list: <% $.isol_cpus %>\n publish:\n isol_cpus: <% task().result %>\n on-success: get_nova_cpus_range_list\n on-error: set_status_failed_get_isol_cpus_num_list\n\n get_nova_cpus_range_list:\n action: tripleo.derive_params.convert_number_to_range_list\n input:\n num_list: <% $.nova_cpus %>\n publish:\n nova_cpus: <% task().result %>\n on-success: get_isol_cpus_range_list\n on-error: set_status_failed_get_nova_cpus_range_list\n\n # converts number format isol_cpus into range format\n # example: '12,13,14,15,16,17,18,19' into '12-19'\n get_isol_cpus_range_list:\n action: tripleo.derive_params.convert_number_to_range_list\n input:\n num_list: <% $.isol_cpus %>\n publish:\n isol_cpus: <% task().result %>\n on-success: get_host_mem\n on-error: set_status_failed_get_isol_cpus_range_list\n\n get_host_mem:\n publish:\n host_mem: <% $.user_inputs.get('host_mem_default', 4096) 
%>\n on-success: check_default_hugepage_supported\n\n check_default_hugepage_supported:\n publish:\n default_hugepage_supported: <% $.hw_data.get('inventory', {}).get('cpu', {}).get('flags', []).contains('pdpe1gb') %>\n on-success:\n - get_total_memory: <% $.default_hugepage_supported %>\n - set_status_failed_check_default_hugepage_supported: <% not $.default_hugepage_supported %>\n\n get_total_memory:\n publish:\n total_memory: <% $.hw_data.get('inventory', {}).get('memory', {}).get('physical_mb', 0) %>\n on-success:\n - get_hugepage_allocation_percentage: <% $.total_memory %>\n - set_status_failed_get_total_memory: <% not $.total_memory %>\n\n get_hugepage_allocation_percentage:\n publish:\n huge_page_allocation_percentage: <% $.user_inputs.get('huge_page_allocation_percentage', 0) %>\n on-success:\n - get_hugepages: <% isInteger($.huge_page_allocation_percentage) and $.huge_page_allocation_percentage > 0 %>\n - set_status_failed_get_hugepage_allocation_percentage_invalid: <% not isInteger($.huge_page_allocation_percentage) %>\n - set_status_failed_get_hugepage_allocation_percentage_not_provided: <% $.huge_page_allocation_percentage = 0 %>\n\n get_hugepages:\n publish:\n hugepages: <% let(huge_page_perc => float($.huge_page_allocation_percentage)/100)-> int((($.total_memory/1024)-4) * $huge_page_perc) %>\n on-success:\n - get_cpu_model: <% $.hugepages %>\n - set_status_failed_get_hugepages: <% not $.hugepages %>\n\n get_cpu_model:\n publish:\n intel_cpu_model: <% $.hw_data.get('inventory', {}).get('cpu', {}).get('model_name', '').startsWith('Intel') %>\n on-success: get_iommu_info\n\n get_iommu_info:\n publish:\n iommu_info: <% switch($.intel_cpu_model => 'intel_iommu=on iommu=pt', not $.intel_cpu_model => '') %>\n on-success: get_kernel_args\n\n get_kernel_args:\n publish:\n kernel_args: <% concat('default_hugepagesz=1GB hugepagesz=1G ', 'hugepages=', str($.hugepages), ' ', $.iommu_info, ' isolcpus=', $.isol_cpus) %>\n on-success: get_host_parameters\n\n 
get_host_parameters:\n publish:\n host_parameters: <% dict(concat($.role_name, 'Parameters') => dict('NovaVcpuPinSet' => $.get('nova_cpus', ''), 'NovaReservedHostMemory' => $.get('host_mem', ''), 'KernelArgs' => $.get('kernel_args', ''), 'IsolCpusList' => $.get('isol_cpus', ''))) %>\n\n set_status_failed_get_cpus:\n publish:\n status: FAILED\n message: \"Unable to determine CPU's on NUMA nodes\"\n on-success: fail\n\n set_status_failed_get_host_dpdk_combined_cpus:\n publish:\n status: FAILED\n message: 'Unable to combine host and dpdk cpus list'\n on-success: fail\n\n set_status_failed_get_host_dpdk_combined_cpus_num_list:\n publish:\n status: FAILED\n message: <% task(get_host_dpdk_combined_cpus_num_list).result %>\n on-success: fail\n\n set_status_failed_get_nova_cpus:\n publish:\n status: FAILED\n message: 'Unable to determine nova vcpu pin set'\n on-success: fail\n\n set_status_failed_get_nova_cpus_range_list:\n publish:\n status: FAILED\n message: <% task(get_nova_cpus_range_list).result %>\n on-success: fail\n\n set_status_failed_get_isol_cpus_num_list:\n publish:\n status: FAILED\n message: <% task(get_isol_cpus_num_list).result %>\n on-success: fail\n\n set_status_failed_get_isol_cpus_range_list:\n publish:\n status: FAILED\n message: <% task(get_isol_cpus_range_list).result %>\n on-success: fail\n\n set_status_failed_check_default_hugepage_supported:\n publish:\n status: FAILED\n message: 'default huge page size 1GB is not supported'\n on-success: fail\n\n set_status_failed_get_total_memory:\n publish:\n status: FAILED\n message: 'Unable to determine total memory'\n on-success: fail\n\n set_status_failed_get_hugepage_allocation_percentage_invalid:\n publish:\n status: FAILED\n message: <% \"huge_page_allocation_percentage user input '{0}' is invalid\".format($.huge_page_allocation_percentage) %>\n on-success: fail\n\n set_status_failed_get_hugepage_allocation_percentage_not_provided:\n publish:\n status: FAILED\n message: 'huge_page_allocation_percentage 
user input is not provided'\n on-success: fail\n\n set_status_failed_get_hugepages:\n publish:\n status: FAILED\n message: 'Unable to determine huge pages'\n on-success: fail\n\n\n hci_derive_params:\n description: Derive the deployment parameters for HCI\n input:\n - role_name\n - environment_parameters\n - heat_resource_tree\n - introspection_data\n - user_inputs\n - derived_parameters: {}\n\n output:\n derived_parameters: <% $.derived_parameters.mergeWith($.get('hci_parameters', {})) %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_hci_inputs:\n publish:\n hci_profile: <% $.user_inputs.get('hci_profile', '') %>\n hci_profile_config: <% $.user_inputs.get('hci_profile_config', {}) %>\n MB_PER_GB: 1024\n on-success:\n - get_average_guest_memory_size_in_mb: <% $.hci_profile and $.hci_profile_config.get($.hci_profile, {}) %>\n - set_failed_invalid_hci_profile: <% $.hci_profile and not $.hci_profile_config.get($.hci_profile, {}) %>\n # When no hci_profile is specified, the workflow terminates without deriving any HCI parameters.\n\n get_average_guest_memory_size_in_mb:\n publish:\n average_guest_memory_size_in_mb: <% $.hci_profile_config.get($.hci_profile, {}).get('average_guest_memory_size_in_mb', 0) %>\n on-success:\n - get_average_guest_cpu_utilization_percentage: <% isInteger($.average_guest_memory_size_in_mb) %>\n - set_failed_invalid_average_guest_memory_size_in_mb: <% not isInteger($.average_guest_memory_size_in_mb) %>\n\n get_average_guest_cpu_utilization_percentage:\n publish:\n average_guest_cpu_utilization_percentage: <% $.hci_profile_config.get($.hci_profile, {}).get('average_guest_cpu_utilization_percentage', 0) %>\n on-success:\n - get_gb_overhead_per_guest: <% isInteger($.average_guest_cpu_utilization_percentage) %>\n - set_failed_invalid_average_guest_cpu_utilization_percentage: <% not isInteger($.average_guest_cpu_utilization_percentage) %>\n\n get_gb_overhead_per_guest:\n publish:\n gb_overhead_per_guest: <% 
$.user_inputs.get('gb_overhead_per_guest', 0.5) %>\n on-success:\n - get_gb_per_osd: <% isNumber($.gb_overhead_per_guest) %>\n - set_failed_invalid_gb_overhead_per_guest: <% not isNumber($.gb_overhead_per_guest) %>\n\n get_gb_per_osd:\n publish:\n gb_per_osd: <% $.user_inputs.get('gb_per_osd', 3) %>\n on-success:\n - get_cores_per_osd: <% isNumber($.gb_per_osd) %>\n - set_failed_invalid_gb_per_osd: <% not isNumber($.gb_per_osd) %>\n\n get_cores_per_osd:\n publish:\n cores_per_osd: <% $.user_inputs.get('cores_per_osd', 1.0) %>\n on-success:\n - get_extra_configs: <% isNumber($.cores_per_osd) %>\n - set_failed_invalid_cores_per_osd: <% not isNumber($.cores_per_osd) %>\n\n get_extra_configs:\n publish:\n extra_config: <% $.environment_parameters.get('ExtraConfig', {}) %>\n role_extra_config: <% $.environment_parameters.get(concat($.role_name, 'ExtraConfig'), {}) %>\n role_env_params: <% $.environment_parameters.get(concat($.role_name, 'Parameters'), {}) %>\n role_derive_params: <% $.derived_parameters.get(concat($.role_name, 'Parameters'), {}) %>\n on-success: get_num_osds\n\n get_num_osds:\n publish:\n num_osds: <% $.heat_resource_tree.parameters.get('CephAnsibleDisksConfig', {}).get('default', {}).get('devices', []).count() %>\n on-success:\n - get_memory_mb: <% $.num_osds %>\n # If there's no CephAnsibleDisksConfig then look for OSD configuration in hiera data\n - get_num_osds_from_hiera: <% not $.num_osds %>\n\n get_num_osds_from_hiera:\n publish:\n num_osds: <% $.role_extra_config.get('ceph::profile::params::osds', $.extra_config.get('ceph::profile::params::osds', {})).keys().count() %>\n on-success:\n - get_memory_mb: <% $.num_osds %>\n - set_failed_no_osds: <% not $.num_osds %>\n\n get_memory_mb:\n publish:\n memory_mb: <% $.introspection_data.get('memory_mb', 0) %>\n on-success:\n - get_nova_vcpu_pin_set: <% $.memory_mb %>\n - set_failed_get_memory_mb: <% not $.memory_mb %>\n\n # Determine the number of CPU cores available to Nova and Ceph. 
If\n # NovaVcpuPinSet is defined then use the number of vCPUs in the set,\n # otherwise use all of the cores identified in the introspection data.\n\n get_nova_vcpu_pin_set:\n publish:\n # NovaVcpuPinSet can be defined in multiple locations, and it's\n # important to select the value in order of precedence:\n #\n # 1) User specified value for this role\n # 2) User specified default value for all roles\n # 3) Value derived by another derived parameters workflow\n nova_vcpu_pin_set: <% $.role_env_params.get('NovaVcpuPinSet', $.environment_parameters.get('NovaVcpuPinSet', $.role_derive_params.get('NovaVcpuPinSet', ''))) %>\n on-success:\n - get_nova_vcpu_count: <% $.nova_vcpu_pin_set %>\n - get_num_cores: <% not $.nova_vcpu_pin_set %>\n\n get_nova_vcpu_count:\n action: tripleo.derive_params.convert_range_to_number_list\n input:\n range_list: <% $.nova_vcpu_pin_set %>\n publish:\n num_cores: <% task().result.split(',').count() %>\n on-success: calculate_nova_parameters\n on-error: set_failed_get_nova_vcpu_count\n\n get_num_cores:\n publish:\n num_cores: <% $.introspection_data.get('cpus', 0) %>\n on-success:\n - calculate_nova_parameters: <% $.num_cores %>\n - set_failed_get_num_cores: <% not $.num_cores %>\n\n # HCI calculations are broken into multiple steps. This is necessary\n # because variables published by a Mistral task are not available\n # for use by that same task. 
Variables computed and published in a task\n # are only available in subsequent tasks.\n #\n # The HCI calculations compute two Nova parameters:\n # - reserved_host_memory\n # - cpu_allocation_ratio\n #\n # The reserved_host_memory calculation computes the amount of memory\n # that needs to be reserved for Ceph and the total amount of \"guest\n # overhead\" memory that is based on the anticipated number of guests.\n # Pseudo-code for the calculation (disregarding MB and GB units) is\n # as follows:\n #\n # ceph_memory = mem_per_osd * num_osds\n # nova_memory = total_memory - ceph_memory\n # num_guests = nova_memory /\n # (average_guest_memory_size + overhead_per_guest)\n # reserved_memory = ceph_memory + (num_guests * overhead_per_guest)\n #\n # The cpu_allocation_ratio calculation is similar in that it takes into\n # account the number of cores that must be reserved for Ceph.\n #\n # ceph_cores = cores_per_osd * num_osds\n # guest_cores = num_cores - ceph_cores\n # guest_vcpus = guest_cores / average_guest_utilization\n # cpu_allocation_ratio = guest_vcpus / num_cores\n\n calculate_nova_parameters:\n publish:\n avg_guest_util: <% $.average_guest_cpu_utilization_percentage / 100.0 %>\n avg_guest_size_gb: <% $.average_guest_memory_size_in_mb / float($.MB_PER_GB) %>\n memory_gb: <% $.memory_mb / float($.MB_PER_GB) %>\n ceph_mem_gb: <% $.gb_per_osd * $.num_osds %>\n nonceph_cores: <% $.num_cores - int($.cores_per_osd * $.num_osds) %>\n on-success: calc_step_2\n\n calc_step_2:\n publish:\n num_guests: <% int(($.memory_gb - $.ceph_mem_gb) / ($.avg_guest_size_gb + $.gb_overhead_per_guest)) %>\n guest_vcpus: <% $.nonceph_cores / $.avg_guest_util %>\n on-success: calc_step_3\n\n calc_step_3:\n publish:\n reserved_host_memory: <% $.MB_PER_GB * int($.ceph_mem_gb + ($.num_guests * $.gb_overhead_per_guest)) %>\n cpu_allocation_ratio: <% $.guest_vcpus / $.num_cores %>\n on-success: validate_results\n\n validate_results:\n publish:\n # Verify whether HCI is viable:\n # - At 
least 80% of the memory is reserved for Ceph and guest overhead\n # - At least half of the CPU cores must be available to Nova\n mem_ok: <% $.reserved_host_memory <= ($.memory_mb * 0.8) %>\n cpu_ok: <% $.cpu_allocation_ratio >= 0.5 %>\n on-success:\n - set_failed_insufficient_mem: <% not $.mem_ok %>\n - set_failed_insufficient_cpu: <% not $.cpu_ok %>\n - publish_hci_parameters: <% $.mem_ok and $.cpu_ok %>\n\n publish_hci_parameters:\n publish:\n # TODO(abishop): Update this when the cpu_allocation_ratio can be set\n # via a THT parameter (no such parameter currently exists). Until a\n # THT parameter exists, use hiera data to set the cpu_allocation_ratio.\n hci_parameters: <% dict(concat($.role_name, 'Parameters') => dict('NovaReservedHostMemory' => $.reserved_host_memory)) + dict(concat($.role_name, 'ExtraConfig') => dict('nova::cpu_allocation_ratio' => $.cpu_allocation_ratio)) %>\n\n set_failed_invalid_hci_profile:\n publish:\n message: \"'<% $.hci_profile %>' is not a valid HCI profile.\"\n on-success: fail\n\n set_failed_invalid_average_guest_memory_size_in_mb:\n publish:\n message: \"'<% $.average_guest_memory_size_in_mb %>' is not a valid average_guest_memory_size_in_mb value.\"\n on-success: fail\n\n set_failed_invalid_gb_overhead_per_guest:\n publish:\n message: \"'<% $.gb_overhead_per_guest %>' is not a valid gb_overhead_per_guest value.\"\n on-success: fail\n\n set_failed_invalid_gb_per_osd:\n publish:\n message: \"'<% $.gb_per_osd %>' is not a valid gb_per_osd value.\"\n on-success: fail\n\n set_failed_invalid_cores_per_osd:\n publish:\n message: \"'<% $.cores_per_osd %>' is not a valid cores_per_osd value.\"\n on-success: fail\n\n set_failed_invalid_average_guest_cpu_utilization_percentage:\n publish:\n message: \"'<% $.average_guest_cpu_utilization_percentage %>' is not a valid average_guest_cpu_utilization_percentage value.\"\n on-success: fail\n\n set_failed_no_osds:\n publish:\n message: \"No Ceph OSDs found in the overcloud definition 
('ceph::profile::params::osds').\"\n on-success: fail\n\n set_failed_get_memory_mb:\n publish:\n message: \"Unable to determine the amount of physical memory (no 'memory_mb' found in introspection_data).\"\n on-success: fail\n\n set_failed_get_nova_vcpu_count:\n publish:\n message: <% task(get_nova_vcpu_count).result %>\n on-success: fail\n\n set_failed_get_num_cores:\n publish:\n message: \"Unable to determine the number of CPU cores (no 'cpus' found in introspection_data).\"\n on-success: fail\n\n set_failed_insufficient_mem:\n publish:\n message: \"<% $.memory_mb %> MB is not enough memory to run hyperconverged.\"\n on-success: fail\n\n set_failed_insufficient_cpu:\n publish:\n message: \"<% $.num_cores %> CPU cores are not enough to run hyperconverged.\"\n on-success: fail\n", "name": "tripleo.derive_params_formulas.v1", "tags": [], "created_at": "2018-06-26 04:26:37", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "f2b50db0-7a86-4b76-a01f-84318350cb38"} > >2018-06-26 09:56:37,229 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 09:56:37,231 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '--- >version: '2.0' >name: tripleo.plan_management.v1 >description: TripleO Overcloud Deployment Workflows v1 > >workflows: > > create_default_deployment_plan: > description: > > This workflow exists to maintain backwards compatibility in pike. This > workflow will likely be removed in queens in favor of create_deployment_plan. 
> input: > - container > - queue_name: tripleo > - generate_passwords: true > tags: > - tripleo-common-managed > tasks: > call_create_deployment_plan: > workflow: tripleo.plan_management.v1.create_deployment_plan > on-success: set_status_success > on-error: call_create_deployment_plan_set_status_failed > input: > container: <% $.container %> > queue_name: <% $.queue_name %> > generate_passwords: <% $.generate_passwords %> > use_default_templates: true > > set_status_success: > on-success: notify_zaqar > publish: > status: SUCCESS > message: <% task(call_create_deployment_plan).result %> > > call_create_deployment_plan_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(call_create_deployment_plan).result %> > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.create_default_deployment_plan > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > create_deployment_plan: > description: > > This workflow provides the capability to create a deployment plan using > the default heat templates provided in a standard TripleO undercloud > deployment, heat templates contained in an external git repository, or a > swift container that already contains templates. > input: > - container > - source_url: null > - queue_name: tripleo > - generate_passwords: true > - use_default_templates: false > > tags: > - tripleo-common-managed > > tasks: > container_required_check: > description: > > If using the default templates or importing templates from a git > repository, a new container needs to be created. If using an existing > container containing templates, skip straight to create_plan. 
> on-success: > - verify_container_doesnt_exist: <% $.use_default_templates or $.source_url %> > - create_plan: <% $.use_default_templates = false and $.source_url = null %> > > verify_container_doesnt_exist: > action: swift.head_container container=<% $.container %> > on-success: notify_zaqar > on-error: create_container > publish: > status: FAILED > message: "Unable to create plan. The Swift container already exists" > > create_container: > action: tripleo.plan.create_container container=<% $.container %> > on-success: templates_source_check > on-error: create_container_set_status_failed > > cleanup_temporary_files: > action: tripleo.git.clean container=<% $.container %> > > templates_source_check: > on-success: > - upload_default_templates: <% $.use_default_templates = true %> > - clone_git_repo: <% $.source_url != null %> > > clone_git_repo: > action: tripleo.git.clone container=<% $.container %> url=<% $.source_url %> > on-success: upload_templates_directory > on-error: clone_git_repo_set_status_failed > > upload_templates_directory: > action: tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %> > on-success: create_plan > on-complete: cleanup_temporary_files > on-error: upload_templates_directory_set_status_failed > > upload_default_templates: > action: tripleo.templates.upload container=<% $.container %> > on-success: create_plan > on-error: upload_to_container_set_status_failed > > create_plan: > on-success: > - ensure_passwords_exist: <% $.generate_passwords = true %> > - add_root_stack_name: <% $.generate_passwords != true %> > > ensure_passwords_exist: > action: tripleo.parameters.generate_passwords container=<% $.container %> > on-success: add_root_stack_name > on-error: ensure_passwords_exist_set_status_failed > > add_root_stack_name: > action: tripleo.parameters.update > input: > container: <% $.container %> > parameters: > RootStackName: <% $.container %> > on-success: container_images_prepare > 
publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > container_images_prepare: > description: > > Populate all container image parameters with default values. > action: tripleo.container_images.prepare container=<% $.container %> > on-success: process_templates > on-error: container_images_prepare_set_status_failed > > process_templates: > action: tripleo.templates.process container=<% $.container %> > on-success: set_status_success > on-error: process_templates_set_status_failed > > set_status_success: > on-success: notify_zaqar > publish: > status: SUCCESS > message: 'Plan created.' > > create_container_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(create_container).result %> > > clone_git_repo_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(clone_git_repo).result %> > > upload_templates_directory_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(upload_templates_directory).result %> > > upload_to_container_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(upload_default_templates).result %> > > ensure_passwords_exist_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(ensure_passwords_exist).result %> > > process_templates_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(process_templates).result %> > > container_images_prepare_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(container_images_prepare).result %> > > notify_zaqar: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.create_deployment_plan > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - 
fail: <% $.get('status') = "FAILED" %> > > update_deployment_plan: > input: > - container > - source_url: null > - queue_name: tripleo > - generate_passwords: true > - plan_environment: null > tags: > - tripleo-common-managed > tasks: > templates_source_check: > on-success: > - update_plan: <% $.source_url = null %> > - clone_git_repo: <% $.source_url != null %> > > clone_git_repo: > action: tripleo.git.clone container=<% $.container %> url=<% $.source_url %> > on-success: upload_templates_directory > on-error: clone_git_repo_set_status_failed > > upload_templates_directory: > action: tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %> > on-success: create_swift_rings_backup_plan > on-complete: cleanup_temporary_files > on-error: upload_templates_directory_set_status_failed > > cleanup_temporary_files: > action: tripleo.git.clean container=<% $.container %> > > create_swift_rings_backup_plan: > workflow: tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan > on-success: update_plan > on-error: create_swift_rings_backup_plan_set_status_failed > input: > container: <% $.container %> > queue_name: <% $.queue_name %> > use_default_templates: true > > update_plan: > on-success: > - ensure_passwords_exist: <% $.generate_passwords = true %> > - container_images_prepare: <% $.generate_passwords != true %> > > ensure_passwords_exist: > action: tripleo.parameters.generate_passwords container=<% $.container %> > on-success: container_images_prepare > on-error: ensure_passwords_exist_set_status_failed > > container_images_prepare: > description: > > Populate all container image parameters with default values. 
> action: tripleo.container_images.prepare container=<% $.container %> > on-success: process_templates > on-error: container_images_prepare_set_status_failed > > process_templates: > action: tripleo.templates.process container=<% $.container %> > on-success: > - set_status_success: <% $.plan_environment = null %> > - upload_plan_environment: <% $.plan_environment != null %> > on-error: process_templates_set_status_failed > > upload_plan_environment: > action: tripleo.templates.upload_plan_environment container=<% $.container %> plan_environment=<% $.plan_environment %> > on-success: set_status_success > on-error: process_templates_set_status_failed > > set_status_success: > on-success: notify_zaqar > publish: > status: SUCCESS > message: 'Plan updated.' > > create_swift_rings_backup_plan_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(create_swift_rings_backup_plan).result %> > > clone_git_repo_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(clone_git_repo).result %> > > upload_templates_directory_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(upload_templates_directory).result %> > > process_templates_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(process_templates).result %> > > ensure_passwords_exist_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(ensure_passwords_exist).result %> > > container_images_prepare_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(container_images_prepare).result %> > > notify_zaqar: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.update_deployment_plan > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: 
> - fail: <% $.get('status') = "FAILED" %> > > delete_deployment_plan: > description: > > Deletes a plan by deleting the container matching plan_name. It will > not delete the plan if a stack exists with the same name. > > tags: > - tripleo-common-managed > > input: > - container: overcloud > - queue_name: tripleo > > tasks: > delete_plan: > action: tripleo.plan.delete container=<% $.container %> > on-complete: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > publish: > status: SUCCESS > message: <% task().result %> > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.delete_deployment_plan > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > > get_passwords: > description: Retrieves passwords for a given plan > input: > - container > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > verify_container_exists: > action: swift.head_container container=<% $.container %> > on-success: get_environment_passwords > on-error: verify_container_set_status_failed > > get_environment_passwords: > action: tripleo.parameters.get_passwords container=<% $.container %> > on-success: get_passwords_set_status_success > on-error: get_passwords_set_status_failed > > get_passwords_set_status_success: > on-success: notify_zaqar > publish: > status: SUCCESS > message: <% task(get_environment_passwords).result %> > > get_passwords_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(get_environment_passwords).result %> > > verify_container_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(verify_container_exists).result %> > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: 
tripleo.plan_management.v1.get_passwords > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > export_deployment_plan: > description: Creates an export tarball for a given plan > input: > - plan > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > export_plan: > action: tripleo.plan.export > input: > plan: <% $.plan %> > delete_after: 3600 > exports_container: "plan-exports" > on-success: create_tempurl > on-error: export_plan_set_status_failed > > create_tempurl: > action: tripleo.swift.tempurl > on-success: set_status_success > on-error: create_tempurl_set_status_failed > input: > container: "plan-exports" > obj: "<% $.plan %>.tar.gz" > valid: 3600 > > set_status_success: > on-success: notify_zaqar > publish: > status: SUCCESS > message: <% task(create_tempurl).result %> > tempurl: <% task(create_tempurl).result %> > > export_plan_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(export_plan).result %> > > create_tempurl_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(create_tempurl).result %> > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.export_deployment_plan > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > tempurl: <% $.get('tempurl', '') %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > get_deprecated_parameters: > description: Gets the list of deprecated parameters in the whole of the plan including nested stack > input: > - container: overcloud > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > get_flatten_data: > action: tripleo.parameters.get_flatten container=<% $.container %> > on-success: get_deprecated_params > on-error: 
set_status_failed_get_flatten_data > publish: > user_params: <% task().result.environment_parameters %> > plan_params: <% task().result.heat_resource_tree.parameters.keys() %> > parameter_groups: <% task().result.heat_resource_tree.resources.values().where( $.get('parameter_groups') ).select($.parameter_groups).flatten() %> > > get_deprecated_params: > on-success: check_if_user_param_has_deprecated > publish: > deprecated_params: <% $.parameter_groups.where($.get('label') = 'deprecated').select($.parameters).flatten().distinct() %> > > check_if_user_param_has_deprecated: > on-success: get_unused_params > publish: > deprecated_result: <% let(up => $.user_params) -> $.deprecated_params.select( dict('parameter' => $, 'deprecated' => true, 'user_defined' => $up.keys().contains($)) ) %> > > # Get the list of parameters which are defined by the user via an environment file's parameter_default but are not part of the plan definition > # It may be possible that the parameter will be used by a service, but the service is not part of the plan. > # In such cases, the parameter will be reported as unused; care should be taken to understand whether it is really unused or not. 
> get_unused_params: > on-success: send_message > publish: > unused_params: <% let(plan_params => $.plan_params) -> $.user_params.keys().where( not $plan_params.contains($) ) %> > > set_status_failed_get_flatten_data: > on-success: send_message > publish: > status: FAILED > message: <% task(get_flatten_data).result %> > > send_message: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.get_deprecated_parameters > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > deprecated: <% $.get('deprecated_result', []) %> > unused: <% $.get('unused_params', []) %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > publish_ui_logs_to_swift: > description: > > This workflow drains a zaqar queue and publishes its messages into a log > file in Swift. This workflow is called by a cron trigger. > > input: > - logging_queue_name: tripleo-ui-logging > - logging_container: tripleo-ui-logs > > tags: > - tripleo-common-managed > > tasks: > > # We're using a NoOp action to start the workflow. The recursive nature > # of the workflow means that Mistral will refuse to execute it because it > # doesn't know where to begin. 
> start: > on-success: get_messages > > get_messages: > action: zaqar.claim_messages > on-success: > - format_messages: <% task().result.len() > 0 %> > input: > queue_name: <% $.logging_queue_name %> > ttl: 60 > grace: 60 > publish: > status: SUCCESS > messages: <% task().result %> > message_ids: <% task().result.select($._id) %> > > format_messages: > action: tripleo.logging_to_swift.format_messages > on-success: upload_to_swift > input: > messages: <% $.messages %> > publish: > status: SUCCESS > formatted_messages: <% task().result %> > > upload_to_swift: > action: tripleo.logging_to_swift.publish_ui_log_to_swift > on-success: delete_messages > input: > logging_data: <% $.formatted_messages %> > logging_container: <% $.logging_container %> > publish: > status: SUCCESS > > delete_messages: > action: zaqar.delete_messages > on-success: get_messages > input: > queue_name: <% $.logging_queue_name %> > messages: <% $.message_ids %> > publish: > status: SUCCESS > > download_logs: > description: Creates a tarball with logging data > input: > - queue_name: tripleo > - logging_container: "tripleo-ui-logs" > - downloads_container: "tripleo-ui-logs-downloads" > - delete_after: 3600 > > tags: > - tripleo-common-managed > > tasks: > > publish_logs: > workflow: tripleo.plan_management.v1.publish_ui_logs_to_swift > on-success: prepare_log_download > on-error: publish_logs_set_status_failed > > prepare_log_download: > action: tripleo.logging_to_swift.prepare_log_download > input: > logging_container: <% $.logging_container %> > downloads_container: <% $.downloads_container %> > delete_after: <% $.delete_after %> > on-success: create_tempurl > on-error: download_logs_set_status_failed > publish: > filename: <% task().result %> > > create_tempurl: > action: tripleo.swift.tempurl > on-success: set_status_success > on-error: create_tempurl_set_status_failed > input: > container: <% $.downloads_container %> > obj: <% $.filename %> > valid: 3600 > publish: > tempurl: <% task().result 
%> > > set_status_success: > on-success: notify_zaqar > publish: > status: SUCCESS > message: <% task(create_tempurl).result %> > tempurl: <% task(create_tempurl).result %> > > publish_logs_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(publish_logs).result %> > > download_logs_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(prepare_log_download).result %> > > create_tempurl_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(create_tempurl).result %> > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.download_logs > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > tempurl: <% $.get('tempurl', '') %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > list_roles: > description: Retrieve the roles_data.yaml and return a usable object > > input: > - container: overcloud > - roles_data_file: 'roles_data.yaml' > - queue_name: tripleo > > output: > roles_data: <% $.roles_data %> > > tags: > - tripleo-common-managed > > tasks: > get_roles_data: > action: swift.get_object > input: > container: <% $.container %> > obj: <% $.roles_data_file %> > publish: > roles_data: <% yaml_parse(task().result.last()) %> > status: SUCCESS > on-success: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.list_roles > payload: > status: <% $.status %> > roles_data: <% $.get('roles_data', {}) %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > list_available_networks: > input: > - container > - queue_name: tripleo > > output: > 
available_networks: <% $.available_networks %> > > tags: > - tripleo-common-managed > > tasks: > get_network_file_names: > action: swift.get_container > input: > container: <% $.container %> > publish: > network_names: <% task().result[1].where($.name.startsWith('networks/')).where($.name.endsWith('.yaml')).name %> > on-success: get_network_files > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > get_network_files: > with-items: network_name in <% $.network_names %> > action: swift.get_object > on-success: transform_output > on-error: notify_zaqar > input: > container: <% $.container %> > obj: <% $.network_name %> > publish: > status: SUCCESS > available_yaml_networks: <% task().result.select($[1]) %> > publish-on-error: > status: FAILED > message: <% task().result %> > > transform_output: > publish: > status: SUCCESS > available_networks: <% yaml_parse($.available_yaml_networks.join("\n")) %> > publish-on-error: > status: FAILED > message: <% task().result %> > on-complete: notify_zaqar > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.list_available_networks > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > available_networks: <% $.get('available_networks', []) %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > list_networks: > input: > - container: 'overcloud' > - network_data_file: 'network_data.yaml' > - queue_name: tripleo > > output: > network_data: <% $.network_data %> > > tags: > - tripleo-common-managed > > tasks: > get_networks: > action: swift.get_object > input: > container: <% $.container %> > obj: <% $.network_data_file %> > on-success: notify_zaqar > publish: > network_data: <% yaml_parse(task().result.last()) %> > status: SUCCESS > message: <% task().result %> > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% 
task().result %> > > notify_zaqar: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.list_networks > payload: > status: <% $.status %> > network_data: <% $.get('network_data', {}) %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > validate_network_files: > description: Validate network files exist > input: > - container: overcloud > - network_data > - queue_name: tripleo > > output: > network_data: <% $.network_data %> > > tags: > - tripleo-common-managed > > tasks: > get_network_names: > publish: > network_names_lower: <% $.network_data.where($.containsKey('name_lower')).name_lower %> > network_names: <% $.network_data.where(not $.containsKey('name_lower')).name %> > on-success: validate_networks > > validate_networks: > with-items: network in <% $.network_names_lower.concat($.network_names) %> > action: swift.head_object > input: > container: <% $.container %> > obj: network/<% $.network.toLower() %>.yaml > publish: > status: SUCCESS > message: <% task().result %> > on-success: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > notify_zaqar: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.validate_network_files > payload: > status: <% $.status %> > message: <% $.message %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > validate_networks: > description: Validate network files were generated properly and exist > input: > - container: 'overcloud' > - network_data_file: 'network_data.yaml' > - queue_name: tripleo > > output: > network_data: <% $.network_data %> > > tags: > - tripleo-common-managed > > tasks: > get_network_data: > workflow: list_networks > input: > container: <% $.container %> > network_data_file: 
<% $.network_data_file %> > queue_name: <% $.queue_name %> > publish: > network_data: <% task().result.network_data %> > on-success: validate_networks > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: > notify_zaqar > > validate_networks: > workflow: validate_network_files > input: > container: <% $.container %> > network_data: <% $.network_data %> > queue_name: <% $.queue_name %> > publish: > status: SUCCESS > message: <% task().result %> > on-success: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.validate_networks > payload: > status: <% $.status %> > network_data: <% $.get('network_data', {}) %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > validate_roles: > description: Validate that roles data exists and is parsable > > input: > - container: overcloud > - roles_data_file: 'roles_data.yaml' > - queue_name: tripleo > > output: > roles_data: <% $.roles_data %> > > tags: > - tripleo-common-managed > > tasks: > get_roles_data: > workflow: list_roles > input: > container: <% $.container %> > roles_data_file: <% $.roles_data_file %> > queue_name: <% $.queue_name %> > publish: > roles_data: <% task().result.roles_data %> > status: SUCCESS > on-success: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: > notify_zaqar > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.validate_networks > payload: > status: <% $.status %> > roles_data: <% $.get('roles_data', '') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > _validate_networks_from_roles: > 
description: Internal workflow for validating that networks referenced by roles exist > > input: > - container: overcloud > - defined_networks > - networks_in_roles > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > validate_network_in_network_data: > publish: > networks_found: <% $.networks_in_roles.toSet().intersect($.defined_networks.toSet()) %> > networks_not_found: <% $.networks_in_roles.toSet().difference($.defined_networks.toSet()) %> > on-success: > - network_not_found: <% $.networks_not_found %> > - notify_zaqar: <% not $.networks_not_found %> > > network_not_found: > publish: > message: <% "Some networks in roles are not defined, {0}".format($.networks_not_found.join(', ')) %> > status: FAILED > on-success: notify_zaqar > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1._validate_networks_from_role > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > validate_roles_and_networks: > description: Validate that roles and network data are valid > > input: > - container: overcloud > - roles_data_file: 'roles_data.yaml' > - network_data_file: 'network_data.yaml' > - queue_name: tripleo > > output: > roles_data: <% $.roles_data %> > network_data: <% $.network_data %> > > tags: > - tripleo-common-managed > > tasks: > validate_network_data: > workflow: validate_networks > input: > container: <% $.container %> > network_data_file: <% $.network_data_file %> > queue_name: <% $.queue_name %> > publish: > network_data: <% task().result.network_data %> > on-success: validate_roles_data > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > validate_roles_data: > workflow: validate_roles > input: > container: <% $.container %> > roles_data_file: <% $.roles_data_file %> > queue_name: <% $.queue_name %> 
> publish: > roles_data: <% task().result.roles_data %> > role_networks_data: <% task().result.roles_data.networks %> > networks_in_roles: <% task().result.roles_data.networks.flatten().distinct() %> > on-success: validate_roles_and_networks > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > validate_roles_and_networks: > workflow: _validate_networks_from_roles > input: > container: <% $.container %> > defined_networks: <% $.network_data.name %> > networks_in_roles: <% $.networks_in_roles %> > queue_name: <% $.queue_name %> > publish: > status: SUCCESS > on-success: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result.message %> > on-error: notify_zaqar > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.validate_roles_and_networks > payload: > status: <% $.status %> > roles_data: <% $.get('roles_data', {}) %> > network_data: <% $.get('network_data', {}) %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > list_available_roles: > input: > - container: overcloud > - queue_name: tripleo > > output: > available_roles: <% $.available_roles %> > > tags: > - tripleo-common-managed > > tasks: > get_role_file_names: > action: swift.get_container > input: > container: <% $.container %> > publish: > role_names: <% task().result[1].where($.name.startsWith('roles/')).where($.name.endsWith('.yaml')).name %> > on-success: get_role_files > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > get_role_files: > with-items: role_name in <% $.role_names %> > action: swift.get_object > on-success: transform_output > on-error: notify_zaqar > input: > container: <% $.container %> > obj: <% $.role_name %> > publish: > status: SUCCESS > available_yaml_roles: <% task().result.select($[1]) %> > publish-on-error: 
> status: FAILED > message: <% task().result %> > > transform_output: > publish: > status: SUCCESS > available_roles: <% yaml_parse($.available_yaml_roles.join("\n")) %> > publish-on-error: > status: FAILED > message: <% task().result %> > on-complete: notify_zaqar > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.list_available_roles > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > available_roles: <% $.get('available_roles', []) %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > update_roles: > description: > > takes data in JSON format, validates its contents, and persists them in > roles_data.yaml; after a successful update, templates are regenerated. > input: > - container > - roles > - roles_data_file: 'roles_data.yaml' > - replace_all: false > - queue_name: tripleo > tags: > - tripleo-common-managed > tasks: > get_available_roles: > workflow: list_available_roles > input: > container: <% $.container %> > queue_name: <% $.queue_name %> > publish: > available_roles: <% task().result.available_roles %> > on-success: validate_input > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > validate_input: > description: > > validate the format of input (verify that each role in input has the > required attributes set. 
check README in roles directory in t-h-t), > validate that roles in input exist in roles directory in t-h-t > action: tripleo.plan.validate_roles > input: > container: <% $.container %> > roles: <% $.roles %> > available_roles: <% $.available_roles %> > on-success: get_network_data > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > get_network_data: > workflow: list_networks > input: > container: <% $.container %> > queue_name: <% $.queue_name %> > publish: > network_data: <% task().result.network_data %> > on-success: validate_network_names > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > validate_network_names: > description: > > validate that Network names assigned to Role exist in > network-data.yaml object in Swift container > workflow: _validate_networks_from_roles > input: > container: <% $.container %> > defined_networks: <% $.network_data.name %> > networks_in_roles: <% $.roles.networks.flatten().distinct() %> > queue_name: <% $.queue_name %> > on-success: get_current_roles > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result.message %> > > get_current_roles: > workflow: list_roles > input: > container: <% $.container %> > roles_data_file: <% $.roles_data_file %> > queue_name: <% $.queue_name %> > publish: > current_roles: <% task().result.roles_data %> > on-success: update_roles_data > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > update_roles_data: > description: > > update roles_data.yaml object in Swift with roles from workflow input > action: tripleo.plan.update_roles > input: > container: <% $.container %> > roles: <% $.roles %> > current_roles: <% $.current_roles %> > replace_all: <% $.replace_all %> > publish: > updated_roles_data: <% task().result.roles %> > on-success: update_roles_data_in_swift > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% 
task().result %> > > update_roles_data_in_swift: > description: > > update roles_data.yaml object in Swift with data from workflow input > action: swift.put_object > input: > container: <% $.container %> > obj: <% $.roles_data_file %> > contents: <% yaml_dump($.updated_roles_data) %> > on-success: regenerate_templates > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > regenerate_templates: > action: tripleo.templates.process container=<% $.container %> > on-success: get_updated_roles > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > get_updated_roles: > workflow: list_roles > input: > container: <% $.container %> > roles_data_file: <% $.roles_data_file %> > publish: > updated_roles: <% task().result.roles_data %> > status: SUCCESS > on-complete: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.roles.v1.update_roles > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > updated_roles: <% $.get('updated_roles', []) %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > select_roles: > description: > > takes a list of role names as input and populates roles_data.yaml in > container in Swift with respective roles from 'roles directory' > input: > - container > - role_names > - roles_data_file: 'roles_data.yaml' > - replace_all: true > - queue_name: tripleo > tags: > - tripleo-common-managed > tasks: > > get_available_roles: > workflow: list_available_roles > input: > container: <% $.container %> > queue_name: <% $.queue_name %> > publish: > available_roles: <% task().result.available_roles %> > on-success: get_current_roles > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > get_current_roles: > workflow: list_roles > 
input: > container: <% $.container %> > roles_data_file: <% $.roles_data_file %> > queue_name: <% $.queue_name %> > publish: > current_roles: <% task().result.roles_data %> > on-success: gather_roles > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > gather_roles: > description: > > for each role name from the input, check if it exists in > roles_data.yaml, if yes, use that role definition, if not, get the > role definition from roles directory. Use the gathered roles > definitions as input to updateRolesWorkflow - this ensures > configuration of the roles which are already in roles_data.yaml > will not get overridden by data from roles directory > action: tripleo.plan.gather_roles > input: > role_names: <% $.role_names %> > current_roles: <% $.current_roles %> > available_roles: <% $.available_roles %> > publish: > gathered_roles: <% task().result.gathered_roles %> > on-success: call_update_roles_workflow > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > call_update_roles_workflow: > workflow: update_roles > input: > container: <% $.container %> > roles: <% $.gathered_roles %> > roles_data_file: <% $.roles_data_file %> > replace_all: <% $.replace_all %> > queue_name: <% $.queue_name %> > on-complete: notify_zaqar > publish: > selected_roles: <% task().result.updated_roles %> > status: SUCCESS > publish-on-error: > status: FAILED > message: <% task().result %> > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.select_roles > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > selected_roles: <% $.get('selected_roles', []) %> > on-success: > - fail: <% $.get('status') = "FAILED" %> >' >2018-06-26 09:56:39,986 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 47190 >2018-06-26 09:56:40,028 DEBUG: RESP: [201] 
Content-Length: 47190 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:39 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.plan_management.v1\ndescription: TripleO Overcloud Deployment Workflows v1\n\nworkflows:\n\n create_default_deployment_plan:\n description: >\n This workflow exists to maintain backwards compatibility in pike. This\n workflow will likely be removed in queens in favor of create_deployment_plan.\n input:\n - container\n - queue_name: tripleo\n - generate_passwords: true\n tags:\n - tripleo-common-managed\n tasks:\n call_create_deployment_plan:\n workflow: tripleo.plan_management.v1.create_deployment_plan\n on-success: set_status_success\n on-error: call_create_deployment_plan_set_status_failed\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n generate_passwords: <% $.generate_passwords %>\n use_default_templates: true\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(call_create_deployment_plan).result %>\n\n call_create_deployment_plan_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(call_create_deployment_plan).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.create_default_deployment_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n create_deployment_plan:\n description: >\n This workflow provides the capability to create a deployment plan using\n the default heat templates provided in a standard TripleO undercloud\n deployment, heat templates contained in an external git repository, or a\n swift container that already contains templates.\n input:\n - container\n - source_url: null\n - queue_name: tripleo\n - generate_passwords: true\n - 
use_default_templates: false\n\n tags:\n - tripleo-common-managed\n\n tasks:\n container_required_check:\n description: >\n If using the default templates or importing templates from a git\n repository, a new container needs to be created. If using an existing\n container containing templates, skip straight to create_plan.\n on-success:\n - verify_container_doesnt_exist: <% $.use_default_templates or $.source_url %>\n - create_plan: <% $.use_default_templates = false and $.source_url = null %>\n\n verify_container_doesnt_exist:\n action: swift.head_container container=<% $.container %>\n on-success: notify_zaqar\n on-error: create_container\n publish:\n status: FAILED\n message: \"Unable to create plan. The Swift container already exists\"\n\n create_container:\n action: tripleo.plan.create_container container=<% $.container %>\n on-success: templates_source_check\n on-error: create_container_set_status_failed\n\n cleanup_temporary_files:\n action: tripleo.git.clean container=<% $.container %>\n\n templates_source_check:\n on-success:\n - upload_default_templates: <% $.use_default_templates = true %>\n - clone_git_repo: <% $.source_url != null %>\n\n clone_git_repo:\n action: tripleo.git.clone container=<% $.container %> url=<% $.source_url %>\n on-success: upload_templates_directory\n on-error: clone_git_repo_set_status_failed\n\n upload_templates_directory:\n action: tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %>\n on-success: create_plan\n on-complete: cleanup_temporary_files\n on-error: upload_templates_directory_set_status_failed\n\n upload_default_templates:\n action: tripleo.templates.upload container=<% $.container %>\n on-success: create_plan\n on-error: upload_to_container_set_status_failed\n\n create_plan:\n on-success:\n - ensure_passwords_exist: <% $.generate_passwords = true %>\n - add_root_stack_name: <% $.generate_passwords != true %>\n\n ensure_passwords_exist:\n action: 
tripleo.parameters.generate_passwords container=<% $.container %>\n on-success: add_root_stack_name\n on-error: ensure_passwords_exist_set_status_failed\n\n add_root_stack_name:\n action: tripleo.parameters.update\n input:\n container: <% $.container %>\n parameters:\n RootStackName: <% $.container %>\n on-success: container_images_prepare\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n container_images_prepare:\n description: >\n Populate all container image parameters with default values.\n action: tripleo.container_images.prepare container=<% $.container %>\n on-success: process_templates\n on-error: container_images_prepare_set_status_failed\n\n process_templates:\n action: tripleo.templates.process container=<% $.container %>\n on-success: set_status_success\n on-error: process_templates_set_status_failed\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: 'Plan created.'\n\n create_container_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_container).result %>\n\n clone_git_repo_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(clone_git_repo).result %>\n\n upload_templates_directory_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(upload_templates_directory).result %>\n\n upload_to_container_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(upload_default_templates).result %>\n\n ensure_passwords_exist_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(ensure_passwords_exist).result %>\n\n process_templates_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(process_templates).result %>\n\n container_images_prepare_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% 
task(container_images_prepare).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.create_deployment_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n update_deployment_plan:\n input:\n - container\n - source_url: null\n - queue_name: tripleo\n - generate_passwords: true\n - plan_environment: null\n tags:\n - tripleo-common-managed\n tasks:\n templates_source_check:\n on-success:\n - update_plan: <% $.source_url = null %>\n - clone_git_repo: <% $.source_url != null %>\n\n clone_git_repo:\n action: tripleo.git.clone container=<% $.container %> url=<% $.source_url %>\n on-success: upload_templates_directory\n on-error: clone_git_repo_set_status_failed\n\n upload_templates_directory:\n action: tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %>\n on-success: create_swift_rings_backup_plan\n on-complete: cleanup_temporary_files\n on-error: upload_templates_directory_set_status_failed\n\n cleanup_temporary_files:\n action: tripleo.git.clean container=<% $.container %>\n\n create_swift_rings_backup_plan:\n workflow: tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan\n on-success: update_plan\n on-error: create_swift_rings_backup_plan_set_status_failed\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n use_default_templates: true\n\n update_plan:\n on-success:\n - ensure_passwords_exist: <% $.generate_passwords = true %>\n - container_images_prepare: <% $.generate_passwords != true %>\n\n ensure_passwords_exist:\n action: tripleo.parameters.generate_passwords container=<% $.container %>\n on-success: container_images_prepare\n on-error: ensure_passwords_exist_set_status_failed\n\n container_images_prepare:\n description: >\n Populate 
all container image parameters with default values.\n action: tripleo.container_images.prepare container=<% $.container %>\n on-success: process_templates\n on-error: container_images_prepare_set_status_failed\n\n process_templates:\n action: tripleo.templates.process container=<% $.container %>\n on-success:\n - set_status_success: <% $.plan_environment = null %>\n - upload_plan_environment: <% $.plan_environment != null %>\n on-error: process_templates_set_status_failed\n\n upload_plan_environment:\n action: tripleo.templates.upload_plan_environment container=<% $.container %> plan_environment=<% $.plan_environment %>\n on-success: set_status_success\n on-error: process_templates_set_status_failed\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: 'Plan updated.'\n\n create_swift_rings_backup_plan_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_swift_rings_backup_plan).result %>\n\n clone_git_repo_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(clone_git_repo).result %>\n\n upload_templates_directory_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(upload_templates_directory).result %>\n\n process_templates_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(process_templates).result %>\n\n ensure_passwords_exist_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(ensure_passwords_exist).result %>\n\n container_images_prepare_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(container_images_prepare).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.update_deployment_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', 
'') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n delete_deployment_plan:\n description: >\n Deletes a plan by deleting the container matching plan_name. It will\n not delete the plan if a stack exists with the same name.\n\n tags:\n - tripleo-common-managed\n\n input:\n - container: overcloud\n - queue_name: tripleo\n\n tasks:\n delete_plan:\n action: tripleo.plan.delete container=<% $.container %>\n on-complete: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n publish:\n status: SUCCESS\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.delete_deployment_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n\n get_passwords:\n description: Retrieves passwords for a given plan\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n verify_container_exists:\n action: swift.head_container container=<% $.container %>\n on-success: get_environment_passwords\n on-error: verify_container_set_status_failed\n\n get_environment_passwords:\n action: tripleo.parameters.get_passwords container=<% $.container %>\n on-success: get_passwords_set_status_success\n on-error: get_passwords_set_status_failed\n\n get_passwords_set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(get_environment_passwords).result %>\n\n get_passwords_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(get_environment_passwords).result %>\n\n verify_container_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(verify_container_exists).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% 
$.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.get_passwords\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n export_deployment_plan:\n description: Creates an export tarball for a given plan\n input:\n - plan\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n export_plan:\n action: tripleo.plan.export\n input:\n plan: <% $.plan %>\n delete_after: 3600\n exports_container: \"plan-exports\"\n on-success: create_tempurl\n on-error: export_plan_set_status_failed\n\n create_tempurl:\n action: tripleo.swift.tempurl\n on-success: set_status_success\n on-error: create_tempurl_set_status_failed\n input:\n container: \"plan-exports\"\n obj: \"<% $.plan %>.tar.gz\"\n valid: 3600\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(create_tempurl).result %>\n tempurl: <% task(create_tempurl).result %>\n\n export_plan_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(export_plan).result %>\n\n create_tempurl_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_tempurl).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.export_deployment_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n tempurl: <% $.get('tempurl', '') %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n get_deprecated_parameters:\n description: Gets the list of deprecated parameters in the whole of the plan including nested stack\n input:\n - container: overcloud\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_flatten_data:\n action: tripleo.parameters.get_flatten container=<% $.container %>\n on-success: 
get_deprecated_params\n    on-error: set_status_failed_get_flatten_data\n    publish:\n    user_params: <% task().result.environment_parameters %>\n    plan_params: <% task().result.heat_resource_tree.parameters.keys() %>\n    parameter_groups: <% task().result.heat_resource_tree.resources.values().where( $.get('parameter_groups') ).select($.parameter_groups).flatten() %>\n\n    get_deprecated_params:\n    on-success: check_if_user_param_has_deprecated\n    publish:\n    deprecated_params: <% $.parameter_groups.where($.get('label') = 'deprecated').select($.parameters).flatten().distinct() %>\n\n    check_if_user_param_has_deprecated:\n    on-success: get_unused_params\n    publish:\n    deprecated_result: <% let(up => $.user_params) -> $.deprecated_params.select( dict('parameter' => $, 'deprecated' => true, 'user_defined' => $up.keys().contains($)) ) %>\n\n    # Get the list of parameters which are defined by the user via an environment file's parameter_default, but are not part of the plan definition\n    # It may be possible that the parameter will be used by a service, but the service is not part of the plan.\n    # In such cases, the parameter will be reported as unused; care should be taken to understand whether it is really unused or not.\n    get_unused_params:\n    on-success: send_message\n    publish:\n    unused_params: <% let(plan_params => $.plan_params) -> $.user_params.keys().where( not $plan_params.contains($) ) %>\n\n    set_status_failed_get_flatten_data:\n    on-success: send_message\n    publish:\n    status: FAILED\n    message: <% task(get_flatten_data).result %>\n\n    send_message:\n    action: zaqar.queue_post\n    input:\n    queue_name: <% $.queue_name %>\n    messages:\n    body:\n    type: tripleo.plan_management.v1.get_deprecated_parameters\n    payload:\n    status: <% $.get('status', 'SUCCESS') %>\n    message: <% $.get('message', '') %>\n    execution: <% execution() %>\n    deprecated: <% $.get('deprecated_result', []) %>\n    unused: <% $.get('unused_params', []) %>\n    on-success:\n    - fail: <% $.get('status') = \"FAILED\" %>\n\n 
publish_ui_logs_to_swift:\n    description: >\n    This workflow drains a zaqar queue and publishes its messages into a log\n    file in swift. This workflow is called by a cron trigger.\n\n    input:\n    - logging_queue_name: tripleo-ui-logging\n    - logging_container: tripleo-ui-logs\n\n    tags:\n    - tripleo-common-managed\n\n    tasks:\n\n    # We're using a NoOp action to start the workflow. The recursive nature\n    # of the workflow means that Mistral will refuse to execute it because it\n    # doesn't know where to begin.\n    start:\n    on-success: get_messages\n\n    get_messages:\n    action: zaqar.claim_messages\n    on-success:\n    - format_messages: <% task().result.len() > 0 %>\n    input:\n    queue_name: <% $.logging_queue_name %>\n    ttl: 60\n    grace: 60\n    publish:\n    status: SUCCESS\n    messages: <% task().result %>\n    message_ids: <% task().result.select($._id) %>\n\n    format_messages:\n    action: tripleo.logging_to_swift.format_messages\n    on-success: upload_to_swift\n    input:\n    messages: <% $.messages %>\n    publish:\n    status: SUCCESS\n    formatted_messages: <% task().result %>\n\n    upload_to_swift:\n    action: tripleo.logging_to_swift.publish_ui_log_to_swift\n    on-success: delete_messages\n    input:\n    logging_data: <% $.formatted_messages %>\n    logging_container: <% $.logging_container %>\n    publish:\n    status: SUCCESS\n\n    delete_messages:\n    action: zaqar.delete_messages\n    on-success: get_messages\n    input:\n    queue_name: <% $.logging_queue_name %>\n    messages: <% $.message_ids %>\n    publish:\n    status: SUCCESS\n\n    download_logs:\n    description: Creates a tarball with logging data\n    input:\n    - queue_name: tripleo\n    - logging_container: \"tripleo-ui-logs\"\n    - downloads_container: \"tripleo-ui-logs-downloads\"\n    - delete_after: 3600\n\n    tags:\n    - tripleo-common-managed\n\n    tasks:\n\n    publish_logs:\n    workflow: tripleo.plan_management.v1.publish_ui_logs_to_swift\n    on-success: prepare_log_download\n    on-error: publish_logs_set_status_failed\n\n    prepare_log_download:\n    action: tripleo.logging_to_swift.prepare_log_download\n    input:\n 
logging_container: <% $.logging_container %>\n downloads_container: <% $.downloads_container %>\n delete_after: <% $.delete_after %>\n on-success: create_tempurl\n on-error: download_logs_set_status_failed\n publish:\n filename: <% task().result %>\n\n create_tempurl:\n action: tripleo.swift.tempurl\n on-success: set_status_success\n on-error: create_tempurl_set_status_failed\n input:\n container: <% $.downloads_container %>\n obj: <% $.filename %>\n valid: 3600\n publish:\n tempurl: <% task().result %>\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(create_tempurl).result %>\n tempurl: <% task(create_tempurl).result %>\n\n publish_logs_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(publish_logs).result %>\n\n download_logs_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(prepare_log_download).result %>\n\n create_tempurl_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_tempurl).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.download_logs\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n tempurl: <% $.get('tempurl', '') %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n list_roles:\n description: Retrieve the roles_data.yaml and return a usable object\n\n input:\n - container: overcloud\n - roles_data_file: 'roles_data.yaml'\n - queue_name: tripleo\n\n output:\n roles_data: <% $.roles_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_roles_data:\n action: swift.get_object\n input:\n container: <% $.container %>\n obj: <% $.roles_data_file %>\n publish:\n roles_data: <% yaml_parse(task().result.last()) %>\n status: SUCCESS\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n 
message: <% task().result %>\n on-error: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.list_roles\n payload:\n status: <% $.status %>\n roles_data: <% $.get('roles_data', {}) %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n list_available_networks:\n input:\n - container\n - queue_name: tripleo\n\n output:\n available_networks: <% $.available_networks %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_network_file_names:\n action: swift.get_container\n input:\n container: <% $.container %>\n publish:\n network_names: <% task().result[1].where($.name.startsWith('networks/')).where($.name.endsWith('.yaml')).name %>\n on-success: get_network_files\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_network_files:\n with-items: network_name in <% $.network_names %>\n action: swift.get_object\n on-success: transform_output\n on-error: notify_zaqar\n input:\n container: <% $.container %>\n obj: <% $.network_name %>\n publish:\n status: SUCCESS\n available_yaml_networks: <% task().result.select($[1]) %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n transform_output:\n publish:\n status: SUCCESS\n available_networks: <% yaml_parse($.available_yaml_networks.join(\"\\n\")) %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-complete: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.list_available_networks\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n available_networks: <% $.get('available_networks', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n list_networks:\n input:\n - container: 
'overcloud'\n - network_data_file: 'network_data.yaml'\n - queue_name: tripleo\n\n output:\n network_data: <% $.network_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_networks:\n action: swift.get_object\n input:\n container: <% $.container %>\n obj: <% $.network_data_file %>\n on-success: notify_zaqar\n publish:\n network_data: <% yaml_parse(task().result.last()) %>\n status: SUCCESS\n message: <% task().result %>\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.list_networks\n payload:\n status: <% $.status %>\n network_data: <% $.get('network_data', {}) %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n validate_network_files:\n description: Validate network files exist\n input:\n - container: overcloud\n - network_data\n - queue_name: tripleo\n\n output:\n network_data: <% $.network_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_network_names:\n publish:\n network_names_lower: <% $.network_data.where($.containsKey('name_lower')).name_lower %>\n network_names: <% $.network_data.where(not $.containsKey('name_lower')).name %>\n on-success: validate_networks\n\n validate_networks:\n with-items: network in <% $.network_names_lower.concat($.network_names) %>\n action: swift.head_object\n input:\n container: <% $.container %>\n obj: network/<% $.network.toLower() %>.yaml\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.validate_network_files\n payload:\n status: <% $.status %>\n 
message: <% $.message %>\n    execution: <% execution() %>\n    on-success:\n    - fail: <% $.get('status') = \"FAILED\" %>\n\n    validate_networks:\n    description: Validate network files were generated properly and exist\n    input:\n    - container: 'overcloud'\n    - network_data_file: 'network_data.yaml'\n    - queue_name: tripleo\n\n    output:\n    network_data: <% $.network_data %>\n\n    tags:\n    - tripleo-common-managed\n\n    tasks:\n    get_network_data:\n    workflow: list_networks\n    input:\n    container: <% $.container %>\n    network_data_file: <% $.network_data_file %>\n    queue_name: <% $.queue_name %>\n    publish:\n    network_data: <% task().result.network_data %>\n    on-success: validate_networks\n    publish-on-error:\n    status: FAILED\n    message: <% task().result %>\n    on-error:\n    notify_zaqar\n\n    validate_networks:\n    workflow: validate_network_files\n    input:\n    container: <% $.container %>\n    network_data: <% $.network_data %>\n    queue_name: <% $.queue_name %>\n    publish:\n    status: SUCCESS\n    message: <% task().result %>\n    on-success: notify_zaqar\n    publish-on-error:\n    status: FAILED\n    message: <% task().result %>\n    on-error: notify_zaqar\n\n    notify_zaqar:\n    action: zaqar.queue_post\n    input:\n    queue_name: <% $.queue_name %>\n    messages:\n    body:\n    type: tripleo.plan_management.v1.validate_networks\n    payload:\n    status: <% $.status %>\n    network_data: <% $.get('network_data', {}) %>\n    message: <% $.get('message', '') %>\n    execution: <% execution() %>\n    on-success:\n    - fail: <% $.get('status') = \"FAILED\" %>\n\n    validate_roles:\n    description: Validate that roles data exists and is parsable\n\n    input:\n    - container: overcloud\n    - roles_data_file: 'roles_data.yaml'\n    - queue_name: tripleo\n\n    output:\n    roles_data: <% $.roles_data %>\n\n    tags:\n    - tripleo-common-managed\n\n    tasks:\n    get_roles_data:\n    workflow: list_roles\n    input:\n    container: <% $.container %>\n    roles_data_file: <% $.roles_data_file %>\n    queue_name: <% $.queue_name %>\n    publish:\n    roles_data: <% task().result.roles_data %>\n    status: SUCCESS\n    on-success: 
notify_zaqar\n    publish-on-error:\n    status: FAILED\n    message: <% task().result %>\n    on-error:\n    notify_zaqar\n\n    notify_zaqar:\n    action: zaqar.queue_post\n    input:\n    queue_name: <% $.queue_name %>\n    messages:\n    body:\n    type: tripleo.plan_management.v1.validate_networks\n    payload:\n    status: <% $.status %>\n    roles_data: <% $.get('roles_data', '') %>\n    message: <% $.get('message', '') %>\n    execution: <% execution() %>\n    on-success:\n    - fail: <% $.get('status') = \"FAILED\" %>\n\n    _validate_networks_from_roles:\n    description: Internal workflow for validating that a network referenced by a role exists\n\n    input:\n    - container: overcloud\n    - defined_networks\n    - networks_in_roles\n    - queue_name: tripleo\n\n    tags:\n    - tripleo-common-managed\n\n    tasks:\n    validate_network_in_network_data:\n    publish:\n    networks_found: <% $.networks_in_roles.toSet().intersect($.defined_networks.toSet()) %>\n    networks_not_found: <% $.networks_in_roles.toSet().difference($.defined_networks.toSet()) %>\n    on-success:\n    - network_not_found: <% $.networks_not_found %>\n    - notify_zaqar: <% not $.networks_not_found %>\n\n    network_not_found:\n    publish:\n    message: <% \"Some networks in roles are not defined, {0}\".format($.networks_not_found.join(', ')) %>\n    status: FAILED\n    on-success: notify_zaqar\n\n    notify_zaqar:\n    action: zaqar.queue_post\n    input:\n    queue_name: <% $.queue_name %>\n    messages:\n    body:\n    type: tripleo.plan_management.v1._validate_networks_from_role\n    payload:\n    status: <% $.get('status', 'SUCCESS') %>\n    message: <% $.get('message', '') %>\n    execution: <% execution() %>\n    on-success:\n    - fail: <% $.get('status') = \"FAILED\" %>\n\n    validate_roles_and_networks:\n    description: Validate that roles and network data are valid\n\n    input:\n    - container: overcloud\n    - roles_data_file: 'roles_data.yaml'\n    - network_data_file: 'network_data.yaml'\n    - queue_name: tripleo\n\n    output:\n    roles_data: <% $.roles_data %>\n    network_data: <% $.network_data %>\n\n    tags:\n    - tripleo-common-managed\n\n    tasks:\n 
validate_network_data:\n workflow: validate_networks\n input:\n container: <% $.container %>\n network_data_file: <% $.network_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n network_data: <% task().result.network_data %>\n on-success: validate_roles_data\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n validate_roles_data:\n workflow: validate_roles\n input:\n container: <% $.container %>\n roles_data_file: <% $.roles_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n roles_data: <% task().result.roles_data %>\n role_networks_data: <% task().result.roles_data.networks %>\n networks_in_roles: <% task().result.roles_data.networks.flatten().distinct() %>\n on-success: validate_roles_and_networks\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n validate_roles_and_networks:\n workflow: _validate_networks_from_roles\n input:\n container: <% $.container %>\n defined_networks: <% $.network_data.name %>\n networks_in_roles: <% $.networks_in_roles %>\n queue_name: <% $.queue_name %>\n publish:\n status: SUCCESS\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result.message %>\n on-error: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.validate_roles_and_networks\n payload:\n status: <% $.status %>\n roles_data: <% $.get('roles_data', {}) %>\n network_data: <% $.get('network_data', {}) %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n list_available_roles:\n input:\n - container: overcloud\n - queue_name: tripleo\n\n output:\n available_roles: <% $.available_roles %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_role_file_names:\n action: swift.get_container\n input:\n container: <% $.container %>\n publish:\n role_names: <% 
task().result[1].where($.name.startsWith('roles/')).where($.name.endsWith('.yaml')).name %>\n on-success: get_role_files\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_role_files:\n with-items: role_name in <% $.role_names %>\n action: swift.get_object\n on-success: transform_output\n on-error: notify_zaqar\n input:\n container: <% $.container %>\n obj: <% $.role_name %>\n publish:\n status: SUCCESS\n available_yaml_roles: <% task().result.select($[1]) %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n transform_output:\n publish:\n status: SUCCESS\n available_roles: <% yaml_parse($.available_yaml_roles.join(\"\\n\")) %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-complete: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.list_available_roles\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n available_roles: <% $.get('available_roles', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n update_roles:\n description: >\n takes data in json format validates its contents and persists them in\n roles_data.yaml, after successful update, templates are regenerated.\n input:\n - container\n - roles\n - roles_data_file: 'roles_data.yaml'\n - replace_all: false\n - queue_name: tripleo\n tags:\n - tripleo-common-managed\n tasks:\n get_available_roles:\n workflow: list_available_roles\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name%>\n publish:\n available_roles: <% task().result.available_roles %>\n on-success: validate_input\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n validate_input:\n description: >\n validate the format of input (verify that each role in input has the\n required attributes set. 
check README in roles directory in t-h-t),\n validate that roles in input exist in roles directory in t-h-t\n action: tripleo.plan.validate_roles\n input:\n container: <% $.container %>\n roles: <% $.roles %>\n available_roles: <% $.available_roles %>\n on-success: get_network_data\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_network_data:\n workflow: list_networks\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n publish:\n network_data: <% task().result.network_data %>\n on-success: validate_network_names\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n validate_network_names:\n description: >\n validate that Network names assigned to Role exist in\n network-data.yaml object in Swift container\n workflow: _validate_networks_from_roles\n input:\n container: <% $.container %>\n defined_networks: <% $.network_data.name %>\n networks_in_roles: <% $.roles.networks.flatten().distinct() %>\n queue_name: <% $.queue_name %>\n on-success: get_current_roles\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result.message %>\n\n get_current_roles:\n workflow: list_roles\n input:\n container: <% $.container %>\n roles_data_file: <% $.roles_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n current_roles: <% task().result.roles_data %>\n on-success: update_roles_data\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n update_roles_data:\n description: >\n update roles_data.yaml object in Swift with roles from workflow input\n action: tripleo.plan.update_roles\n input:\n container: <% $.container %>\n roles: <% $.roles %>\n current_roles: <% $.current_roles %>\n replace_all: <% $.replace_all %>\n publish:\n updated_roles_data: <% task().result.roles %>\n on-success: update_roles_data_in_swift\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% 
task().result %>\n\n update_roles_data_in_swift:\n description: >\n update roles_data.yaml object in Swift with data from workflow input\n action: swift.put_object\n input:\n container: <% $.container %>\n obj: <% $.roles_data_file %>\n contents: <% yaml_dump($.updated_roles_data) %>\n on-success: regenerate_templates\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n regenerate_templates:\n action: tripleo.templates.process container=<% $.container %>\n on-success: get_updated_roles\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_updated_roles:\n workflow: list_roles\n input:\n container: <% $.container %>\n roles_data_file: <% $.roles_data_file %>\n publish:\n updated_roles: <% task().result.roles_data %>\n status: SUCCESS\n on-complete: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.roles.v1.update_roles\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n updated_roles: <% $.get('updated_roles', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n select_roles:\n description: >\n takes a list of role names as input and populates roles_data.yaml in\n container in Swift with respective roles from 'roles directory'\n input:\n - container\n - role_names\n - roles_data_file: 'roles_data.yaml'\n - replace_all: true\n - queue_name: tripleo\n tags:\n - tripleo-common-managed\n tasks:\n\n get_available_roles:\n workflow: list_available_roles\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n publish:\n available_roles: <% task().result.available_roles %>\n on-success: get_current_roles\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_current_roles:\n workflow: list_roles\n 
input:\n container: <% $.container %>\n roles_data_file: <% $.roles_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n current_roles: <% task().result.roles_data %>\n on-success: gather_roles\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n gather_roles:\n description: >\n for each role name from the input, check if it exists in\n roles_data.yaml, if yes, use that role definition, if not, get the\n role definition from roles directory. Use the gathered roles\n definitions as input to updateRolesWorkflow - this ensures\n configuration of the roles which are already in roles_data.yaml\n will not get overridden by data from roles directory\n action: tripleo.plan.gather_roles\n input:\n role_names: <% $.role_names %>\n current_roles: <% $.current_roles %>\n available_roles: <% $.available_roles %>\n publish:\n gathered_roles: <% task().result.gathered_roles %>\n on-success: call_update_roles_workflow\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n call_update_roles_workflow:\n workflow: update_roles\n input:\n container: <% $.container %>\n roles: <% $.gathered_roles %>\n roles_data_file: <% $.roles_data_file %>\n replace_all: <% $.replace_all %>\n queue_name: <% $.queue_name %>\n on-complete: notify_zaqar\n publish:\n selected_roles: <% task().result.updated_roles %>\n status: SUCCESS\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.select_roles\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n selected_roles: <% $.get('selected_roles', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.plan_management.v1", "tags": [], "created_at": "2018-06-26 04:26:39", "scope": "private", "project_id": 
"13835fbb8e0947a9b3fa174b9a22cdb9", "id": "b46d9615-d893-44e9-ba9b-c30925008d15"} > >2018-06-26 09:56:40,028 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 09:56:40,030 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '--- >version: '2.0' >name: tripleo.support.v1 >description: TripleO support workflows > >workflows: > > collect_logs: > description: > > This workflow runs sosreport on the servers where their names match the > provided server_name input. The logs are stored in the provided sos_dir. > input: > - server_name > - sos_dir: /var/tmp/tripleo-sos > - sos_options: boot,cluster,hardware,kernel,memory,nfs,openstack,packagemanager,performance,services,storage,system,webserver,virt > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > collect_logs_on_servers: > workflow: tripleo.deployment.v1.deploy_on_servers > on-success: send_message > on-error: set_collect_logs_on_servers_failed > input: > server_name: <% $.server_name %> > config_name: 'run_sosreport' > config: | > #!/bin/bash > mkdir -p <% $.sos_dir %> > sosreport --batch \ > -p <% $.sos_options %> \ > --tmp-dir <% $.sos_dir %> > > set_collect_logs_on_servers_failed: > on-complete: > - send_message > publish: > type: tripleo.deployment.v1.fetch_logs > status: FAILED > message: <% task().result %> > > # status messaging > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: <% $.get('type', 'tripleo.support.v1.collect_logs') %> > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = 'FAILED' %> > > upload_logs: > description: > > This workflow uploads the sosreport files stored in the 
provided sos_dir > on the provided host (server_uuid) to a swift container on the undercloud > input: > - server_uuid > - server_name > - container > - sos_dir: /var/tmp/tripleo-sos > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > # actions > get_swift_information: > action: tripleo.swift.swift_information > on-success: do_log_upload > on-error: set_get_swift_information_failed > input: > container: <% $.container %> > publish: > container_url: <% task().result.container_url %> > auth_key: <% task().result.auth_key %> > > set_get_swift_information_failed: > on-complete: > - send_message > publish: > status: FAILED > message: <% task(get_swift_information).result %> > > do_log_upload: > action: tripleo.deployment.config > on-success: send_message > on-error: set_do_log_upload_failed > input: > server_id: <% $.server_uuid %> > name: "upload_logs" > config: | > #!/bin/bash > CONTAINER_URL="<% $.container_url %>" > TOKEN="<% $.auth_key %>" > SOS_DIR="<% $.sos_dir %>" > for FILE in $(find $SOS_DIR -type f); do > FILENAME=$(basename $FILE) > curl -X PUT -i -H "X-Auth-Token: $TOKEN" -T $FILE $CONTAINER_URL/$FILENAME > if [ $? -eq 0 ]; then > rm -f $FILE > fi > done > group: "script" > publish: > message: "Uploaded logs from <% $.server_name %>" > > set_do_log_upload_failed: > on-complete: > - send_message > publish: > status: FAILED > message: <% task(do_log_upload).result %> > > # status messaging > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: <% $.get('type', 'tripleo.support.v1.upload_logs') %> > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = 'FAILED' %> > > create_container: > description: > > This workflow checks if the container exists and creates it > if it does not exist. 
> input: > - container > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > check_container: > action: swift.head_container container=<% $.container %> > on-success: send_message > on-error: create_container > > create_container: > action: swift.put_container > input: > container: <% $.container %> > headers: > x-container-meta-usage-tripleo: support > on-success: send_message > on-error: set_create_container_failed > > set_create_container_failed: > on-complete: > - send_message > publish: > type: tripleo.support.v1.create_container.create_container > status: FAILED > message: <% task(create_container).result %> > > # status messaging > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: <% $.get('type', 'tripleo.support.v1.create_container') %> > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = 'FAILED' %> > > delete_container: > description: > > This workflow deletes all the objects in a provided swift container and > then removes the container itself from the undercloud. 
> input: > - container > - concurrency: 5 > - timeout: 900 > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > # actions > check_container: > action: swift.head_container container=<% $.container %> > on-success: list_objects > on-error: set_check_container_failure > > set_check_container_failure: > on-complete: send_message > publish: > status: FAILED > type: tripleo.support.v1.delete_container.check_container > message: <% task(check_container).result %> > > list_objects: > action: swift.get_container container=<% $.container %> > on-success: delete_objects > on-error: set_list_objects_failure > publish: > log_objects: <% task().result[1] %> > > set_list_objects_failure: > on-complete: send_message > publish: > status: FAILED > type: tripleo.support.v1.delete_container.list_objects > message: <% task(list_objects).result %> > > delete_objects: > action: swift.delete_object > concurrency: <% $.concurrency %> > timeout: <% $.timeout %> > with-items: object in <% $.log_objects %> > input: > container: <% $.container %> > obj: <% $.object.name %> > on-success: remove_container > on-error: set_delete_objects_failure > > set_delete_objects_failure: > on-complete: send_message > publish: > status: FAILED > type: tripleo.support.v1.delete_container.delete_objects > message: <% task(delete_objects).result %> > > remove_container: > action: swift.delete_container container=<% $.container %> > on-success: send_message > on-error: set_remove_container_failure > > set_remove_container_failure: > on-complete: send_message > publish: > status: FAILED > type: tripleo.support.v1.delete_container.remove_container > message: <% task(remove_container).result %> > > # status messaging > send_message: > action: zaqar.queue_post > wait-before: 5 > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: <% $.get('type', 'tripleo.support.v1.delete_container') %> > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% 
$.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = 'FAILED' %> > > fetch_logs: > description: > > This workflow creates a container on the undercloud, executes the log > collection on the servers whose names match the provided server_name, and > executes the log upload process on all the servers to the container on > the undercloud. > input: > - server_name > - container > - concurrency: 5 > - timeout: 1800 > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > # actions > create_container: > workflow: tripleo.support.v1.create_container > on-success: get_servers_matching > on-error: set_create_container_failed > input: > container: <% $.container %> > queue_name: <% $.queue_name %> > > set_create_container_failed: > on-complete: send_message > publish: > type: tripleo.support.v1.fetch_logs.create_container > status: FAILED > message: <% task(create_container).result %> > > get_servers_matching: > action: nova.servers_list > on-success: collect_logs_on_servers > publish: > servers_with_name: <% task().result._info.where($.name.indexOf(execution().input.server_name) > -1) %> > > collect_logs_on_servers: > workflow: tripleo.support.v1.collect_logs > timeout: <% $.timeout %> > on-success: upload_logs_on_servers > on-error: set_collect_logs_on_servers_failed > input: > server_name: <% $.server_name %> > queue_name: <% $.queue_name %> > > set_collect_logs_on_servers_failed: > on-complete: send_message > publish: > type: tripleo.support.v1.fetch_logs.collect_logs_on_servers > status: FAILED > message: <% task(collect_logs_on_servers).result %> > > upload_logs_on_servers: > on-success: send_message > on-error: set_upload_logs_on_servers_failed > with-items: server in <% $.servers_with_name %> > concurrency: <% $.concurrency %> > workflow: tripleo.support.v1.upload_logs > input: > server_name: <% $.server.name %> > server_uuid: <% $.server.id %> > container: <% $.container %> > queue_name: <% $.queue_name 
%> > > set_upload_logs_on_servers_failed: > on-complete: send_message > publish: > type: tripleo.support.v1.fetch_logs.upload_logs > status: FAILED > message: <% task(upload_logs_on_servers).result %> > > # status messaging > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: <% $.get('type', 'tripleo.support.v1.fetch_logs') %> > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = 'FAILED' %> >' >2018-06-26 09:56:40,885 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 12000 >2018-06-26 09:56:40,886 DEBUG: RESP: [201] Content-Length: 12000 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:40 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.support.v1\ndescription: TripleO support workflows\n\nworkflows:\n\n collect_logs:\n description: >\n This workflow runs sosreport on the servers where their names match the\n provided server_name input. 
The logs are stored in the provided sos_dir.\n input:\n - server_name\n - sos_dir: /var/tmp/tripleo-sos\n - sos_options: boot,cluster,hardware,kernel,memory,nfs,openstack,packagemanager,performance,services,storage,system,webserver,virt\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n collect_logs_on_servers:\n workflow: tripleo.deployment.v1.deploy_on_servers\n on-success: send_message\n on-error: set_collect_logs_on_servers_failed\n input:\n server_name: <% $.server_name %>\n config_name: 'run_sosreport'\n config: |\n #!/bin/bash\n mkdir -p <% $.sos_dir %>\n sosreport --batch \\\n -p <% $.sos_options %> \\\n --tmp-dir <% $.sos_dir %>\n\n set_collect_logs_on_servers_failed:\n on-complete:\n - send_message\n publish:\n type: tripleo.deployment.v1.fetch_logs\n status: FAILED\n message: <% task().result %>\n\n # status messaging\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.collect_logs') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n\n upload_logs:\n description: >\n This workflow uploads the sosreport files stored in the provided sos_dir\n on the provided host (server_uuid) to a swift container on the undercloud\n input:\n - server_uuid\n - server_name\n - container\n - sos_dir: /var/tmp/tripleo-sos\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n # actions\n get_swift_information:\n action: tripleo.swift.swift_information\n on-success: do_log_upload\n on-error: set_get_swift_information_failed\n input:\n container: <% $.container %>\n publish:\n container_url: <% task().result.container_url %>\n auth_key: <% task().result.auth_key %>\n\n set_get_swift_information_failed:\n on-complete:\n - send_message\n publish:\n status: FAILED\n message: <% 
task(get_swift_information).result %>\n\n do_log_upload:\n action: tripleo.deployment.config\n on-success: send_message\n on-error: set_do_log_upload_failed\n input:\n server_id: <% $.server_uuid %>\n name: \"upload_logs\"\n config: |\n #!/bin/bash\n CONTAINER_URL=\"<% $.container_url %>\"\n TOKEN=\"<% $.auth_key %>\"\n SOS_DIR=\"<% $.sos_dir %>\"\n for FILE in $(find $SOS_DIR -type f); do\n FILENAME=$(basename $FILE)\n curl -X PUT -i -H \"X-Auth-Token: $TOKEN\" -T $FILE $CONTAINER_URL/$FILENAME\n if [ $? -eq 0 ]; then\n rm -f $FILE\n fi\n done\n group: \"script\"\n publish:\n message: \"Uploaded logs from <% $.server_name %>\"\n\n set_do_log_upload_failed:\n on-complete:\n - send_message\n publish:\n status: FAILED\n message: <% task(do_log_upload).result %>\n\n # status messaging\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.upload_logs') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n\n create_container:\n description: >\n This workflow checks if the container exists and creates it\n if it does not exist.\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_container:\n action: swift.head_container container=<% $.container %>\n on-success: send_message\n on-error: create_container\n\n create_container:\n action: swift.put_container\n input:\n container: <% $.container %>\n headers:\n x-container-meta-usage-tripleo: support\n on-success: send_message\n on-error: set_create_container_failed\n\n set_create_container_failed:\n on-complete:\n - send_message\n publish:\n type: tripleo.support.v1.create_container.create_container\n status: FAILED\n message: <% task(create_container).result %>\n\n # status messaging\n send_message:\n action: 
zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.create_container') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n\n delete_container:\n description: >\n This workflow deletes all the objects in a provided swift container and\n then removes the container itself from the undercloud.\n input:\n - container\n - concurrency: 5\n - timeout: 900\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n # actions\n check_container:\n action: swift.head_container container=<% $.container %>\n on-success: list_objects\n on-error: set_check_container_failure\n\n set_check_container_failure:\n on-complete: send_message\n publish:\n status: FAILED\n type: tripleo.support.v1.delete_container.check_container\n message: <% task(check_container).result %>\n\n list_objects:\n action: swift.get_container container=<% $.container %>\n on-success: delete_objects\n on-error: set_list_objects_failure\n publish:\n log_objects: <% task().result[1] %>\n\n set_list_objects_failure:\n on-complete: send_message\n publish:\n status: FAILED\n type: tripleo.support.v1.delete_container.list_objects\n message: <% task(list_objects).result %>\n\n delete_objects:\n action: swift.delete_object\n concurrency: <% $.concurrency %>\n timeout: <% $.timeout %>\n with-items: object in <% $.log_objects %>\n input:\n container: <% $.container %>\n obj: <% $.object.name %>\n on-success: remove_container\n on-error: set_delete_objects_failure\n\n set_delete_objects_failure:\n on-complete: send_message\n publish:\n status: FAILED\n type: tripleo.support.v1.delete_container.delete_objects\n message: <% task(delete_objects).result %>\n\n remove_container:\n action: swift.delete_container container=<% $.container %>\n on-success: send_message\n on-error: 
set_remove_container_failure\n\n set_remove_container_failure:\n on-complete: send_message\n publish:\n status: FAILED\n type: tripleo.support.v1.delete_container.remove_container\n message: <% task(remove_container).result %>\n\n # status messaging\n send_message:\n action: zaqar.queue_post\n wait-before: 5\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.delete_container') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n\n fetch_logs:\n description: >\n This workflow creates a container on the undercloud, executes the log\n collection on the servers whose names match the provided server_name, and\n executes the log upload process on all the servers to the container on\n the undercloud.\n input:\n - server_name\n - container\n - concurrency: 5\n - timeout: 1800\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n # actions\n create_container:\n workflow: tripleo.support.v1.create_container\n on-success: get_servers_matching\n on-error: set_create_container_failed\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n\n set_create_container_failed:\n on-complete: send_message\n publish:\n type: tripleo.support.v1.fetch_logs.create_container\n status: FAILED\n message: <% task(create_container).result %>\n\n get_servers_matching:\n action: nova.servers_list\n on-success: collect_logs_on_servers\n publish:\n servers_with_name: <% task().result._info.where($.name.indexOf(execution().input.server_name) > -1) %>\n\n collect_logs_on_servers:\n workflow: tripleo.support.v1.collect_logs\n timeout: <% $.timeout %>\n on-success: upload_logs_on_servers\n on-error: set_collect_logs_on_servers_failed\n input:\n server_name: <% $.server_name %>\n queue_name: <% $.queue_name %>\n\n set_collect_logs_on_servers_failed:\n 
on-complete: send_message\n publish:\n type: tripleo.support.v1.fetch_logs.collect_logs_on_servers\n status: FAILED\n message: <% task(collect_logs_on_servers).result %>\n\n upload_logs_on_servers:\n on-success: send_message\n on-error: set_upload_logs_on_servers_failed\n with-items: server in <% $.servers_with_name %>\n concurrency: <% $.concurrency %>\n workflow: tripleo.support.v1.upload_logs\n input:\n server_name: <% $.server.name %>\n server_uuid: <% $.server.id %>\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n\n set_upload_logs_on_servers_failed:\n on-complete: send_message\n publish:\n type: tripleo.support.v1.fetch_logs.upload_logs\n status: FAILED\n message: <% task(upload_logs_on_servers).result %>\n\n # status messaging\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.fetch_logs') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n", "name": "tripleo.support.v1", "tags": [], "created_at": "2018-06-26 04:26:40", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "915974cf-0e50-4424-b010-ad2cb987e99b"} > >2018-06-26 09:56:40,887 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 09:56:40,888 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '--- >version: '2.0' >name: tripleo.deployment.v1 >description: TripleO deployment workflows > >workflows: > > deploy_on_server: > > input: > - server_uuid > - server_name > - config > - config_name > - group > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > deploy_config: > action: 
tripleo.deployment.config
>        on-complete: send_message
>        input:
>          server_id: <% $.server_uuid %>
>          name: <% $.config_name %>
>          config: <% $.config %>
>          group: <% $.group %>
>        publish:
>          stdout: <% task().result.deploy_stdout %>
>          stderr: <% task().result.deploy_stderr %>
>          status_code: <% task().result.deploy_status_code %>
>        publish-on-error:
>          status: FAILED
>          message: <% task().result %>
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.deployment.v1.deploy_on_server
>              payload:
>                status: <% $.get("status", "SUCCESS") %>
>                message: <% $.get("message", "") %>
>                server_uuid: <% $.server_uuid %>
>                server_name: <% $.server_name %>
>                config_name: <% $.config_name %>
>                status_code: <% $.get("status_code", "") %>
>                stdout: <% $.get("stdout", "") %>
>                stderr: <% $.get("stderr", "") %>
>                execution: <% execution() %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>  deploy_on_servers:
>
>    input:
>      - server_name
>      - config_name
>      - config
>      - group: script
>      - queue_name: tripleo
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      check_if_all_servers:
>        on-success:
>          - get_servers_matching: <% $.server_name != "all" %>
>          - get_all_servers: <% $.server_name = "all" %>
>
>      get_servers_matching:
>        action: nova.servers_list
>        on-success: deploy_on_servers
>        publish:
>          servers_with_name: <% task().result._info.where($.name.indexOf(execution().input.server_name) > -1) %>
>
>      get_all_servers:
>        action: nova.servers_list
>        on-success: deploy_on_servers
>        publish:
>          servers_with_name: <% task().result._info %>
>
>      deploy_on_servers:
>        on-success: send_success_message
>        on-error: send_failed_message
>        with-items: server in <% $.servers_with_name %>
>        workflow: tripleo.deployment.v1.deploy_on_server
>        input:
>          server_name: <% $.server.name %>
>          server_uuid: <% $.server.id %>
>          config: <% $.config %>
>          config_name: <% $.config_name %>
>          group: <% $.group %>
>          queue_name: <% $.queue_name %>
>
>      send_success_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.deployment.v1.deploy_on_servers
>              payload:
>                status: SUCCESS
>                execution: <% execution() %>
>
>      send_failed_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.deployment.v1.deploy_on_servers
>              payload:
>                status: FAILED
>                message: <% task(deploy_on_servers).result %>
>                execution: <% execution() %>
>        on-success: fail
>
>  deploy_plan:
>
>    description: >
>      Deploy the overcloud for a plan.
>
>    input:
>      - container
>      - run_validations: False
>      - timeout: 240
>      - skip_deploy_identifier: False
>      - queue_name: tripleo
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      add_validation_ssh_key:
>        workflow: tripleo.validations.v1.add_validation_ssh_key_parameter
>        input:
>          container: <% $.container %>
>          queue_name: <% $.queue_name %>
>        on-complete:
>          - run_validations: <% $.run_validations %>
>          - create_swift_rings_backup_plan: <% not $.run_validations %>
>
>      run_validations:
>        workflow: tripleo.validations.v1.run_groups
>        input:
>          group_names:
>            - 'pre-deployment'
>          plan: <% $.container %>
>          queue_name: <% $.queue_name %>
>        on-success: create_swift_rings_backup_plan
>        on-error: set_validations_failed
>
>      set_validations_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(run_validations).result %>
>
>      create_swift_rings_backup_plan:
>        workflow: tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan
>        on-success: cell_v2_discover_hosts
>        on-error: create_swift_rings_backup_plan_set_status_failed
>        input:
>          container: <% $.container %>
>          queue_name: <% $.queue_name %>
>          use_default_templates: true
>
>      cell_v2_discover_hosts:
>        on-success: deploy
>        on-error: cell_v2_discover_hosts_failed
>        action: tripleo.baremetal.cell_v2_discover_hosts
>
>      cell_v2_discover_hosts_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(cell_v2_discover_hosts).result %>
>
>      deploy:
>        action: tripleo.deployment.deploy
>        input:
>          timeout: <% $.timeout %>
>          container: <% $.container %>
>          skip_deploy_identifier: <% $.skip_deploy_identifier %>
>        on-success: send_message
>        on-error: set_deployment_failed
>
>      create_swift_rings_backup_plan_set_status_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(create_swift_rings_backup_plan).result %>
>
>      set_deployment_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(deploy).result %>
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.deployment.v1.deploy_plan
>              payload:
>                status: <% $.get('status', 'SUCCESS') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>  get_horizon_url:
>
>    description: >
>      Retrieve the Horizon URL from the Overcloud stack.
>
>    input:
>      - stack: overcloud
>      - queue_name: tripleo
>
>    tags:
>      - tripleo-common-managed
>
>    output:
>      horizon_url: <% $.horizon_url %>
>
>    tasks:
>      get_horizon_url:
>        action: heat.stacks_get
>        input:
>          stack_id: <% $.stack %>
>        publish:
>          horizon_url: <% task().result.outputs.where($.output_key = "EndpointMap").output_value.HorizonPublic.uri.single() %>
>        on-success: notify_zaqar
>        publish-on-error:
>          status: FAILED
>          message: <% task().result %>
>
>      notify_zaqar:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.deployment.v1.get_horizon_url
>              payload:
>                horizon_url: <% $.get('horizon_url', '') %>
>                status: <% $.get('status', 'SUCCESS') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>  config_download_deploy:
>
>    description: >
>      Configure the overcloud with config-download.
>
>    input:
>      - timeout: 240
>      - queue_name: tripleo
>      - plan_name: overcloud
>      - work_dir: /var/lib/mistral
>      - verbosity: 1
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      get_config:
>        action: tripleo.config.get_overcloud_config
>        input:
>          container: <% $.get('plan_name') %>
>        on-success: download_config
>        on-error: send_message
>        publish-on-error:
>          status: FAILED
>          message: <% task().result %>
>
>      download_config:
>        action: tripleo.config.download_config
>        input:
>          work_dir: <% $.get('work_dir') %>/<% execution().id %>
>        on-success: send_msg_config_download
>        on-error: send_message
>        publish-on-error:
>          status: FAILED
>          message: <% task().result %>
>
>      send_msg_config_download:
>        action: zaqar.queue_post
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.deployment.v1.config_download
>              payload:
>                status: <% $.get('status', 'RUNNING') %>
>                message: Config downloaded at <% $.get('work_dir') %>/<% execution().id %>
>                execution: <% execution() %>
>        on-success: get_private_key
>
>      get_private_key:
>        action: tripleo.validations.get_privkey
>        publish:
>          private_key: <% task().result %>
>        on-success: generate_inventory
>        on-error: send_message
>        publish-on-error:
>          status: FAILED
>          message: <% task().result %>
>
>      generate_inventory:
>        action: tripleo.ansible-generate-inventory
>        input:
>          ansible_ssh_user: tripleo-admin
>          work_dir: <% $.get('work_dir') %>/<% execution().id %>
>          plan_name: <% $.get('plan_name') %>
>        publish:
>          inventory: <% task().result %>
>        on-success: send_msg_generate_inventory
>        on-error: send_message
>        publish-on-error:
>          status: FAILED
>          message: <% task().result %>
>
>      send_msg_generate_inventory:
>        action: zaqar.queue_post
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.deployment.v1.config_download
>              payload:
>                status: <% $.get('status', 'RUNNING') %>
>                message: Inventory generated at <% $.get('inventory') %>
>                execution: <% execution() %>
>        on-success: send_msg_run_ansible
>
>      send_msg_run_ansible:
>        action: zaqar.queue_post
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.deployment.v1.config_download
>              payload:
>                status: <% $.get('status', 'RUNNING') %>
>                message: >
>                  Running ansible playbook at <% $.get('work_dir') %>/<% execution().id %>/deploy_steps_playbook.yaml.
>                  See log file at <% $.get('work_dir') %>/<% execution().id %>/ansible.log for progress.
>                  ...
>                execution: <% execution() %>
>        on-success: run_ansible
>
>      run_ansible:
>        action: tripleo.ansible-playbook
>        input:
>          inventory: <% $.inventory %>
>          playbook: <% $.get('work_dir') %>/<% execution().id %>/deploy_steps_playbook.yaml
>          remote_user: tripleo-admin
>          ssh_extra_args: '-o StrictHostKeyChecking=no'
>          ssh_private_key: <% $.private_key %>
>          use_openstack_credentials: true
>          verbosity: <% $.get('verbosity') %>
>          become: true
>          timeout: <% $.timeout %>
>          work_dir: <% $.get('work_dir') %>/<% execution().id %>
>          queue_name: <% $.queue_name %>
>          reproduce_command: true
>          trash_output: true
>        publish:
>          log_path: <% task(run_ansible).result.get('log_path') %>
>        on-success:
>          - ansible_passed: <% task().result.returncode = 0 %>
>          - ansible_failed: <% task().result.returncode != 0 %>
>        on-error: send_message
>        publish-on-error:
>          status: FAILED
>          message: Ansible failed, check log at <% $.get('work_dir') %>/<% execution().id %>/ansible.log.
>
>      ansible_passed:
>        on-success: send_message
>        publish:
>          status: SUCCESS
>          message: Ansible passed.
>
>      ansible_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: Ansible failed, check log at <% $.get('work_dir') %>/<% execution().id %>/ansible.log.
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.deployment.v1.config_download
>              payload:
>                status: <% $.get('status', 'SUCCESS') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>'
>2018-06-26 09:56:41,557 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 13556
>2018-06-26 09:56:41,559 DEBUG: RESP: [201] Content-Length: 13556 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:41 GMT Connection: keep-alive
>RESP BODY: {"definition": "[elided; identical to the tripleo.deployment.v1 workbook definition posted in the request above]", "name": "tripleo.deployment.v1", "tags": [], "created_at": "2018-06-26 04:26:41", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "152f70ca-8eab-4b38-a836-846b63b70e23"}
>
>2018-06-26 09:56:41,559 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201
>2018-06-26 09:56:41,560 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '---
>version: '2.0'
>name: tripleo.baremetal.v1
>description: TripleO Baremetal Workflows
>
>workflows:
>
>  set_node_state:
>    input:
>      - node_uuid
>      - state_action
>      - target_state
>      - error_states:
>        # The default includes all failure states, even unused by TripleO.
>        - 'error'
>        - 'adopt failed'
>        - 'clean failed'
>        - 'deploy failed'
>        - 'inspect failed'
>        - 'rescue failed'
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      set_provision_state:
>        on-success: wait_for_provision_state
>        on-error: set_provision_state_failed
>        action: ironic.node_set_provision_state node_uuid=<% $.node_uuid %> state=<% $.state_action %>
>
>      set_provision_state_failed:
>        publish:
>          message: <% task(set_provision_state).result %>
>        on-complete: fail
>
>      wait_for_provision_state:
>        action: ironic.node_get
>        input:
>          node_id: <% $.node_uuid %>
>          fields: ['provision_state', 'last_error']
>        timeout: 1200 #20 minutes
>        retry:
>          delay: 3
>          count: 400
>          continue-on: <% not task().result.provision_state in [$.target_state] + $.error_states %>
>        on-complete:
>          - state_not_reached: <% task().result.provision_state != $.target_state %>
>
>      state_not_reached:
>        publish:
>          message: >-
>            Node <% $.node_uuid %> did not reach state "<% $.target_state %>",
>            the state is "<% task(wait_for_provision_state).result.provision_state %>",
>            error: <% task(wait_for_provision_state).result.last_error %>
>        on-complete: fail
>
>    output-on-error:
>      result: <% $.message %>
>
>  set_power_state:
>    input:
>      - node_uuid
>      - state_action
>      - target_state
>      - error_state: 'error'
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      set_power_state:
>        on-success: wait_for_power_state
>        on-error: set_power_state_failed
>        action: ironic.node_set_power_state node_id=<% $.node_uuid %> state=<% $.state_action %>
>
>      set_power_state_failed:
>        publish:
>          message: <% task(set_power_state).result %>
>        on-complete: fail
>
>      wait_for_power_state:
>        action: ironic.node_get
>        input:
>          node_id: <% $.node_uuid %>
>          fields: ['power_state', 'last_error']
>        timeout: 120 #2 minutes
>        retry:
>          delay: 6
>          count: 20
>          continue-on: <% not task().result.power_state in [$.target_state, $.error_state] %>
>        on-complete:
>          - state_not_reached: <% task().result.power_state != $.target_state %>
>
>      state_not_reached:
>        publish:
>          message: >-
>            Node <% $.node_uuid %> did not reach power state "<% $.target_state %>",
>            the state is "<% task(wait_for_power_state).result.power_state %>",
>            error: <% task(wait_for_power_state).result.last_error %>
>        on-complete: fail
>
>    output-on-error:
>      result: <% $.message %>
>
>  manual_cleaning:
>    input:
>      - node_uuid
>      - clean_steps
>      - timeout: 7200 # 2 hours (cleaning can take really long)
>      - retry_delay: 10
>      - retry_count: 720
>      - queue_name: tripleo
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      set_provision_state:
>        on-success: wait_for_provision_state
>        on-error: set_provision_state_failed
>        action: ironic.node_set_provision_state node_uuid=<% $.node_uuid %> state='clean' cleansteps=<% $.clean_steps %>
>
>      set_provision_state_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(set_provision_state).result %>
>
>      wait_for_provision_state:
>        on-success: send_message
>        action: ironic.node_get node_id=<% $.node_uuid %>
>        timeout: <% $.timeout %>
>        retry:
>          delay: <% $.retry_delay %>
>          count: <% $.retry_count %>
>          continue-on: <% task().result.provision_state != 'manageable' %>
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.baremetal.v1.manual_cleaning
>              payload:
>                status: <% $.get('status', 'SUCCESS') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>  validate_nodes:
>    description: Validate nodes JSON
>
>    input:
>      - nodes_json
>      - queue_name: tripleo
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      validate_nodes:
>        action: tripleo.baremetal.validate_nodes
>        on-success: send_message
>        on-error: validation_failed
>        input:
>          nodes_json: <% $.nodes_json %>
>
>      validation_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(validate_nodes).result %>
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.baremetal.v1.validate_nodes
>              payload:
>                status: <% $.get('status', 'SUCCESS') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>  register_or_update:
>    description: Take nodes JSON and create nodes in a "manageable" state
>
>    input:
>      - nodes_json
>      - remove: False
>      - queue_name: tripleo
>      - kernel_name: null
>      - ramdisk_name: null
>      - instance_boot_option: local
>      - initial_state: manageable
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      validate_input:
>        workflow: tripleo.baremetal.v1.validate_nodes
>        on-success: register_or_update_nodes
>        on-error: validation_failed
>        input:
>          nodes_json: <% $.nodes_json %>
>          queue_name: <% $.queue_name %>
>
>      validation_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(validate_input).result %>
>          registered_nodes: []
>
>      register_or_update_nodes:
>        action: tripleo.baremetal.register_or_update_nodes
>        on-success:
>          - set_nodes_managed: <% $.initial_state != "enroll" %>
>          - send_message: <% $.initial_state = "enroll" %>
>        on-error: set_status_failed_register_or_update_nodes
>        input:
>          nodes_json: <% $.nodes_json %>
>          remove: <% $.remove %>
>          kernel_name: <% $.kernel_name %>
>          ramdisk_name: <% $.ramdisk_name %>
>          instance_boot_option: <% $.instance_boot_option %>
>        publish:
>          registered_nodes: <% task().result %>
>          new_nodes: <% task().result.where($.provision_state = 'enroll') %>
>
>      set_status_failed_register_or_update_nodes:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(register_or_update_nodes).result %>
>          registered_nodes: []
>
>      set_nodes_managed:
>        on-success:
>          - set_nodes_available: <% $.initial_state = "available" %>
>          - send_message: <% $.initial_state != "available" %>
>        on-error: set_status_failed_nodes_managed
>        workflow: tripleo.baremetal.v1.manage
>        input:
>          node_uuids: <% $.new_nodes.uuid %>
>          queue_name: <% $.queue_name %>
>        publish:
>          status: SUCCESS
>          message: <% $.new_nodes.len() %> node(s) successfully moved to the "manageable" state.
>
>      set_status_failed_nodes_managed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(set_nodes_managed).result %>
>
>      set_nodes_available:
>        on-success: send_message
>        on-error: set_status_failed_nodes_available
>        workflow: tripleo.baremetal.v1.provide node_uuids=<% $.new_nodes.uuid %> queue_name=<% $.queue_name %>
>        publish:
>          status: SUCCESS
>          message: <% $.new_nodes.len() %> node(s) successfully moved to the "available" state.
>
>      set_status_failed_nodes_available:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(set_nodes_available).result %>
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.baremetal.v1.register_or_update
>              payload:
>                status: <% $.get('status', 'SUCCESS') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>                registered_nodes: <% $.registered_nodes or [] %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>  provide:
>    description: Take a list of nodes and move them to "available"
>
>    input:
>      - node_uuids
>      - queue_name: tripleo
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      set_nodes_available:
>        on-success: cell_v2_discover_hosts
>        on-error: set_status_failed_nodes_available
>        with-items: uuid in <% $.node_uuids %>
>        workflow: tripleo.baremetal.v1.set_node_state
>        input:
>          node_uuid: <% $.uuid %>
>          queue_name: <% $.queue_name %>
>          state_action: 'provide'
>          target_state: 'available'
>
>      set_status_failed_nodes_available:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(set_nodes_available).result %>
>
>      cell_v2_discover_hosts:
>        on-success: try_power_off
>        on-error: cell_v2_discover_hosts_failed
>        workflow: tripleo.baremetal.v1.cellv2_discovery
>        input:
>          node_uuids: <% $.node_uuids %>
>          queue_name: <% $.queue_name %>
>        timeout: 900 #15 minutes
>        retry:
>          delay: 30
>          count: 30
>
>      cell_v2_discover_hosts_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(cell_v2_discover_hosts).result %>
>
>      try_power_off:
>        on-success: send_message
>        on-error: power_off_failed
>        with-items: uuid in <% $.node_uuids %>
>        workflow: tripleo.baremetal.v1.set_power_state
>        input:
>          node_uuid: <% $.uuid %>
>          queue_name: <% $.queue_name %>
>          state_action: 'off'
>          target_state: 'power off'
>        publish:
>          status: SUCCESS
>          message: <% $.node_uuids.len() %> node(s) successfully moved to the "available" state.
>
>      power_off_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(try_power_off).result %>
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.baremetal.v1.provide
>              payload:
>                status: <% $.get('status', 'SUCCESS') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>  provide_manageable_nodes:
>    description: Provide all nodes in a 'manageable' state.
>
>    input:
>      - queue_name: tripleo
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      get_manageable_nodes:
>        action: ironic.node_list maintenance=False associated=False
>        on-success: provide_manageable
>        on-error: set_status_failed_get_manageable_nodes
>        publish:
>          managed_nodes: <% task().result.where($.provision_state = 'manageable').uuid %>
>
>      set_status_failed_get_manageable_nodes:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(get_manageable_nodes).result %>
>
>      provide_manageable:
>        on-success: send_message
>        workflow: tripleo.baremetal.v1.provide
>        input:
>          node_uuids: <% $.managed_nodes %>
>          queue_name: <% $.queue_name %>
>        publish:
>          status: SUCCESS
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.baremetal.v1.provide_manageable_nodes
>              payload:
>                status: <% $.get('status', 'SUCCESS') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>  manage:
>    description: Set a list of nodes to 'manageable' state
>
>    input:
>      - node_uuids
>      - queue_name: tripleo
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      set_nodes_manageable:
>        on-success: send_message
>        on-error: set_status_failed_nodes_manageable
>        with-items: uuid in <% $.node_uuids %>
>        workflow: tripleo.baremetal.v1.set_node_state
>        input:
>          node_uuid: <% $.uuid %>
>          state_action: 'manage'
>          target_state: 'manageable'
>          error_states:
>            # node going back to enroll designates power credentials failure
>            - 'enroll'
>            - 'error'
>
>      set_status_failed_nodes_manageable:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(set_nodes_manageable).result %>
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.baremetal.v1.manage
>              payload:
>                status: <% $.get('status', 'SUCCESS') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>  _introspect:
>    description: >
>      An internal workflow. The tripleo.baremetal.v1.introspect workflow
>      should be used for introspection.
>
>    input:
>      - node_uuid
>      - timeout
>      - queue_name
>
>    output:
>      result: <% task(start_introspection).result %>
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>      start_introspection:
>        action: baremetal_introspection.introspect uuid=<% $.node_uuid %>
>        on-success: wait_for_introspection_to_finish
>        on-error: set_status_failed_start_introspection
>
>      set_status_failed_start_introspection:
>        publish:
>          status: FAILED
>          message: <% task(start_introspection).result %>
>          introspected_nodes: []
>        on-success: send_message
>
>      wait_for_introspection_to_finish:
>        action: baremetal_introspection.wait_for_finish
>        input:
>          uuids: <% [$.node_uuid] %>
>          # The interval is 10 seconds, so divide to make the overall timeout
>          # in seconds correct.
>          max_retries: <% $.timeout / 10 %>
>          retry_interval: 10
>        publish:
>          introspected_node: <% task().result.values().first() %>
>          status: <% bool(task().result.values().first().error) and "FAILED" or "SUCCESS" %>
>        publish-on-error:
>          status: FAILED
>          message: <% task().result %>
>        on-success: wait_for_introspection_to_finish_success
>        on-error: wait_for_introspection_to_finish_error
>
>      wait_for_introspection_to_finish_success:
>        publish:
>          message: <% "Introspection of node {0} completed. Status:{1}. Errors:{2}".format($.introspected_node.uuid, $.status, $.introspected_node.error) %>
>        on-success: send_message
>
>      wait_for_introspection_to_finish_error:
>        publish:
>          message: <% "Introspection of node {0} timed out.".format($.node_uuid) %>
>        on-success: send_message
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.baremetal.v1._introspect
>              payload:
>                status: <% $.status %>
>                message: <% $.message %>
>                introspected_node: <% $.get('introspected_node') %>
>                node_uuid: <% $.node_uuid %>
>                execution: <% execution() %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>  introspect:
>    description: >
>      Take a list of nodes and move them through introspection.
>
>      By default each node will attempt introspection up to 3 times (two
>      retries plus the initial attempt) if it fails. This behaviour can be
>      modified by changing the max_retry_attempts input.
>
>      The workflow will assume the node has timed out after 20 minutes (1200
>      seconds). This can be changed by passing the node_timeout input in
>      seconds.
>
>    input:
>      - node_uuids
>      - run_validations: False
>      - queue_name: tripleo
>      - concurrency: 20
>      - max_retry_attempts: 2
>      - node_timeout: 1200
>
>    tags:
>      - tripleo-common-managed
>
>    task-defaults:
>      on-error: unhandled_error
>
>    tasks:
>      initialize:
>        publish:
>          introspection_attempt: 1
>        on-complete:
>          - run_validations: <% $.run_validations %>
>          - introspect_nodes: <% not $.run_validations %>
>
>      run_validations:
>        workflow: tripleo.validations.v1.run_groups
>        input:
>          group_names:
>            - 'pre-introspection'
>          queue_name: <% $.queue_name %>
>        on-success: introspect_nodes
>        on-error: set_validations_failed
>
>      set_validations_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(run_validations).result %>
>
>      introspect_nodes:
>        with-items: uuid in <% $.node_uuids %>
>        concurrency: <% $.concurrency %>
>        workflow: _introspect
>        input:
>          node_uuid: <% $.uuid %>
>          queue_name: <% $.queue_name %>
>          timeout: <% $.node_timeout %>
>        # on-error is triggered if one or more nodes failed introspection. We
>        # still go to get_introspection_status as it will collect the result
>        # for each node. Unless we hit the retry limit.
>        on-error:
>          - get_introspection_status: <% $.introspection_attempt <= $.max_retry_attempts %>
>          - max_retry_attempts_reached: <% $.introspection_attempt > $.max_retry_attempts %>
>        on-success: get_introspection_status
>
>      get_introspection_status:
>        with-items: uuid in <% $.node_uuids %>
>        action: baremetal_introspection.get_status
>        input:
>          uuid: <% $.uuid %>
>        publish:
>          introspected_nodes: <% task().result.toDict($.uuid, $) %>
>          # Currently there is no way for us to ignore user introspection
>          # aborts. This means we will retry aborted nodes until the Ironic API
>          # gives us more details (error code or a boolean to show aborts etc.)
>          # If a node hasn't finished, we consider it to be failed.
>          # TODO(d0ugal): When possible, don't retry introspection of nodes
>          # that a user manually aborted.
>          failed_introspection: <% task().result.where($.finished = true and $.error != null).select($.uuid) + task().result.where($.finished = false).select($.uuid) %>
>        publish-on-error:
>          # If a node fails to start introspection, getting the status can fail.
>          # When that happens, the result is a string and the nodes need to be
>          # filtered out.
>          introspected_nodes: <% task().result.where(isDict($)).toDict($.uuid, $) %>
>          # If there was an error, the exception string we get doesn't give us
>          # the UUID. So we use a set difference to find the UUIDs missing in
>          # the results. These are then added to the failed nodes.
>          failed_introspection: <% ($.node_uuids.toSet() - task().result.where(isDict($)).select($.uuid).toSet()) + task().result.where(isDict($)).where($.finished = true and $.error != null).toSet() + task().result.where(isDict($)).where($.finished = false).toSet() %>
>        on-error: increase_attempt_counter
>        on-success:
>          - successful_introspection: <% $.failed_introspection.len() = 0 %>
>          - increase_attempt_counter: <% $.failed_introspection.len() > 0 %>
>
>      increase_attempt_counter:
>        publish:
>          introspection_attempt: <% $.introspection_attempt + 1 %>
>        on-complete:
>          retry_failed_nodes
>
>      retry_failed_nodes:
>        publish:
>          status: RUNNING
>          message: <% 'Retrying {0} nodes that failed introspection. Attempt {1} of {2} '.format($.failed_introspection.len(), $.introspection_attempt, $.max_retry_attempts + 1) %>
>          # We are about to retry, update the tracking stats.
>          node_uuids: <% $.failed_introspection %>
>        on-success:
>          - send_message
>          - introspect_nodes
>
>      max_retry_attempts_reached:
>        publish:
>          status: FAILED
>          message: <% 'Retry limit reached with {0} nodes still failing introspection'.format($.failed_introspection.len()) %>
>        on-complete: send_message
>
>      successful_introspection:
>        publish:
>          status: SUCCESS
>          message: Successfully introspected <% $.introspected_nodes.len() %> node(s).
>        on-complete: send_message
>
>      unhandled_error:
>        publish:
>          status: FAILED
>          message: "Unhandled workflow error"
>        on-complete: send_message
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.baremetal.v1.introspect
>              payload:
>                status: <% $.get('status', 'SUCCESS') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>                introspected_nodes: <% $.get('introspected_nodes', []) %>
>                failed_introspection: <% $.get('failed_introspection', []) %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>  introspect_manageable_nodes:
>    description: Introspect all nodes in a 'manageable' state.
>
>    input:
>      - run_validations: False
>      - queue_name: tripleo
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      get_manageable_nodes:
>        action: ironic.node_list maintenance=False associated=False
>        on-success: validate_nodes
>        on-error: set_status_failed_get_manageable_nodes
>        publish:
>          managed_nodes: <% task().result.where($.provision_state = 'manageable').uuid %>
>
>      set_status_failed_get_manageable_nodes:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(get_manageable_nodes).result %>
>
>      validate_nodes:
>        on-success:
>          - introspect_manageable: <% $.managed_nodes.len() > 0 %>
>          - set_status_failed_no_nodes: <% $.managed_nodes.len() = 0 %>
>
>      set_status_failed_no_nodes:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: No manageable nodes to introspect. Check node states and maintenance.
> > introspect_manageable: > on-success: send_message > on-error: set_status_introspect_manageable > workflow: tripleo.baremetal.v1.introspect > input: > node_uuids: <% $.managed_nodes %> > run_validations: <% $.run_validations %> > queue_name: <% $.queue_name %> > publish: > introspected_nodes: <% task().result.introspected_nodes %> > > set_status_introspect_manageable: > on-success: send_message > publish: > status: FAILED > message: <% task(introspect_manageable).result %> > introspected_nodes: [] > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.baremetal.v1.introspect_manageable_nodes > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > introspected_nodes: <% $.get('introspected_nodes', []) %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > configure: > description: Take a list of manageable nodes and update their boot configuration. 
> > input: > - node_uuids > - queue_name: tripleo > - kernel_name: bm-deploy-kernel > - ramdisk_name: bm-deploy-ramdisk > - instance_boot_option: null > - root_device: null > - root_device_minimum_size: 4 > - overwrite_root_device_hints: False > > tags: > - tripleo-common-managed > > tasks: > > configure_boot: > on-success: configure_root_device > on-error: set_status_failed_configure_boot > with-items: node_uuid in <% $.node_uuids %> > action: tripleo.baremetal.configure_boot node_uuid=<% $.node_uuid %> kernel_name=<% $.kernel_name %> ramdisk_name=<% $.ramdisk_name %> instance_boot_option=<% $.instance_boot_option %> > > configure_root_device: > on-success: send_message > on-error: set_status_failed_configure_root_device > with-items: node_uuid in <% $.node_uuids %> > action: tripleo.baremetal.configure_root_device node_uuid=<% $.node_uuid %> root_device=<% $.root_device %> minimum_size=<% $.root_device_minimum_size %> overwrite=<% $.overwrite_root_device_hints %> > publish: > status: SUCCESS > message: 'Successfully configured the nodes.' > > set_status_failed_configure_boot: > on-success: send_message > publish: > status: FAILED > message: <% task(configure_boot).result %> > > set_status_failed_configure_root_device: > on-success: send_message > publish: > status: FAILED > message: <% task(configure_root_device).result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.baremetal.v1.configure > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > configure_manageable_nodes: > description: Update the boot configuration of all nodes in 'manageable' state. 
> > input: > - queue_name: tripleo > - kernel_name: 'bm-deploy-kernel' > - ramdisk_name: 'bm-deploy-ramdisk' > - instance_boot_option: null > - root_device: null > - root_device_minimum_size: 4 > - overwrite_root_device_hints: False > > tags: > - tripleo-common-managed > > tasks: > > get_manageable_nodes: > action: ironic.node_list maintenance=False associated=False > on-success: configure_manageable > on-error: set_status_failed_get_manageable_nodes > publish: > managed_nodes: <% task().result.where($.provision_state = 'manageable').uuid %> > > configure_manageable: > on-success: send_message > on-error: set_status_failed_configure_manageable > workflow: tripleo.baremetal.v1.configure > input: > node_uuids: <% $.managed_nodes %> > queue_name: <% $.queue_name %> > kernel_name: <% $.kernel_name %> > ramdisk_name: <% $.ramdisk_name %> > instance_boot_option: <% $.instance_boot_option %> > root_device: <% $.root_device %> > root_device_minimum_size: <% $.root_device_minimum_size %> > overwrite_root_device_hints: <% $.overwrite_root_device_hints %> > publish: > message: 'Manageable nodes configured successfully.' 
> > set_status_failed_configure_manageable: > on-success: send_message > publish: > status: FAILED > message: <% task(configure_manageable).result %> > > set_status_failed_get_manageable_nodes: > on-success: send_message > publish: > status: FAILED > message: <% task(get_manageable_nodes).result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.baremetal.v1.configure_manageable_nodes > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > tag_node: > description: Tag a node with a role > input: > - node_uuid > - role: null > - queue_name: tripleo > > task-defaults: > on-error: send_message > > tags: > - tripleo-common-managed > > tasks: > > update_node: > on-success: send_message > action: tripleo.baremetal.update_node_capability node_uuid=<% $.node_uuid %> capability='profile' value=<% $.role %> > publish: > message: <% task().result %> > status: SUCCESS > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.baremetal.v1.tag_node > payload: > status: <% $.get('status', 'FAILED') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > tag_nodes: > description: Runs the tag_node workflow in a loop > input: > - tag_node_uuids > - untag_node_uuids > - role > - plan: overcloud > - queue_name: tripleo > > task-defaults: > on-error: send_message > > tags: > - tripleo-common-managed > > tasks: > > tag_nodes: > with-items: node_uuid in <% $.tag_node_uuids %> > workflow: tripleo.baremetal.v1.tag_node > input: > node_uuid: <% $.node_uuid %> > queue_name: <% $.queue_name %> > role: <% $.role %> > concurrency: 1 > on-success: untag_nodes > > untag_nodes: > with-items: node_uuid in <% 
$.untag_node_uuids %> > workflow: tripleo.baremetal.v1.tag_node > input: > node_uuid: <% $.node_uuid %> > queue_name: <% $.queue_name %> > concurrency: 1 > on-success: update_role_parameters > > update_role_parameters: > on-success: send_message > action: tripleo.parameters.update_role role=<% $.role %> container=<% $.plan %> > publish: > message: <% task().result %> > status: SUCCESS > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.baremetal.v1.tag_nodes > payload: > status: <% $.get('status', 'FAILED') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > nodes_with_profile: > description: Find nodes with a specific profile > input: > - profile > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > get_active_nodes: > action: ironic.node_list maintenance=false provision_state='active' detail=true > on-success: get_available_nodes > on-error: set_status_failed_get_active_nodes > > get_available_nodes: > action: ironic.node_list maintenance=false provision_state='available' detail=true > on-success: get_matching_nodes > on-error: set_status_failed_get_available_nodes > > get_matching_nodes: > with-items: node in <% task(get_available_nodes).result + task(get_active_nodes).result %> > action: tripleo.baremetal.get_profile node=<% $.node %> > on-success: send_message > on-error: set_status_failed_get_matching_nodes > publish: > matching_nodes: <% let(input_profile_name => $.profile) -> task().result.where($.profile = $input_profile_name).uuid %> > > set_status_failed_get_active_nodes: > on-success: send_message > publish: > status: FAILED > message: <% task(get_active_nodes).result %> > > set_status_failed_get_available_nodes: > on-success: send_message > publish: > status: FAILED > message: <% task(get_available_nodes).result %> > > set_status_failed_get_matching_nodes: > 
on-success: send_message > publish: > status: FAILED > message: <% task(get_matching_nodes).result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.baremetal.v1.nodes_with_profile > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > matching_nodes: <% $.matching_nodes or [] %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > create_raid_configuration: > description: Create and apply RAID configuration for given nodes > input: > - node_uuids > - configuration > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > set_configuration: > with-items: node_uuid in <% $.node_uuids %> > action: ironic.node_set_target_raid_config node_ident=<% $.node_uuid %> target_raid_config=<% $.configuration %> > on-success: apply_configuration > on-error: set_configuration_failed > > set_configuration_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(set_configuration).result %> > > apply_configuration: > with-items: node_uuid in <% $.node_uuids %> > workflow: tripleo.baremetal.v1.manual_cleaning > input: > node_uuid: <% $.node_uuid %> > clean_steps: > - interface: raid > step: delete_configuration > - interface: raid > step: create_configuration > timeout: 1800 # building RAID should be faster than general cleaning > retry_count: 180 > retry_delay: 10 > on-success: send_message > on-error: apply_configuration_failed > publish: > message: <% task().result %> > status: SUCCESS > > apply_configuration_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(apply_configuration).result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.baremetal.v1.create_raid_configuration > payload: > status: <% $.get('status', 'FAILED') %> > 
message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > > cellv2_discovery: > description: Run cell_v2 host discovery > > input: > - node_uuids > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > cell_v2_discover_hosts: > on-success: wait_for_nova_resources > on-error: cell_v2_discover_hosts_failed > action: tripleo.baremetal.cell_v2_discover_hosts > > cell_v2_discover_hosts_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(cell_v2_discover_hosts).result %> > > wait_for_nova_resources: > on-success: send_message > on-error: wait_for_nova_resources_failed > with-items: node_uuid in <% $.node_uuids %> > action: nova.hypervisors_find hypervisor_hostname=<% $.node_uuid %> > > wait_for_nova_resources_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(wait_for_nova_resources).result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.baremetal.v1.cellv2_discovery > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > > discover_nodes: > description: Run nodes discovery over the given IP range > > input: > - ip_addresses > - credentials > - ports: [623] > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > get_all_nodes: > action: ironic.node_list > input: > fields: ["uuid", "driver", "driver_info"] > limit: 0 > on-success: get_candidate_nodes > on-error: get_all_nodes_failed > publish: > existing_nodes: <% task().result %> > > get_all_nodes_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(get_all_nodes).result %> > > get_candidate_nodes: > action: tripleo.baremetal.get_candidate_nodes > input: > ip_addresses: <% $.ip_addresses %> > 
credentials: <% $.credentials %> > ports: <% $.ports %> > existing_nodes: <% $.existing_nodes %> > on-success: probe_nodes > on-error: get_candidate_nodes_failed > publish: > candidates: <% task().result %> > > get_candidate_nodes_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(get_candidate_nodes).result %> > > probe_nodes: > action: tripleo.baremetal.probe_node > on-success: send_message > on-error: probe_nodes_failed > input: > ip: <% $.node.ip %> > port: <% $.node.port %> > username: <% $.node.username %> > password: <% $.node.password %> > with-items: > - node in <% $.candidates %> > publish: > nodes_json: <% task().result.where($ != null) %> > > probe_nodes_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(probe_nodes).result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.baremetal.v1.discover_nodes > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > nodes_json: <% $.get('nodes_json', []) %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > discover_and_enroll_nodes: > description: Run nodes discovery over the given IP range and enroll nodes > > input: > - ip_addresses > - credentials > - ports: [623] > - kernel_name: null > - ramdisk_name: null > - instance_boot_option: local > - initial_state: manageable > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > discover_nodes: > workflow: tripleo.baremetal.v1.discover_nodes > input: > ip_addresses: <% $.ip_addresses %> > ports: <% $.ports %> > credentials: <% $.credentials %> > queue_name: <% $.queue_name %> > on-success: enroll_nodes > on-error: discover_nodes_failed > publish: > nodes_json: <% task().result.nodes_json %> > > discover_nodes_failed: > on-success: send_message > publish: > status: FAILED > message: <% 
task(discover_nodes).result %> > > enroll_nodes: > workflow: tripleo.baremetal.v1.register_or_update > input: > nodes_json: <% $.nodes_json %> > kernel_name: <% $.kernel_name %> > ramdisk_name: <% $.ramdisk_name %> > instance_boot_option: <% $.instance_boot_option %> > initial_state: <% $.initial_state %> > on-success: send_message > on-error: enroll_nodes_failed > publish: > registered_nodes: <% task().result.registered_nodes %> > > enroll_nodes_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(enroll_nodes).result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.baremetal.v1.discover_and_enroll_nodes > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > registered_nodes: <% $.get('registered_nodes', []) %> > on-success: > - fail: <% $.get('status') = "FAILED" %> >' >2018-06-26 09:56:44,137 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 43222 >2018-06-26 09:56:44,177 DEBUG: RESP: [201] Content-Length: 43222 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:44 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.baremetal.v1\ndescription: TripleO Baremetal Workflows\n\nworkflows:\n\n set_node_state:\n input:\n - node_uuid\n - state_action\n - target_state\n - error_states:\n # The default includes all failure states, even unused by TripleO.\n - 'error'\n - 'adopt failed'\n - 'clean failed'\n - 'deploy failed'\n - 'inspect failed'\n - 'rescue failed'\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_provision_state:\n on-success: wait_for_provision_state\n on-error: set_provision_state_failed\n action: ironic.node_set_provision_state node_uuid=<% $.node_uuid %> state=<% $.state_action %>\n\n set_provision_state_failed:\n publish:\n message: <% task(set_provision_state).result %>\n on-complete: 
fail\n\n wait_for_provision_state:\n action: ironic.node_get\n input:\n node_id: <% $.node_uuid %>\n fields: ['provision_state', 'last_error']\n timeout: 1200 #20 minutes\n retry:\n delay: 3\n count: 400\n continue-on: <% not task().result.provision_state in [$.target_state] + $.error_states %>\n on-complete:\n - state_not_reached: <% task().result.provision_state != $.target_state %>\n\n state_not_reached:\n publish:\n message: >-\n Node <% $.node_uuid %> did not reach state \"<% $.target_state %>\",\n the state is \"<% task(wait_for_provision_state).result.provision_state %>\",\n error: <% task(wait_for_provision_state).result.last_error %>\n on-complete: fail\n\n output-on-error:\n result: <% $.message %>\n\n set_power_state:\n input:\n - node_uuid\n - state_action\n - target_state\n - error_state: 'error'\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_power_state:\n on-success: wait_for_power_state\n on-error: set_power_state_failed\n action: ironic.node_set_power_state node_id=<% $.node_uuid %> state=<% $.state_action %>\n\n set_power_state_failed:\n publish:\n message: <% task(set_power_state).result %>\n on-complete: fail\n\n wait_for_power_state:\n action: ironic.node_get\n input:\n node_id: <% $.node_uuid %>\n fields: ['power_state', 'last_error']\n timeout: 120 #2 minutes\n retry:\n delay: 6\n count: 20\n continue-on: <% not task().result.power_state in [$.target_state, $.error_state] %>\n on-complete:\n - state_not_reached: <% task().result.power_state != $.target_state %>\n\n state_not_reached:\n publish:\n message: >-\n Node <% $.node_uuid %> did not reach power state \"<% $.target_state %>\",\n the state is \"<% task(wait_for_power_state).result.power_state %>\",\n error: <% task(wait_for_power_state).result.last_error %>\n on-complete: fail\n\n output-on-error:\n result: <% $.message %>\n\n manual_cleaning:\n input:\n - node_uuid\n - clean_steps\n - timeout: 7200 # 2 hours (cleaning can take really long)\n - retry_delay: 10\n - retry_count: 
720\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_provision_state:\n on-success: wait_for_provision_state\n on-error: set_provision_state_failed\n action: ironic.node_set_provision_state node_uuid=<% $.node_uuid %> state='clean' cleansteps=<% $.clean_steps %>\n\n set_provision_state_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_provision_state).result %>\n\n wait_for_provision_state:\n on-success: send_message\n action: ironic.node_get node_id=<% $.node_uuid %>\n timeout: <% $.timeout %>\n retry:\n delay: <% $.retry_delay %>\n count: <% $.retry_count %>\n continue-on: <% task().result.provision_state != 'manageable' %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.manual_cleaning\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n validate_nodes:\n description: Validate nodes JSON\n\n input:\n - nodes_json\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n validate_nodes:\n action: tripleo.baremetal.validate_nodes\n on-success: send_message\n on-error: validation_failed\n input:\n nodes_json: <% $.nodes_json %>\n\n validation_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(validate_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.validate_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n register_or_update:\n description: Take nodes JSON and create nodes in a \"manageable\" state\n\n input:\n - nodes_json\n - remove: False\n 
- queue_name: tripleo\n - kernel_name: null\n - ramdisk_name: null\n - instance_boot_option: local\n - initial_state: manageable\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n validate_input:\n workflow: tripleo.baremetal.v1.validate_nodes\n on-success: register_or_update_nodes\n on-error: validation_failed\n input:\n nodes_json: <% $.nodes_json %>\n queue_name: <% $.queue_name %>\n\n validation_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(validate_input).result %>\n registered_nodes: []\n\n register_or_update_nodes:\n action: tripleo.baremetal.register_or_update_nodes\n on-success:\n - set_nodes_managed: <% $.initial_state != \"enroll\" %>\n - send_message: <% $.initial_state = \"enroll\" %>\n on-error: set_status_failed_register_or_update_nodes\n input:\n nodes_json: <% $.nodes_json %>\n remove: <% $.remove %>\n kernel_name: <% $.kernel_name %>\n ramdisk_name: <% $.ramdisk_name %>\n instance_boot_option: <% $.instance_boot_option %>\n publish:\n registered_nodes: <% task().result %>\n new_nodes: <% task().result.where($.provision_state = 'enroll') %>\n\n set_status_failed_register_or_update_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(register_or_update_nodes).result %>\n registered_nodes: []\n\n set_nodes_managed:\n on-success:\n - set_nodes_available: <% $.initial_state = \"available\" %>\n - send_message: <% $.initial_state != \"available\" %>\n on-error: set_status_failed_nodes_managed\n workflow: tripleo.baremetal.v1.manage\n input:\n node_uuids: <% $.new_nodes.uuid %>\n queue_name: <% $.queue_name %>\n publish:\n status: SUCCESS\n message: <% $.new_nodes.len() %> node(s) successfully moved to the \"manageable\" state.\n\n set_status_failed_nodes_managed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_nodes_managed).result %>\n\n set_nodes_available:\n on-success: send_message\n on-error: set_status_failed_nodes_available\n workflow: 
tripleo.baremetal.v1.provide node_uuids=<% $.new_nodes.uuid %> queue_name=<% $.queue_name %>\n publish:\n status: SUCCESS\n message: <% $.new_nodes.len() %> node(s) successfully moved to the \"available\" state.\n\n set_status_failed_nodes_available:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_nodes_available).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.register_or_update\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n registered_nodes: <% $.registered_nodes or [] %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n provide:\n description: Take a list of nodes and move them to \"available\"\n\n input:\n - node_uuids\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_nodes_available:\n on-success: cell_v2_discover_hosts\n on-error: set_status_failed_nodes_available\n with-items: uuid in <% $.node_uuids %>\n workflow: tripleo.baremetal.v1.set_node_state\n input:\n node_uuid: <% $.uuid %>\n queue_name: <% $.queue_name %>\n state_action: 'provide'\n target_state: 'available'\n\n set_status_failed_nodes_available:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_nodes_available).result %>\n\n cell_v2_discover_hosts:\n on-success: try_power_off\n on-error: cell_v2_discover_hosts_failed\n workflow: tripleo.baremetal.v1.cellv2_discovery\n input:\n node_uuids: <% $.node_uuids %>\n queue_name: <% $.queue_name %>\n timeout: 900 #15 minutes\n retry:\n delay: 30\n count: 30\n\n cell_v2_discover_hosts_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(cell_v2_discover_hosts).result %>\n\n try_power_off:\n on-success: send_message\n on-error: power_off_failed\n with-items: uuid in <% $.node_uuids %>\n workflow: 
tripleo.baremetal.v1.set_power_state\n input:\n node_uuid: <% $.uuid %>\n queue_name: <% $.queue_name %>\n state_action: 'off'\n target_state: 'power off'\n publish:\n status: SUCCESS\n message: <% $.node_uuids.len() %> node(s) successfully moved to the \"available\" state.\n\n power_off_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(try_power_off).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.provide\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n provide_manageable_nodes:\n description: Provide all nodes in a 'manageable' state.\n\n input:\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n get_manageable_nodes:\n action: ironic.node_list maintenance=False associated=False\n on-success: provide_manageable\n on-error: set_status_failed_get_manageable_nodes\n publish:\n managed_nodes: <% task().result.where($.provision_state = 'manageable').uuid %>\n\n set_status_failed_get_manageable_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_manageable_nodes).result %>\n\n provide_manageable:\n on-success: send_message\n workflow: tripleo.baremetal.v1.provide\n input:\n node_uuids: <% $.managed_nodes %>\n queue_name: <% $.queue_name %>\n publish:\n status: SUCCESS\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.provide_manageable_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n manage:\n description: Set a list of nodes to 'manageable' state\n\n input:\n - node_uuids\n - 
queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_nodes_manageable:\n on-success: send_message\n on-error: set_status_failed_nodes_manageable\n with-items: uuid in <% $.node_uuids %>\n workflow: tripleo.baremetal.v1.set_node_state\n input:\n node_uuid: <% $.uuid %>\n state_action: 'manage'\n target_state: 'manageable'\n error_states:\n # node going back to enroll designates power credentials failure\n - 'enroll'\n - 'error'\n\n set_status_failed_nodes_manageable:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_nodes_manageable).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.manage\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n _introspect:\n description: >\n An internal workflow. 
The tripleo.baremetal.v1.introspect workflow\n should be used for introspection.\n\n input:\n - node_uuid\n - timeout\n - queue_name\n\n output:\n result: <% task(start_introspection).result %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n start_introspection:\n action: baremetal_introspection.introspect uuid=<% $.node_uuid %>\n on-success: wait_for_introspection_to_finish\n on-error: set_status_failed_start_introspection\n\n set_status_failed_start_introspection:\n publish:\n status: FAILED\n message: <% task(start_introspection).result %>\n introspected_nodes: []\n on-success: send_message\n\n wait_for_introspection_to_finish:\n action: baremetal_introspection.wait_for_finish\n input:\n uuids: <% [$.node_uuid] %>\n # The interval is 10 seconds, so divide to make the overall timeout\n # in seconds correct.\n max_retries: <% $.timeout / 10 %>\n retry_interval: 10\n publish:\n introspected_node: <% task().result.values().first() %>\n status: <% bool(task().result.values().first().error) and \"FAILED\" or \"SUCCESS\" %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-success: wait_for_introspection_to_finish_success\n on-error: wait_for_introspection_to_finish_error\n\n wait_for_introspection_to_finish_success:\n publish:\n message: <% \"Introspection of node {0} completed. Status:{1}. 
Errors:{2}\".format($.introspected_node.uuid, $.status, $.introspected_node.error) %>\n on-success: send_message\n\n wait_for_introspection_to_finish_error:\n publish:\n message: <% \"Introspection of node {0} timed out.\".format($.node_uuid) %>\n on-success: send_message\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1._introspect\n payload:\n status: <% $.status %>\n message: <% $.message %>\n introspected_node: <% $.get('introspected_node') %>\n node_uuid: <% $.node_uuid %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n introspect:\n description: >\n Take a list of nodes and move them through introspection.\n\n By default each node will attempt introspection up to 3 times (two\n retries plus the initial attempt) if it fails. This behaviour can be\n modified by changing the max_retry_attempts input.\n\n The workflow will assume the node has timed out after 20 minutes (1200\n seconds). 
This can be changed by passing the node_timeout input in\n seconds.\n\n input:\n - node_uuids\n - run_validations: False\n - queue_name: tripleo\n - concurrency: 20\n - max_retry_attempts: 2\n - node_timeout: 1200\n\n tags:\n - tripleo-common-managed\n\n task-defaults:\n on-error: unhandled_error\n\n tasks:\n initialize:\n publish:\n introspection_attempt: 1\n on-complete:\n - run_validations: <% $.run_validations %>\n - introspect_nodes: <% not $.run_validations %>\n\n run_validations:\n workflow: tripleo.validations.v1.run_groups\n input:\n group_names:\n - 'pre-introspection'\n queue_name: <% $.queue_name %>\n on-success: introspect_nodes\n on-error: set_validations_failed\n\n set_validations_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(run_validations).result %>\n\n introspect_nodes:\n with-items: uuid in <% $.node_uuids %>\n concurrency: <% $.concurrency %>\n workflow: _introspect\n input:\n node_uuid: <% $.uuid %>\n queue_name: <% $.queue_name %>\n timeout: <% $.node_timeout %>\n # on-error is triggered if one or more nodes failed introspection. We\n # still go to get_introspection_status as it will collect the result\n # for each node. Unless we hit the retry limit.\n on-error:\n - get_introspection_status: <% $.introspection_attempt <= $.max_retry_attempts %>\n - max_retry_attempts_reached: <% $.introspection_attempt > $.max_retry_attempts %>\n on-success: get_introspection_status\n\n get_introspection_status:\n with-items: uuid in <% $.node_uuids %>\n action: baremetal_introspection.get_status\n input:\n uuid: <% $.uuid %>\n publish:\n introspected_nodes: <% task().result.toDict($.uuid, $) %>\n # Currently there is no way for us to ignore user introspection\n # aborts. 
This means we will retry aborted nodes until the Ironic API\n # gives us more details (error code or a boolean to show aborts etc.)\n # If a node hasn't finished, we consider it to be failed.\n # TODO(d0ugal): When possible, don't retry introspection of nodes\n # that a user manually aborted.\n failed_introspection: <% task().result.where($.finished = true and $.error != null).select($.uuid) + task().result.where($.finished = false).select($.uuid) %>\n publish-on-error:\n # If a node fails to start introspection, getting the status can fail.\n # When that happens, the result is a string and the nodes need to be\n # filtered out.\n introspected_nodes: <% task().result.where(isDict($)).toDict($.uuid, $) %>\n # If there was an error, the exception string we get doesn't give us\n # the UUID. So we use a set difference to find the UUIDs missing in\n # the results. These are then added to the failed nodes.\n failed_introspection: <% ($.node_uuids.toSet() - task().result.where(isDict($)).select($.uuid).toSet()) + task().result.where(isDict($)).where($.finished = true and $.error != null).toSet() + task().result.where(isDict($)).where($.finished = false).toSet() %>\n on-error: increase_attempt_counter\n on-success:\n - successful_introspection: <% $.failed_introspection.len() = 0 %>\n - increase_attempt_counter: <% $.failed_introspection.len() > 0 %>\n\n increase_attempt_counter:\n publish:\n introspection_attempt: <% $.introspection_attempt + 1 %>\n on-complete:\n retry_failed_nodes\n\n retry_failed_nodes:\n publish:\n status: RUNNING\n message: <% 'Retrying {0} nodes that failed introspection. 
Attempt {1} of {2} '.format($.failed_introspection.len(), $.introspection_attempt, $.max_retry_attempts + 1) %>\n # We are about to retry, update the tracking stats.\n node_uuids: <% $.failed_introspection %>\n on-success:\n - send_message\n - introspect_nodes\n\n max_retry_attempts_reached:\n publish:\n status: FAILED\n message: <% 'Retry limit reached with {0} nodes still failing introspection'.format($.failed_introspection.len()) %>\n on-complete: send_message\n\n successful_introspection:\n publish:\n status: SUCCESS\n message: Successfully introspected <% $.introspected_nodes.len() %> node(s).\n on-complete: send_message\n\n unhandled_error:\n publish:\n status: FAILED\n message: \"Unhandled workflow error\"\n on-complete: send_message\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.introspect\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n introspected_nodes: <% $.get('introspected_nodes', []) %>\n failed_introspection: <% $.get('failed_introspection', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n introspect_manageable_nodes:\n description: Introspect all nodes in a 'manageable' state.\n\n input:\n - run_validations: False\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n get_manageable_nodes:\n action: ironic.node_list maintenance=False associated=False\n on-success: validate_nodes\n on-error: set_status_failed_get_manageable_nodes\n publish:\n managed_nodes: <% task().result.where($.provision_state = 'manageable').uuid %>\n\n set_status_failed_get_manageable_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_manageable_nodes).result %>\n\n validate_nodes:\n on-success:\n - introspect_manageable: <% $.managed_nodes.len() > 0 %>\n - set_status_failed_no_nodes: <% $.managed_nodes.len() = 0 
%>\n\n set_status_failed_no_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: No manageable nodes to introspect. Check node states and maintenance.\n\n introspect_manageable:\n on-success: send_message\n on-error: set_status_introspect_manageable\n workflow: tripleo.baremetal.v1.introspect\n input:\n node_uuids: <% $.managed_nodes %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n introspected_nodes: <% task().result.introspected_nodes %>\n\n set_status_introspect_manageable:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(introspect_manageable).result %>\n introspected_nodes: []\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.introspect_manageable_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n introspected_nodes: <% $.get('introspected_nodes', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n configure:\n description: Take a list of manageable nodes and update their boot configuration.\n\n input:\n - node_uuids\n - queue_name: tripleo\n - kernel_name: bm-deploy-kernel\n - ramdisk_name: bm-deploy-ramdisk\n - instance_boot_option: null\n - root_device: null\n - root_device_minimum_size: 4\n - overwrite_root_device_hints: False\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n configure_boot:\n on-success: configure_root_device\n on-error: set_status_failed_configure_boot\n with-items: node_uuid in <% $.node_uuids %>\n action: tripleo.baremetal.configure_boot node_uuid=<% $.node_uuid %> kernel_name=<% $.kernel_name %> ramdisk_name=<% $.ramdisk_name %> instance_boot_option=<% $.instance_boot_option %>\n\n configure_root_device:\n on-success: send_message\n on-error: set_status_failed_configure_root_device\n with-items: node_uuid in <% $.node_uuids %>\n action: 
tripleo.baremetal.configure_root_device node_uuid=<% $.node_uuid %> root_device=<% $.root_device %> minimum_size=<% $.root_device_minimum_size %> overwrite=<% $.overwrite_root_device_hints %>\n publish:\n status: SUCCESS\n message: 'Successfully configured the nodes.'\n\n set_status_failed_configure_boot:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(configure_boot).result %>\n\n set_status_failed_configure_root_device:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(configure_root_device).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.configure\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n configure_manageable_nodes:\n description: Update the boot configuration of all nodes in 'manageable' state.\n\n input:\n - queue_name: tripleo\n - kernel_name: 'bm-deploy-kernel'\n - ramdisk_name: 'bm-deploy-ramdisk'\n - instance_boot_option: null\n - root_device: null\n - root_device_minimum_size: 4\n - overwrite_root_device_hints: False\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n get_manageable_nodes:\n action: ironic.node_list maintenance=False associated=False\n on-success: configure_manageable\n on-error: set_status_failed_get_manageable_nodes\n publish:\n managed_nodes: <% task().result.where($.provision_state = 'manageable').uuid %>\n\n configure_manageable:\n on-success: send_message\n on-error: set_status_failed_configure_manageable\n workflow: tripleo.baremetal.v1.configure\n input:\n node_uuids: <% $.managed_nodes %>\n queue_name: <% $.queue_name %>\n kernel_name: <% $.kernel_name %>\n ramdisk_name: <% $.ramdisk_name %>\n instance_boot_option: <% $.instance_boot_option %>\n root_device: <% $.root_device %>\n root_device_minimum_size: <% 
$.root_device_minimum_size %>\n overwrite_root_device_hints: <% $.overwrite_root_device_hints %>\n publish:\n message: 'Manageable nodes configured successfully.'\n\n set_status_failed_configure_manageable:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(configure_manageable).result %>\n\n set_status_failed_get_manageable_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_manageable_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.configure_manageable_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n tag_node:\n description: Tag a node with a role\n input:\n - node_uuid\n - role: null\n - queue_name: tripleo\n\n task-defaults:\n on-error: send_message\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n update_node:\n on-success: send_message\n action: tripleo.baremetal.update_node_capability node_uuid=<% $.node_uuid %> capability='profile' value=<% $.role %>\n publish:\n message: <% task().result %>\n status: SUCCESS\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.tag_node\n payload:\n status: <% $.get('status', 'FAILED') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n tag_nodes:\n description: Runs the tag_node workflow in a loop\n input:\n - tag_node_uuids\n - untag_node_uuids\n - role\n - plan: overcloud\n - queue_name: tripleo\n\n task-defaults:\n on-error: send_message\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n tag_nodes:\n with-items: node_uuid in <% $.tag_node_uuids %>\n workflow: tripleo.baremetal.v1.tag_node\n input:\n 
node_uuid: <% $.node_uuid %>\n queue_name: <% $.queue_name %>\n role: <% $.role %>\n concurrency: 1\n on-success: untag_nodes\n\n untag_nodes:\n with-items: node_uuid in <% $.untag_node_uuids %>\n workflow: tripleo.baremetal.v1.tag_node\n input:\n node_uuid: <% $.node_uuid %>\n queue_name: <% $.queue_name %>\n concurrency: 1\n on-success: update_role_parameters\n\n update_role_parameters:\n on-success: send_message\n action: tripleo.parameters.update_role role=<% $.role %> container=<% $.plan %>\n publish:\n message: <% task().result %>\n status: SUCCESS\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.tag_nodes\n payload:\n status: <% $.get('status', 'FAILED') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n nodes_with_profile:\n description: Find nodes with a specific profile\n input:\n - profile\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_active_nodes:\n action: ironic.node_list maintenance=false provision_state='active' detail=true\n on-success: get_available_nodes\n on-error: set_status_failed_get_active_nodes\n\n get_available_nodes:\n action: ironic.node_list maintenance=false provision_state='available' detail=true\n on-success: get_matching_nodes\n on-error: set_status_failed_get_available_nodes\n\n get_matching_nodes:\n with-items: node in <% task(get_available_nodes).result + task(get_active_nodes).result %>\n action: tripleo.baremetal.get_profile node=<% $.node %>\n on-success: send_message\n on-error: set_status_failed_get_matching_nodes\n publish:\n matching_nodes: <% let(input_profile_name => $.profile) -> task().result.where($.profile = $input_profile_name).uuid %>\n\n set_status_failed_get_active_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_active_nodes).result %>\n\n 
set_status_failed_get_available_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_available_nodes).result %>\n\n set_status_failed_get_matching_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_matching_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.nodes_with_profile\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n matching_nodes: <% $.matching_nodes or [] %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n create_raid_configuration:\n description: Create and apply RAID configuration for given nodes\n input:\n - node_uuids\n - configuration\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_configuration:\n with-items: node_uuid in <% $.node_uuids %>\n action: ironic.node_set_target_raid_config node_ident=<% $.node_uuid %> target_raid_config=<% $.configuration %>\n on-success: apply_configuration\n on-error: set_configuration_failed\n\n set_configuration_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_configuration).result %>\n\n apply_configuration:\n with-items: node_uuid in <% $.node_uuids %>\n workflow: tripleo.baremetal.v1.manual_cleaning\n input:\n node_uuid: <% $.node_uuid %>\n clean_steps:\n - interface: raid\n step: delete_configuration\n - interface: raid\n step: create_configuration\n timeout: 1800 # building RAID should be faster than general cleaning\n retry_count: 180\n retry_delay: 10\n on-success: send_message\n on-error: apply_configuration_failed\n publish:\n message: <% task().result %>\n status: SUCCESS\n\n apply_configuration_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(apply_configuration).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: 
count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.create_raid_configuration\n payload:\n status: <% $.get('status', 'FAILED') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n\n cellv2_discovery:\n description: Run cell_v2 host discovery\n\n input:\n - node_uuids\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n cell_v2_discover_hosts:\n on-success: wait_for_nova_resources\n on-error: cell_v2_discover_hosts_failed\n action: tripleo.baremetal.cell_v2_discover_hosts\n\n cell_v2_discover_hosts_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(cell_v2_discover_hosts).result %>\n\n wait_for_nova_resources:\n on-success: send_message\n on-error: wait_for_nova_resources_failed\n with-items: node_uuid in <% $.node_uuids %>\n action: nova.hypervisors_find hypervisor_hostname=<% $.node_uuid %>\n\n wait_for_nova_resources_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(wait_for_nova_resources).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.cellv2_discovery\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n\n discover_nodes:\n description: Run nodes discovery over the given IP range\n\n input:\n - ip_addresses\n - credentials\n - ports: [623]\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n get_all_nodes:\n action: ironic.node_list\n input:\n fields: [\"uuid\", \"driver\", \"driver_info\"]\n limit: 0\n on-success: get_candidate_nodes\n on-error: get_all_nodes_failed\n publish:\n existing_nodes: <% task().result %>\n\n get_all_nodes_failed:\n on-success: 
send_message\n publish:\n status: FAILED\n message: <% task(get_all_nodes).result %>\n\n get_candidate_nodes:\n action: tripleo.baremetal.get_candidate_nodes\n input:\n ip_addresses: <% $.ip_addresses %>\n credentials: <% $.credentials %>\n ports: <% $.ports %>\n existing_nodes: <% $.existing_nodes %>\n on-success: probe_nodes\n on-error: get_candidate_nodes_failed\n publish:\n candidates: <% task().result %>\n\n get_candidate_nodes_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_candidate_nodes).result %>\n\n probe_nodes:\n action: tripleo.baremetal.probe_node\n on-success: send_message\n on-error: probe_nodes_failed\n input:\n ip: <% $.node.ip %>\n port: <% $.node.port %>\n username: <% $.node.username %>\n password: <% $.node.password %>\n with-items:\n - node in <% $.candidates %>\n publish:\n nodes_json: <% task().result.where($ != null) %>\n\n probe_nodes_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(probe_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.discover_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n nodes_json: <% $.get('nodes_json', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n discover_and_enroll_nodes:\n description: Run nodes discovery over the given IP range and enroll nodes\n\n input:\n - ip_addresses\n - credentials\n - ports: [623]\n - kernel_name: null\n - ramdisk_name: null\n - instance_boot_option: local\n - initial_state: manageable\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n discover_nodes:\n workflow: tripleo.baremetal.v1.discover_nodes\n input:\n ip_addresses: <% $.ip_addresses %>\n ports: <% $.ports %>\n credentials: <% $.credentials %>\n queue_name: <% $.queue_name %>\n on-success: enroll_nodes\n 
on-error: discover_nodes_failed\n publish:\n nodes_json: <% task().result.nodes_json %>\n\n discover_nodes_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(discover_nodes).result %>\n\n enroll_nodes:\n workflow: tripleo.baremetal.v1.register_or_update\n input:\n nodes_json: <% $.nodes_json %>\n kernel_name: <% $.kernel_name %>\n ramdisk_name: <% $.ramdisk_name %>\n instance_boot_option: <% $.instance_boot_option %>\n initial_state: <% $.initial_state %>\n on-success: send_message\n on-error: enroll_nodes_failed\n publish:\n registered_nodes: <% task().result.registered_nodes %>\n\n enroll_nodes_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(enroll_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.discover_and_enroll_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n registered_nodes: <% $.get('registered_nodes', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1", "tags": [], "created_at": "2018-06-26 04:26:44", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "cf4a45ce-3d63-44a4-bb79-4cc454eff440"} > >2018-06-26 09:56:44,178 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 09:56:44,262 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '--- >version: '2.0' >name: tripleo.storage.v1 >description: TripleO manages Ceph with ceph-ansible > >workflows: > ceph-install: > # allows for additional extra_vars via workflow input > input: > - ansible_playbook_verbosity: 0 > - ansible_skip_tags: 'package-install,with_pkg' > - 
ansible_env_variables: {} > - ansible_extra_env_variables: > ANSIBLE_CONFIG: /usr/share/ceph-ansible/ansible.cfg > ANSIBLE_ACTION_PLUGINS: /usr/share/ceph-ansible/plugins/actions/ > ANSIBLE_ROLES_PATH: /usr/share/ceph-ansible/roles/ > ANSIBLE_RETRY_FILES_ENABLED: 'False' > ANSIBLE_LOG_PATH: /var/log/mistral/ceph-install-workflow.log > ANSIBLE_LIBRARY: /usr/share/ceph-ansible/library/ > ANSIBLE_SSH_RETRIES: '3' > ANSIBLE_HOST_KEY_CHECKING: 'False' > DEFAULT_FORKS: '25' > - ceph_ansible_extra_vars: {} > - ceph_ansible_playbook: /usr/share/ceph-ansible/site-docker.yml.sample > - node_data_lookup: '{}' > tags: > - tripleo-common-managed > tasks: > collect_puppet_hieradata: > on-success: check_hieradata > publish: > hieradata: <% env().get('role_merged_configs', {}).values().select($.keys()).flatten().select(regex('^ceph::profile::params::osds$').search($)).where($ != null).toSet() %> > check_hieradata: > on-success: > - set_blacklisted_ips: <% not bool($.hieradata) %> > - fail(msg=<% 'Ceph deployment stopped, puppet-ceph hieradata found. Convert it into ceph-ansible variables. 
{0}'.format($.hieradata) %>): <% bool($.hieradata) %> > set_blacklisted_ips: > publish: > blacklisted_ips: <% env().get('blacklisted_ip_addresses', []) %> > on-success: set_ip_lists > set_ip_lists: > publish: > mgr_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_mgr_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > mon_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_mon_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > osd_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_osd_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > mds_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_mds_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > rgw_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_rgw_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > nfs_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_nfs_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > rbdmirror_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_rbdmirror_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > client_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_client_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > on-success: merge_ip_lists > merge_ip_lists: > publish: > ips_list: <% ($.mgr_ips + $.mon_ips + $.osd_ips + $.mds_ips + $.rgw_ips + $.nfs_ips + $.rbdmirror_ips + $.client_ips).toSet() %> > on-success: enable_ssh_admin > enable_ssh_admin: > workflow: tripleo.access.v1.enable_ssh_admin > input: > ssh_servers: <% $.ips_list %> > on-success: get_private_key > get_private_key: > action: tripleo.validations.get_privkey > publish: > private_key: <% task().result %> > on-success: make_fetch_directory > make_fetch_directory: > action: tripleo.files.make_temp_dir > publish: > fetch_directory: <% task().result.path %> > on-success: 
collect_nodes_uuid > collect_nodes_uuid: > action: tripleo.ansible-playbook > input: > inventory: > overcloud: > hosts: <% $.ips_list.toDict($, {}) %> > remote_user: tripleo-admin > become: true > become_user: root > verbosity: 0 > ssh_private_key: <% $.private_key %> > #NOTE(gfidente): set ANSIBLE_CALLBACK_WHITELIST to empty string to avoid spurious output > #in the json output. The publish: directive will in fact parse the output. > extra_env_variables: > ANSIBLE_CALLBACK_WHITELIST: '' > ANSIBLE_HOST_KEY_CHECKING: 'False' > ANSIBLE_STDOUT_CALLBACK: 'json' > playbook: > - hosts: overcloud > gather_facts: no > tasks: > - name: collect machine id > command: dmidecode -s system-uuid > publish: > ansible_output: <% json_parse(task().result.stderr) %> > on-success: set_ip_uuids > set_ip_uuids: > publish: > ip_uuids: <% let(root => $.ansible_output.get('plays')[0].get('tasks')[0].get('hosts')) -> $.ips_list.toDict($, $root.get($).get('stdout')) %> > on-success: parse_node_data_lookup > parse_node_data_lookup: > publish: > json_node_data_lookup: <% json_parse($.node_data_lookup) %> > on-success: map_node_data_lookup > map_node_data_lookup: > publish: > ips_data: <% let(uuids => $.ip_uuids, root => $) -> $.ips_list.toDict($, $root.json_node_data_lookup.get($uuids.get($, "NO-UUID-FOUND"), {})) %> > on-success: set_role_vars > set_role_vars: > publish: > # NOTE(gfidente): collect role settings from all tht roles > mgr_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_mgr_ansible_vars', {})).aggregate($1 + $2) %> > mon_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_mon_ansible_vars', {})).aggregate($1 + $2) %> > osd_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_osd_ansible_vars', {})).aggregate($1 + $2) %> > mds_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_mds_ansible_vars', {})).aggregate($1 + $2) %> > rgw_vars: <% env().get('role_merged_configs', 
{}).values().select($.get('ceph_rgw_ansible_vars', {})).aggregate($1 + $2) %> > nfs_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_nfs_ansible_vars', {})).aggregate($1 + $2) %> > rbdmirror_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_rbdmirror_ansible_vars', {})).aggregate($1 + $2) %> > client_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_client_ansible_vars', {})).aggregate($1 + $2) %> > on-success: build_extra_vars > build_extra_vars: > publish: > # NOTE(gfidente): merge vars from all ansible roles > extra_vars: <% {'fetch_directory'=> $.fetch_directory} + $.mgr_vars + $.mon_vars + $.osd_vars + $.mds_vars + $.rgw_vars + $.nfs_vars + $.client_vars + $.rbdmirror_vars + $.ceph_ansible_extra_vars %> > on-success: ceph_install > ceph_install: > with-items: playbook in <% list($.ceph_ansible_playbook).flatten() %> > concurrency: 1 > action: tripleo.ansible-playbook > input: > inventory: > mgrs: > hosts: <% let(root => $) -> $.mgr_ips.toDict($, $root.ips_data.get($, {})) %> > mons: > hosts: <% let(root => $) -> $.mon_ips.toDict($, $root.ips_data.get($, {})) %> > osds: > hosts: <% let(root => $) -> $.osd_ips.toDict($, $root.ips_data.get($, {})) %> > mdss: > hosts: <% let(root => $) -> $.mds_ips.toDict($, $root.ips_data.get($, {})) %> > rgws: > hosts: <% let(root => $) -> $.rgw_ips.toDict($, $root.ips_data.get($, {})) %> > nfss: > hosts: <% let(root => $) -> $.nfs_ips.toDict($, $root.ips_data.get($, {})) %> > rbdmirrors: > hosts: <% let(root => $) -> $.rbdmirror_ips.toDict($, $root.ips_data.get($, {})) %> > clients: > hosts: <% let(root => $) -> $.client_ips.toDict($, $root.ips_data.get($, {})) %> > all: > vars: <% $.extra_vars %> > playbook: <% $.playbook %> > remote_user: tripleo-admin > become: true > become_user: root > verbosity: <% $.ansible_playbook_verbosity %> > ssh_private_key: <% $.private_key %> > skip_tags: <% $.ansible_skip_tags %> > extra_env_variables: <% 
$.ansible_extra_env_variables.mergeWith($.ansible_env_variables) %> > extra_vars: > ireallymeanit: 'yes' > publish: > output: <% task().result %> > on-complete: purge_fetch_directory > purge_fetch_directory: > action: tripleo.files.remove_temp_dir path=<% $.fetch_directory %> >' >2018-06-26 09:56:44,591 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 9123 >2018-06-26 09:56:44,592 DEBUG: RESP: [201] Content-Length: 9123 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:44 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.storage.v1\ndescription: TripleO manages Ceph with ceph-ansible\n\nworkflows:\n ceph-install:\n # allows for additional extra_vars via workflow input\n input:\n - ansible_playbook_verbosity: 0\n - ansible_skip_tags: 'package-install,with_pkg'\n - ansible_env_variables: {}\n - ansible_extra_env_variables:\n ANSIBLE_CONFIG: /usr/share/ceph-ansible/ansible.cfg\n ANSIBLE_ACTION_PLUGINS: /usr/share/ceph-ansible/plugins/actions/\n ANSIBLE_ROLES_PATH: /usr/share/ceph-ansible/roles/\n ANSIBLE_RETRY_FILES_ENABLED: 'False'\n ANSIBLE_LOG_PATH: /var/log/mistral/ceph-install-workflow.log\n ANSIBLE_LIBRARY: /usr/share/ceph-ansible/library/\n ANSIBLE_SSH_RETRIES: '3'\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n DEFAULT_FORKS: '25'\n - ceph_ansible_extra_vars: {}\n - ceph_ansible_playbook: /usr/share/ceph-ansible/site-docker.yml.sample\n - node_data_lookup: '{}'\n tags:\n - tripleo-common-managed\n tasks:\n collect_puppet_hieradata:\n on-success: check_hieradata\n publish:\n hieradata: <% env().get('role_merged_configs', {}).values().select($.keys()).flatten().select(regex('^ceph::profile::params::osds$').search($)).where($ != null).toSet() %>\n check_hieradata:\n on-success:\n - set_blacklisted_ips: <% not bool($.hieradata) %>\n - fail(msg=<% 'Ceph deployment stopped, puppet-ceph hieradata found. Convert it into ceph-ansible variables. 
{0}'.format($.hieradata) %>): <% bool($.hieradata) %>\n set_blacklisted_ips:\n publish:\n blacklisted_ips: <% env().get('blacklisted_ip_addresses', []) %>\n on-success: set_ip_lists\n set_ip_lists:\n publish:\n mgr_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_mgr_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n mon_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_mon_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n osd_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_osd_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n mds_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_mds_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n rgw_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_rgw_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n nfs_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_nfs_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n rbdmirror_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_rbdmirror_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n client_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_client_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n on-success: merge_ip_lists\n merge_ip_lists:\n publish:\n ips_list: <% ($.mgr_ips + $.mon_ips + $.osd_ips + $.mds_ips + $.rgw_ips + $.nfs_ips + $.rbdmirror_ips + $.client_ips).toSet() %>\n on-success: enable_ssh_admin\n enable_ssh_admin:\n workflow: tripleo.access.v1.enable_ssh_admin\n input:\n ssh_servers: <% $.ips_list %>\n on-success: get_private_key\n get_private_key:\n action: tripleo.validations.get_privkey\n publish:\n private_key: <% task().result %>\n on-success: make_fetch_directory\n make_fetch_directory:\n action: tripleo.files.make_temp_dir\n publish:\n fetch_directory: <% task().result.path %>\n on-success: 
collect_nodes_uuid\n collect_nodes_uuid:\n action: tripleo.ansible-playbook\n input:\n inventory:\n overcloud:\n hosts: <% $.ips_list.toDict($, {}) %>\n remote_user: tripleo-admin\n become: true\n become_user: root\n verbosity: 0\n ssh_private_key: <% $.private_key %>\n #NOTE(gfidente): set ANSIBLE_CALLBACK_WHITELIST to empty string to avoid spurious output\n #in the json output. The publish: directive will in fact parse the output.\n extra_env_variables:\n ANSIBLE_CALLBACK_WHITELIST: ''\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n ANSIBLE_STDOUT_CALLBACK: 'json'\n playbook:\n - hosts: overcloud\n gather_facts: no\n tasks:\n - name: collect machine id\n command: dmidecode -s system-uuid\n publish:\n ansible_output: <% json_parse(task().result.stderr) %>\n on-success: set_ip_uuids\n set_ip_uuids:\n publish:\n ip_uuids: <% let(root => $.ansible_output.get('plays')[0].get('tasks')[0].get('hosts')) -> $.ips_list.toDict($, $root.get($).get('stdout')) %>\n on-success: parse_node_data_lookup\n parse_node_data_lookup:\n publish:\n json_node_data_lookup: <% json_parse($.node_data_lookup) %>\n on-success: map_node_data_lookup\n map_node_data_lookup:\n publish:\n ips_data: <% let(uuids => $.ip_uuids, root => $) -> $.ips_list.toDict($, $root.json_node_data_lookup.get($uuids.get($, \"NO-UUID-FOUND\"), {})) %>\n on-success: set_role_vars\n set_role_vars:\n publish:\n # NOTE(gfidente): collect role settings from all tht roles\n mgr_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_mgr_ansible_vars', {})).aggregate($1 + $2) %>\n mon_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_mon_ansible_vars', {})).aggregate($1 + $2) %>\n osd_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_osd_ansible_vars', {})).aggregate($1 + $2) %>\n mds_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_mds_ansible_vars', {})).aggregate($1 + $2) %>\n rgw_vars: <% env().get('role_merged_configs', 
{}).values().select($.get('ceph_rgw_ansible_vars', {})).aggregate($1 + $2) %>\n nfs_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_nfs_ansible_vars', {})).aggregate($1 + $2) %>\n rbdmirror_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_rbdmirror_ansible_vars', {})).aggregate($1 + $2) %>\n client_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_client_ansible_vars', {})).aggregate($1 + $2) %>\n on-success: build_extra_vars\n build_extra_vars:\n publish:\n # NOTE(gfidente): merge vars from all ansible roles\n extra_vars: <% {'fetch_directory'=> $.fetch_directory} + $.mgr_vars + $.mon_vars + $.osd_vars + $.mds_vars + $.rgw_vars + $.nfs_vars + $.client_vars + $.rbdmirror_vars + $.ceph_ansible_extra_vars %>\n on-success: ceph_install\n ceph_install:\n with-items: playbook in <% list($.ceph_ansible_playbook).flatten() %>\n concurrency: 1\n action: tripleo.ansible-playbook\n input:\n inventory:\n mgrs:\n hosts: <% let(root => $) -> $.mgr_ips.toDict($, $root.ips_data.get($, {})) %>\n mons:\n hosts: <% let(root => $) -> $.mon_ips.toDict($, $root.ips_data.get($, {})) %>\n osds:\n hosts: <% let(root => $) -> $.osd_ips.toDict($, $root.ips_data.get($, {})) %>\n mdss:\n hosts: <% let(root => $) -> $.mds_ips.toDict($, $root.ips_data.get($, {})) %>\n rgws:\n hosts: <% let(root => $) -> $.rgw_ips.toDict($, $root.ips_data.get($, {})) %>\n nfss:\n hosts: <% let(root => $) -> $.nfs_ips.toDict($, $root.ips_data.get($, {})) %>\n rbdmirrors:\n hosts: <% let(root => $) -> $.rbdmirror_ips.toDict($, $root.ips_data.get($, {})) %>\n clients:\n hosts: <% let(root => $) -> $.client_ips.toDict($, $root.ips_data.get($, {})) %>\n all:\n vars: <% $.extra_vars %>\n playbook: <% $.playbook %>\n remote_user: tripleo-admin\n become: true\n become_user: root\n verbosity: <% $.ansible_playbook_verbosity %>\n ssh_private_key: <% $.private_key %>\n skip_tags: <% $.ansible_skip_tags %>\n extra_env_variables: <% 
$.ansible_extra_env_variables.mergeWith($.ansible_env_variables) %>\n extra_vars:\n ireallymeanit: 'yes'\n publish:\n output: <% task().result %>\n on-complete: purge_fetch_directory\n purge_fetch_directory:\n action: tripleo.files.remove_temp_dir path=<% $.fetch_directory %>\n", "name": "tripleo.storage.v1", "tags": [], "created_at": "2018-06-26 04:26:44", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "f885c0b0-d74b-4b91-a497-5934fa4ab3e6"}
>
>2018-06-26 09:56:44,593 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201
>2018-06-26 09:56:44,594 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '---
>version: '2.0'
>name: tripleo.scale.v1
>description: TripleO Overcloud Deployment Workflows v1
>
>workflows:
>
> delete_node:
> description: deletes given overcloud nodes and updates the stack
>
> input:
> - container
> - nodes
> - timeout: 240
> - queue_name: tripleo
>
> tags:
> - tripleo-common-managed
>
> tasks:
>
> delete_node:
> action: tripleo.scale.delete_node nodes=<% $.nodes %> timeout=<% $.timeout %> container=<% $.container %>
> on-success: wait_for_stack_in_progress
> on-error: set_delete_node_failed
>
> set_delete_node_failed:
> on-success: send_message
> publish:
> status: FAILED
> message: <% task(delete_node).result %>
>
> wait_for_stack_in_progress:
> workflow: tripleo.stack.v1.wait_for_stack_in_progress stack=<% $.container %>
> on-success: wait_for_stack_complete
> on-error: wait_for_stack_in_progress_failed
>
> wait_for_stack_in_progress_failed:
> on-success: send_message
> publish:
> status: FAILED
> message: <% task(wait_for_stack_in_progress).result %>
>
> wait_for_stack_complete:
> workflow: tripleo.stack.v1.wait_for_stack_complete_or_failed stack=<% $.container %>
> on-success: send_message
> on-error: wait_for_stack_complete_failed
>
> wait_for_stack_complete_failed:
> on-success: send_message
> publish:
> status: FAILED
> message: <% task(wait_for_stack_complete).result %>
>
> send_message:
> action: zaqar.queue_post
> retry: count=5 delay=1
> input:
> queue_name: <% $.queue_name %>
> messages:
> body:
> type: tripleo.scale.v1.delete_node
> payload:
> status: <% $.get('status', 'SUCCESS') %>
> message: <% $.get('message', '') %>
> execution: <% execution() %>
> on-success:
> - fail: <% $.get('status') = "FAILED" %>
>'
>2018-06-26 09:56:44,755 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 2258
>2018-06-26 09:56:44,756 DEBUG: RESP: [201] Content-Length: 2258 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:44 GMT Connection: keep-alive
>RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.scale.v1\ndescription: TripleO Overcloud Deployment Workflows v1\n\nworkflows:\n\n delete_node:\n description: deletes given overcloud nodes and updates the stack\n\n input:\n - container\n - nodes\n - timeout: 240\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n delete_node:\n action: tripleo.scale.delete_node nodes=<% $.nodes %> timeout=<% $.timeout %> container=<% $.container %>\n on-success: wait_for_stack_in_progress\n on-error: set_delete_node_failed\n\n set_delete_node_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(delete_node).result %>\n\n wait_for_stack_in_progress:\n workflow: tripleo.stack.v1.wait_for_stack_in_progress stack=<% $.container %>\n on-success: wait_for_stack_complete\n on-error: wait_for_stack_in_progress_failed\n\n wait_for_stack_in_progress_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(wait_for_stack_in_progress).result %>\n\n wait_for_stack_complete:\n workflow: tripleo.stack.v1.wait_for_stack_complete_or_failed stack=<% $.container %>\n on-success: send_message\n on-error: wait_for_stack_complete_failed\n\n wait_for_stack_complete_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(wait_for_stack_complete).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.scale.v1.delete_node\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.scale.v1", "tags": [], "created_at": "2018-06-26 04:26:44", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "eb71ceb5-2707-4ee0-beb3-42bd0af02456"}
>
>2018-06-26 09:56:44,756 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201
>2018-06-26 09:56:44,757 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '---
>version: '2.0'
>name: tripleo.octavia_post.v1
>description: TripleO Octavia post deployment Workflows
>
>workflows:
>
> octavia_post_deploy:
> description: Octavia post deployment
> input:
> - amp_image_name
> - amp_image_filename
> - amp_image_tag
> - amp_ssh_key_name
> - amp_ssh_key_path
> - amp_ssh_key_data
> - auth_username
> - auth_password
> - auth_project_name
> - lb_mgmt_net_name
> - lb_mgmt_subnet_name
> - lb_sec_group_name
> - lb_mgmt_subnet_cidr
> - lb_mgmt_subnet_gateway
> - lb_mgmt_subnet_pool_start
> - lb_mgmt_subnet_pool_end
> - generate_certs
> - octavia_ansible_playbook
> - overcloud_admin
> - ca_cert_path
> - ca_private_key_path
> - ca_passphrase
> - client_cert_path
> - mgmt_port_dev
> - overcloud_password
> - overcloud_project
> - overcloud_pub_auth_uri
> - ansible_extra_env_variables:
> ANSIBLE_HOST_KEY_CHECKING: 'False'
> ANSIBLE_SSH_RETRIES: '3'
> tags:
> - tripleo-common-managed
> tasks:
> get_overcloud_stack_details:
> publish:
> # TODO(beagles), we are making an assumption about the octavia heatlh manager and
> # controller worker needing
> #
> octavia_controller_ips: <% env().get('service_ips', {}).get('octavia_worker_ctlplane_node_ips', []) %>
> on-success: enable_ssh_admin
>
> enable_ssh_admin:
> workflow: tripleo.access.v1.enable_ssh_admin
> input:
> ssh_servers: <% $.octavia_controller_ips %>
> on-success: get_private_key
>
> get_private_key:
> action: tripleo.validations.get_privkey
> publish:
> private_key: <% task().result %>
> on-success: make_local_temp_directory
>
> make_local_temp_directory:
> action: tripleo.files.make_temp_dir
> publish:
> undercloud_local_dir: <% task().result.path %>
> on-success: make_remote_temp_directory
>
> make_remote_temp_directory:
> action: tripleo.files.make_temp_dir
> publish:
> undercloud_remote_dir: <% task().result.path %>
> on-success: build_local_connection_environment_vars
>
> build_local_connection_environment_vars:
> publish:
> ansible_local_connection_variables: <% dict('ANSIBLE_REMOTE_TEMP' => $.undercloud_remote_dir, 'ANSIBLE_LOCAL_TEMP' => $.undercloud_local_dir) + $.ansible_extra_env_variables %>
> on-success: upload_amphora
>
> upload_amphora:
> action: tripleo.ansible-playbook
> input:
> inventory:
> undercloud:
> hosts:
> localhost:
> ansible_connection: local
>
> playbook: <% $.octavia_ansible_playbook %>
> remote_user: stack
> extra_env_variables: <% $.ansible_local_connection_variables %>
> extra_vars:
> os_password: <% $.overcloud_password %>
> os_username: <% $.overcloud_admin %>
> os_project_name: <% $.overcloud_project %>
> os_auth_url: <% $.overcloud_pub_auth_uri %>
> os_auth_type: "password"
> os_identity_api_version: "3"
> amp_image_name: <% $.amp_image_name %>
> amp_image_filename: <% $.amp_image_filename %>
> amp_image_tag: <% $.amp_image_tag %>
> amp_ssh_key_name: <% $.amp_ssh_key_name %>
> amp_ssh_key_path: <% $.amp_ssh_key_path %>
> amp_ssh_key_data: <% $.amp_ssh_key_data %>
> auth_username: <% $.auth_username %>
> auth_password: <% $.auth_password %>
> auth_project_name: <% $.auth_project_name %>
> on-success: config_octavia
>
> config_octavia:
> action: tripleo.ansible-playbook
> input:
> inventory:
> octavia_nodes:
> hosts: <% $.octavia_controller_ips.toDict($, {}) %>
> verbosity: 0
> playbook: <% $.octavia_ansible_playbook %>
> remote_user: tripleo-admin
> become: true
> become_user: root
> ssh_private_key: <% $.private_key %>
> ssh_common_args: '-o StrictHostKeyChecking=no'
> ssh_extra_args: '-o UserKnownHostsFile=/dev/null'
> extra_env_variables: <% $.ansible_extra_env_variables %>
> extra_vars:
> os_password: <% $.overcloud_password %>
> os_username: <% $.overcloud_admin %>
> os_project_name: <% $.overcloud_project %>
> os_auth_url: <% $.overcloud_pub_auth_uri %>
> os_auth_type: "password"
> os_identity_api_version: "3"
> amp_image_tag: <% $.amp_image_tag %>
> lb_mgmt_net_name: <% $.lb_mgmt_net_name %>
> lb_mgmt_subnet_name: <% $.lb_mgmt_subnet_name %>
> lb_sec_group_name: <% $.lb_sec_group_name %>
> lb_mgmt_subnet_cidr: <% $.lb_mgmt_subnet_cidr %>
> lb_mgmt_subnet_gateway: <% $.lb_mgmt_subnet_gateway %>
> lb_mgmt_subnet_pool_start: <% $.lb_mgmt_subnet_pool_start %>
> lb_mgmt_subnet_pool_end: <% $.lb_mgmt_subnet_pool_end %>
> ca_cert_path: <% $.ca_cert_path %>
> ca_private_key_path: <% $.ca_private_key_path %>
> ca_passphrase: <% $.ca_passphrase %>
> client_cert_path: <% $.client_cert_path %>
> generate_certs: <% $.generate_certs %>
> mgmt_port_dev: <% $.mgmt_port_dev %>
> auth_project_name: <% $.auth_project_name %>
> on-complete: purge_local_temp_dir
> purge_local_temp_dir:
> action: tripleo.files.remove_temp_dir path=<% $.undercloud_local_dir %>
> on-complete: purge_remote_temp_dir
> purge_remote_temp_dir:
> action: tripleo.files.remove_temp_dir path=<% $.undercloud_remote_dir %>
>
>'
>2018-06-26 09:56:44,967 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 6113
>2018-06-26 09:56:44,968 DEBUG: RESP: [201] 
Content-Length: 6113 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:44 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.octavia_post.v1\ndescription: TripleO Octavia post deployment Workflows\n\nworkflows:\n\n octavia_post_deploy:\n description: Octavia post deployment\n input:\n - amp_image_name\n - amp_image_filename\n - amp_image_tag\n - amp_ssh_key_name\n - amp_ssh_key_path\n - amp_ssh_key_data\n - auth_username\n - auth_password\n - auth_project_name\n - lb_mgmt_net_name\n - lb_mgmt_subnet_name\n - lb_sec_group_name\n - lb_mgmt_subnet_cidr\n - lb_mgmt_subnet_gateway\n - lb_mgmt_subnet_pool_start\n - lb_mgmt_subnet_pool_end\n - generate_certs\n - octavia_ansible_playbook\n - overcloud_admin\n - ca_cert_path\n - ca_private_key_path\n - ca_passphrase\n - client_cert_path\n - mgmt_port_dev\n - overcloud_password\n - overcloud_project\n - overcloud_pub_auth_uri\n - ansible_extra_env_variables:\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n ANSIBLE_SSH_RETRIES: '3'\n tags:\n - tripleo-common-managed\n tasks:\n get_overcloud_stack_details:\n publish:\n # TODO(beagles), we are making an assumption about the octavia heatlh manager and\n # controller worker needing\n #\n octavia_controller_ips: <% env().get('service_ips', {}).get('octavia_worker_ctlplane_node_ips', []) %>\n on-success: enable_ssh_admin\n\n enable_ssh_admin:\n workflow: tripleo.access.v1.enable_ssh_admin\n input:\n ssh_servers: <% $.octavia_controller_ips %>\n on-success: get_private_key\n\n get_private_key:\n action: tripleo.validations.get_privkey\n publish:\n private_key: <% task().result %>\n on-success: make_local_temp_directory\n\n make_local_temp_directory:\n action: tripleo.files.make_temp_dir\n publish:\n undercloud_local_dir: <% task().result.path %>\n on-success: make_remote_temp_directory\n\n make_remote_temp_directory:\n action: tripleo.files.make_temp_dir\n publish:\n undercloud_remote_dir: <% task().result.path %>\n on-success: 
build_local_connection_environment_vars\n\n build_local_connection_environment_vars:\n publish:\n ansible_local_connection_variables: <% dict('ANSIBLE_REMOTE_TEMP' => $.undercloud_remote_dir, 'ANSIBLE_LOCAL_TEMP' => $.undercloud_local_dir) + $.ansible_extra_env_variables %>\n on-success: upload_amphora\n\n upload_amphora:\n action: tripleo.ansible-playbook\n input:\n inventory:\n undercloud:\n hosts:\n localhost:\n ansible_connection: local\n\n playbook: <% $.octavia_ansible_playbook %>\n remote_user: stack\n extra_env_variables: <% $.ansible_local_connection_variables %>\n extra_vars:\n os_password: <% $.overcloud_password %>\n os_username: <% $.overcloud_admin %>\n os_project_name: <% $.overcloud_project %>\n os_auth_url: <% $.overcloud_pub_auth_uri %>\n os_auth_type: \"password\"\n os_identity_api_version: \"3\"\n amp_image_name: <% $.amp_image_name %>\n amp_image_filename: <% $.amp_image_filename %>\n amp_image_tag: <% $.amp_image_tag %>\n amp_ssh_key_name: <% $.amp_ssh_key_name %>\n amp_ssh_key_path: <% $.amp_ssh_key_path %>\n amp_ssh_key_data: <% $.amp_ssh_key_data %>\n auth_username: <% $.auth_username %>\n auth_password: <% $.auth_password %>\n auth_project_name: <% $.auth_project_name %>\n on-success: config_octavia\n\n config_octavia:\n action: tripleo.ansible-playbook\n input:\n inventory:\n octavia_nodes:\n hosts: <% $.octavia_controller_ips.toDict($, {}) %>\n verbosity: 0\n playbook: <% $.octavia_ansible_playbook %>\n remote_user: tripleo-admin\n become: true\n become_user: root\n ssh_private_key: <% $.private_key %>\n ssh_common_args: '-o StrictHostKeyChecking=no'\n ssh_extra_args: '-o UserKnownHostsFile=/dev/null'\n extra_env_variables: <% $.ansible_extra_env_variables %>\n extra_vars:\n os_password: <% $.overcloud_password %>\n os_username: <% $.overcloud_admin %>\n os_project_name: <% $.overcloud_project %>\n os_auth_url: <% $.overcloud_pub_auth_uri %>\n os_auth_type: \"password\"\n os_identity_api_version: \"3\"\n amp_image_tag: <% $.amp_image_tag 
%>\n lb_mgmt_net_name: <% $.lb_mgmt_net_name %>\n lb_mgmt_subnet_name: <% $.lb_mgmt_subnet_name %>\n lb_sec_group_name: <% $.lb_sec_group_name %>\n lb_mgmt_subnet_cidr: <% $.lb_mgmt_subnet_cidr %>\n lb_mgmt_subnet_gateway: <% $.lb_mgmt_subnet_gateway %>\n lb_mgmt_subnet_pool_start: <% $.lb_mgmt_subnet_pool_start %>\n lb_mgmt_subnet_pool_end: <% $.lb_mgmt_subnet_pool_end %>\n ca_cert_path: <% $.ca_cert_path %>\n ca_private_key_path: <% $.ca_private_key_path %>\n ca_passphrase: <% $.ca_passphrase %>\n client_cert_path: <% $.client_cert_path %>\n generate_certs: <% $.generate_certs %>\n mgmt_port_dev: <% $.mgmt_port_dev %>\n auth_project_name: <% $.auth_project_name %>\n on-complete: purge_local_temp_dir\n purge_local_temp_dir:\n action: tripleo.files.remove_temp_dir path=<% $.undercloud_local_dir %>\n on-complete: purge_remote_temp_dir\n purge_remote_temp_dir:\n action: tripleo.files.remove_temp_dir path=<% $.undercloud_remote_dir %>\n\n", "name": "tripleo.octavia_post.v1", "tags": [], "created_at": "2018-06-26 04:26:44", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "8a791b15-4dc3-43fd-a3b2-1eb3e12d4e50"}
>
>2018-06-26 09:56:44,968 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201
>2018-06-26 09:56:44,976 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '---
>version: '2.0'
>name: tripleo.fernet_keys.v1
>description: TripleO fernet key rotation workflows
>
>workflows:
>
> rotate_fernet_keys:
>
> input:
> - container
> - queue_name: tripleo
> - ansible_extra_env_variables:
> ANSIBLE_HOST_KEY_CHECKING: 'False'
>
> tags:
> - tripleo-common-managed
>
> tasks:
>
> rotate_keys:
> action: tripleo.parameters.rotate_fernet_keys container=<% $.container %>
> on-success: deploy_ssh_key
> on-error: notify_zaqar
> publish-on-error:
> status: FAILED
> message: <% task().result %>
>
> deploy_ssh_key:
> workflow: tripleo.validations.v1.copy_ssh_key
> on-success: get_privkey
> on-error: notify_zaqar
> publish-on-error:
> status: FAILED
> message: <% task().result %>
>
> get_privkey:
> action: tripleo.validations.get_privkey
> on-success: deploy_keys
> on-error: notify_zaqar
> publish-on-error:
> status: FAILED
> message: <% task().result %>
>
> deploy_keys:
> action: tripleo.ansible-playbook
> input:
> hosts: keystone
> inventory: /usr/bin/tripleo-ansible-inventory
> ssh_private_key: <% task(get_privkey).result %>
> extra_env_variables: <% $.ansible_extra_env_variables + dict(TRIPLEO_PLAN_NAME=>$.container) %>
> verbosity: 0
> remote_user: heat-admin
> become: true
> extra_vars:
> fernet_keys: <% task(rotate_keys).result %>
> use_openstack_credentials: true
> playbook: /usr/share/tripleo-common/playbooks/rotate-keys.yaml
> on-success: notify_zaqar
> publish:
> status: SUCCESS
> message: <% task().result %>
> on-error: notify_zaqar
> publish-on-error:
> status: FAILED
> message: <% task().result %>
>
> notify_zaqar:
> action: zaqar.queue_post
> input:
> queue_name: <% $.queue_name %>
> messages:
> body:
> type: tripleo.fernet_keys.v1.rotate_fernet_keys
> payload:
> status: <% $.status %>
> message: <% $.get('message', '') %>
> execution: <% execution() %>
> on-success:
> - fail: <% $.get('status') = "FAILED" %>
>'
>2018-06-26 09:56:45,119 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 2609
>2018-06-26 09:56:45,120 DEBUG: RESP: [201] Content-Length: 2609 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:45 GMT Connection: keep-alive
>RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.fernet_keys.v1\ndescription: TripleO fernet key rotation workflows\n\nworkflows:\n\n rotate_fernet_keys:\n\n input:\n - container\n - queue_name: tripleo\n - ansible_extra_env_variables:\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n rotate_keys:\n action: tripleo.parameters.rotate_fernet_keys container=<% $.container %>\n on-success: deploy_ssh_key\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n deploy_ssh_key:\n workflow: tripleo.validations.v1.copy_ssh_key\n on-success: get_privkey\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_privkey:\n action: tripleo.validations.get_privkey\n on-success: deploy_keys\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n deploy_keys:\n action: tripleo.ansible-playbook\n input:\n hosts: keystone\n inventory: /usr/bin/tripleo-ansible-inventory\n ssh_private_key: <% task(get_privkey).result %>\n extra_env_variables: <% $.ansible_extra_env_variables + dict(TRIPLEO_PLAN_NAME=>$.container) %>\n verbosity: 0\n remote_user: heat-admin\n become: true\n extra_vars:\n fernet_keys: <% task(rotate_keys).result %>\n use_openstack_credentials: true\n playbook: /usr/share/tripleo-common/playbooks/rotate-keys.yaml\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.fernet_keys.v1.rotate_fernet_keys\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.fernet_keys.v1", "tags": [], "created_at": "2018-06-26 04:26:45", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "b1b21be4-a014-4093-8bae-efcf68ce7756"}
>
>2018-06-26 09:56:45,120 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201
>2018-06-26 09:56:45,121 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '---
>version: '2.0'
>name: tripleo.swift_ring.v1
>description: Rebalance and distribute Swift rings using Ansible
>
>
>workflows:
> rebalance:
> tags:
> - tripleo-common-managed
>
> tasks:
> get_private_key:
> action: tripleo.validations.get_privkey
> on-success: deploy_rings
>
> deploy_rings:
> action: tripleo.ansible-playbook
> publish:
> output: <% task().result %>
> input:
> ssh_private_key: <% task(get_private_key).result %>
> ssh_common_args: '-o StrictHostKeyChecking=no'
> ssh_extra_args: '-o UserKnownHostsFile=/dev/null'
> verbosity: 1
> remote_user: heat-admin
> become: true
> become_user: root
> playbook: /usr/share/tripleo-common/playbooks/swift_ring_rebalance.yaml
> inventory: /usr/bin/tripleo-ansible-inventory
> use_openstack_credentials: true
>'
>2018-06-26 09:56:45,180 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 1140
>2018-06-26 09:56:45,181 DEBUG: RESP: [201] Content-Length: 1140 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:45 GMT Connection: keep-alive
>RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.swift_ring.v1\ndescription: Rebalance and distribute Swift rings using Ansible\n\n\nworkflows:\n rebalance:\n tags:\n - tripleo-common-managed\n\n tasks:\n get_private_key:\n action: tripleo.validations.get_privkey\n on-success: deploy_rings\n\n deploy_rings:\n action: tripleo.ansible-playbook\n publish:\n output: <% task().result %>\n input:\n ssh_private_key: <% task(get_private_key).result %>\n ssh_common_args: '-o StrictHostKeyChecking=no'\n ssh_extra_args: '-o UserKnownHostsFile=/dev/null'\n verbosity: 1\n remote_user: heat-admin\n become: true\n become_user: root\n playbook: /usr/share/tripleo-common/playbooks/swift_ring_rebalance.yaml\n inventory: /usr/bin/tripleo-ansible-inventory\n use_openstack_credentials: true\n", "name": "tripleo.swift_ring.v1", "tags": [], "created_at": "2018-06-26 04:26:45", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "39757856-dbd7-41f3-ad84-d1a9a7398cee"}
>
>2018-06-26 09:56:45,181 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201
>2018-06-26 09:56:45,182 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '---
>version: '2.0'
>name: tripleo.networks.v1
>description: TripleO Overcloud Networks Workflows v1
>
>workflows:
>
> validate_networks_input:
> description: >
> Validate that required fields are present.
>
> input:
> - networks
> - queue_name: tripleo
>
> output:
> result: <% task(validate_network_names).result %>
>
> tags:
> - tripleo-common-managed
>
> tasks:
> validate_network_names:
> publish:
> network_name_present: <% $.networks.all($.containsKey('name')) %>
> on-success:
> - set_status_success: <% $.network_name_present = true %>
> - set_status_error: <% $.network_name_present = false %>
> publish-on-error:
> status: FAILED
> message: <% task().result %>
> on-error: notify_zaqar
>
> set_status_success:
> on-success: notify_zaqar
> publish:
> status: SUCCESS
> message: <% task(validate_network_names).result %>
>
> set_status_error:
> on-success: notify_zaqar
> publish:
> status: FAILED
> message: "One or more entries did not contain the required field 'name'"
>
> notify_zaqar:
> action: zaqar.queue_post
> input:
> queue_name: <% $.queue_name %>
> messages:
> body:
> type: tripleo.networks.v1.validate_networks_input
> payload:
> status: <% $.status %>
> message: <% $.get('message', '') %>
> execution: <% execution() %>
> on-success:
> - fail: <% $.get('status') = "FAILED" %>
>
> update_networks:
> description: >
> Takes data in networks parameter in json format, validates its contents,
> and persists them in network_data.yaml. After successful update,
> templates are regenerated.
>
> input:
> - container: overcloud
> - networks
> - network_data_file: 'network_data.yaml'
> - queue_name: tripleo
>
> output:
> network_data: <% $.network_data %>
>
> tags:
> - tripleo-common-managed
>
> tasks:
> validate_input:
> description: >
> validate the format of input (input includes required fields for
> each network)
> workflow: validate_networks_input
> input:
> networks: <% $.networks %>
> on-success: validate_network_files
> publish-on-error:
> status: FAILED
> message: <% task().result %>
> on-error: notify_zaqar
>
> validate_network_files:
> description: >
> validate that Network names exist in Swift container
> workflow: tripleo.plan_management.v1.validate_network_files
> input:
> container: <% $.container %>
> network_data: <% $.networks %>
> queue_name: <% $.queue_name %>
> publish:
> network_data: <% task().network_data %>
> on-success: get_available_networks
> publish-on-error:
> status: FAILED
> message: <% task().result %>
> on-error: notify_zaqar
>
> get_available_networks:
> workflow: tripleo.plan_management.v1.list_available_networks
> input:
> container: <% $.container %>
> queue_name: <% $.queue_name %>
> publish:
> available_networks: <% task().result.available_networks %>
> on-success: get_current_networks
> publish-on-error:
> status: FAILED
> message: <% task().result %>
> on-error: notify_zaqar
>
> get_current_networks:
> workflow: tripleo.plan_management.v1.get_network_data
> input:
> container: <% $.container %>
> network_data_file: <% $.network_data_file %>
> queue_name: <% $.queue_name %>
> publish:
> current_networks: <% task().result.network_data %>
> on-success: update_network_data
> publish-on-error:
> status: FAILED
> message: <% task().result %>
> on-error: notify_zaqar
>
> update_network_data:
> description: >
> Combine (or replace) the network data
> action: tripleo.plan.update_networks
> input:
> networks: <% $.available_networks %>
> current_networks: <% $.current_networks %>
> remove_all: false
> publish:
> new_network_data: <% task().result.network_data %>
> on-success: update_network_data_in_swift
> publish-on-error:
> status: FAILED
> message: <% task().result %>
> on-error: notify_zaqar
>
> update_network_data_in_swift:
> description: >
> update network_data.yaml object in Swift with data from workflow input
> action: swift.put_object
> input:
> container: <% $.container %>
> obj: <% $.network_data_file %>
> contents: <% yaml_dump($.new_network_data) %>
> on-success: regenerate_templates
> publish-on-error:
> status: FAILED
> message: <% task().result %>
> on-error: notify_zaqar
>
> regenerate_templates:
> action: tripleo.templates.process container=<% $.container %>
> on-success: get_networks
> publish-on-error:
> status: FAILED
> message: <% task().result %>
> on-error: notify_zaqar
>
> get_networks:
> description: >
> run GetNetworksAction to get updated contents of network_data.yaml and
> provide it as output
> workflow: tripleo.plan_management.v1.get_network_data
> input:
> container: <% $.container %>
> network_data_file: <% $.network_data_file %>
> queue_name: <% $.queue_name %>
> publish:
> network_data: <% task().network_data %>
> on-success: set_status_success
> publish-on-error:
> status: FAILED
> message: <% task().result %>
> on-error: notify_zaqar
>
> set_status_success:
> on-success: notify_zaqar
> publish:
> status: SUCCESS
> message: <% task(get_networks).result %>
>
> notify_zaqar:
> action: zaqar.queue_post
> input:
> queue_name: <% $.queue_name %>
> messages:
> body:
> type: tripleo.networks.v1.update_networks
> payload:
> status: <% $.status %>
> message: <% $.get('message', '') %>
> execution: <% execution() %>
> on-success:
> - fail: <% $.get('status') = "FAILED" %>
>'
>2018-06-26 09:56:45,520 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 6800
>2018-06-26 09:56:45,521 DEBUG: RESP: [201] Content-Length: 6800 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:45 GMT Connection: keep-alive
>RESP BODY: {"definition": 
"---\nversion: '2.0'\nname: tripleo.networks.v1\ndescription: TripleO Overcloud Networks Workflows v1\n\nworkflows:\n\n validate_networks_input:\n description: >\n Validate that required fields are present.\n\n input:\n - networks\n - queue_name: tripleo\n\n output:\n result: <% task(validate_network_names).result %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n validate_network_names:\n publish:\n network_name_present: <% $.networks.all($.containsKey('name')) %>\n on-success:\n - set_status_success: <% $.network_name_present = true %>\n - set_status_error: <% $.network_name_present = false %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(validate_network_names).result %>\n\n set_status_error:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: \"One or more entries did not contain the required field 'name'\"\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.networks.v1.validate_networks_input\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n update_networks:\n description: >\n Takes data in networks parameter in json format, validates its contents,\n and persists them in network_data.yaml. 
After successful update,\n templates are regenerated.\n\n input:\n - container: overcloud\n - networks\n - network_data_file: 'network_data.yaml'\n - queue_name: tripleo\n\n output:\n network_data: <% $.network_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n validate_input:\n description: >\n validate the format of input (input includes required fields for\n each network)\n workflow: validate_networks_input\n input:\n networks: <% $.networks %>\n on-success: validate_network_files\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n validate_network_files:\n description: >\n validate that Network names exist in Swift container\n workflow: tripleo.plan_management.v1.validate_network_files\n input:\n container: <% $.container %>\n network_data: <% $.networks %>\n queue_name: <% $.queue_name %>\n publish:\n network_data: <% task().network_data %>\n on-success: get_available_networks\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n get_available_networks:\n workflow: tripleo.plan_management.v1.list_available_networks\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n publish:\n available_networks: <% task().result.available_networks %>\n on-success: get_current_networks\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n get_current_networks:\n workflow: tripleo.plan_management.v1.get_network_data\n input:\n container: <% $.container %>\n network_data_file: <% $.network_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n current_networks: <% task().result.network_data %>\n on-success: update_network_data\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n update_network_data:\n description: >\n Combine (or replace) the network data\n action: tripleo.plan.update_networks\n input:\n networks: <% $.available_networks %>\n current_networks: <% 
$.current_networks %>\n remove_all: false\n publish:\n new_network_data: <% task().result.network_data %>\n on-success: update_network_data_in_swift\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n update_network_data_in_swift:\n description: >\n update network_data.yaml object in Swift with data from workflow input\n action: swift.put_object\n input:\n container: <% $.container %>\n obj: <% $.network_data_file %>\n contents: <% yaml_dump($.new_network_data) %>\n on-success: regenerate_templates\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n regenerate_templates:\n action: tripleo.templates.process container=<% $.container %>\n on-success: get_networks\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n get_networks:\n description: >\n run GetNetworksAction to get updated contents of network_data.yaml and\n provide it as output\n workflow: tripleo.plan_management.v1.get_network_data\n input:\n container: <% $.container %>\n network_data_file: <% $.network_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n network_data: <% task().network_data %>\n on-success: set_status_success\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(get_networks).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.networks.v1.update_networks\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.networks.v1", "tags": [], "created_at": "2018-06-26 04:26:45", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "ef92c55f-7d92-4808-bc2d-88757ba27c36"} > >2018-06-26 
09:56:45,521 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 09:56:45,522 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '--- >version: '2.0' >name: tripleo.package_update.v1 >description: TripleO update workflows > >workflows: > > # Updates a workload cloud stack > package_update_plan: > description: Take a container and perform a package update with possible breakpoints > > input: > - container > - container_registry > - ceph_ansible_playbook > - timeout: 240 > - queue_name: tripleo > - skip_deploy_identifier: False > - config_dir: '/tmp/' > > tags: > - tripleo-common-managed > > tasks: > update: > action: tripleo.package_update.update_stack > input: > timeout: <% $.timeout %> > container: <% $.container %> > container_registry: <% $.container_registry %> > ceph_ansible_playbook: <% $.ceph_ansible_playbook %> > on-success: clean_plan > on-error: set_update_failed > > clean_plan: > action: tripleo.plan.update_plan_environment > input: > container: <% $.container %> > parameter: CephAnsiblePlaybook > env_key: parameter_defaults > delete: true > on-success: send_message > on-error: set_update_failed > > > set_update_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(update).result %> > > send_message: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.package_update.v1.package_update_plan > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > get_config: > input: > - container > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > get_config: > action: tripleo.config.get_overcloud_config container=<% $.container %> > publish: 
> status: SUCCESS > message: <% task().result %> > publish-on-error: > status: FAILED > message: Init Minor update failed > on-complete: send_message > > send_message: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.package_update.v1.package_update_plan > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > update_nodes: > description: Take a container and perform an update nodes by nodes > > input: > - node_user: heat-admin > - nodes > - playbook > - inventory_file > - ansible_queue_name: tripleo > - module_path: /usr/share/ansible-modules > - ansible_extra_env_variables: > ANSIBLE_LOG_PATH: /var/log/mistral/package_update.log > ANSIBLE_HOST_KEY_CHECKING: 'False' > - verbosity: 1 > - work_dir: /var/lib/mistral > - skip_tags: '' > > tags: > - tripleo-common-managed > > tasks: > download_config: > action: tripleo.config.download_config > input: > work_dir: <% $.work_dir %>/<% execution().id %> > on-success: get_private_key > on-error: node_update_failed > > get_private_key: > action: tripleo.validations.get_privkey > publish: > private_key: <% task().result %> > on-success: node_update > > node_update: > action: tripleo.ansible-playbook > input: > inventory: <% $.inventory_file %> > playbook: <% $.work_dir %>/<% execution().id %>/<% $.playbook %> > remote_user: <% $.node_user %> > become: true > become_user: root > verbosity: <% $.verbosity %> > ssh_private_key: <% $.private_key %> > extra_env_variables: <% $.ansible_extra_env_variables %> > limit_hosts: <% $.nodes %> > module_path: <% $.module_path %> > queue_name: <% $.ansible_queue_name %> > execution_id: <% execution().id %> > skip_tags: <% $.skip_tags %> > trash_output: true > on-success: > - node_update_passed: <% task().result.returncode = 0 %> > - node_update_failed: <% task().result.returncode != 0 %> > on-error: 
node_update_failed > publish: > output: <% task().result %> > > node_update_passed: > on-success: notify_zaqar > publish: > status: SUCCESS > message: Updated nodes - <% $.nodes %> > > node_update_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: Failed to update nodes - <% $.nodes %>, please see the logs. > > notify_zaqar: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.ansible_queue_name %> > messages: > body: > type: tripleo.package_update.v1.update_nodes > payload: > status: <% $.status %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > update_converge_plan: > description: Take a container and perform the converge for minor update > > input: > - container > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > remove_noop: > action: tripleo.plan.remove_noop_deploystep > input: > container: <% $.container %> > on-success: send_message > on-error: set_update_failed > > set_update_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(remove_noop).result %> > > send_message: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.package_update.v1.update_converge_plan > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > converge_upgrade_plan: > description: Take a container and perform the converge step of a major upgrade > > input: > - container > - timeout: 240 > - queue_name: tripleo > - skip_deploy_identifier: False > > tags: > - tripleo-common-managed > > tasks: > remove_noop: > action: tripleo.plan.remove_noop_deploystep > input: > container: <% $.container %> > on-success: send_message > on-error: set_update_failed > > set_update_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(upgrade_converge).result %> 
> > send_message: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.major_upgrade.v1.converge_upgrade_plan > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > ffwd_upgrade_converge_plan: > description: ffwd-upgrade converge removes DeploymentSteps no-op from plan > > input: > - container > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > remove_noop: > action: tripleo.plan.remove_noop_deploystep > input: > container: <% $.container %> > on-success: send_message > on-error: set_update_failed > > set_update_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(remove_noop).result %> > > send_message: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.package_update.v1.ffwd_upgrade_converge_plan > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> >' >2018-06-26 09:56:46,034 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 8946 >2018-06-26 09:56:46,035 DEBUG: RESP: [201] Content-Length: 8946 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:46 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.package_update.v1\ndescription: TripleO update workflows\n\nworkflows:\n\n # Updates a workload cloud stack\n package_update_plan:\n description: Take a container and perform a package update with possible breakpoints\n\n input:\n - container\n - container_registry\n - ceph_ansible_playbook\n - timeout: 240\n - queue_name: tripleo\n - skip_deploy_identifier: False\n - config_dir: '/tmp/'\n\n tags:\n - tripleo-common-managed\n\n tasks:\n update:\n action: tripleo.package_update.update_stack\n input:\n 
timeout: <% $.timeout %>\n container: <% $.container %>\n container_registry: <% $.container_registry %>\n ceph_ansible_playbook: <% $.ceph_ansible_playbook %>\n on-success: clean_plan\n on-error: set_update_failed\n\n clean_plan:\n action: tripleo.plan.update_plan_environment\n input:\n container: <% $.container %>\n parameter: CephAnsiblePlaybook\n env_key: parameter_defaults\n delete: true\n on-success: send_message\n on-error: set_update_failed\n\n\n set_update_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(update).result %>\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.package_update.v1.package_update_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n get_config:\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_config:\n action: tripleo.config.get_overcloud_config container=<% $.container %>\n publish:\n status: SUCCESS\n message: <% task().result %>\n publish-on-error:\n status: FAILED\n message: Init Minor update failed\n on-complete: send_message\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.package_update.v1.package_update_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n update_nodes:\n description: Take a container and perform an update nodes by nodes\n\n input:\n - node_user: heat-admin\n - nodes\n - playbook\n - inventory_file\n - ansible_queue_name: tripleo\n - module_path: /usr/share/ansible-modules\n - ansible_extra_env_variables:\n ANSIBLE_LOG_PATH: /var/log/mistral/package_update.log\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n - verbosity: 
1\n - work_dir: /var/lib/mistral\n - skip_tags: ''\n\n tags:\n - tripleo-common-managed\n\n tasks:\n download_config:\n action: tripleo.config.download_config\n input:\n work_dir: <% $.work_dir %>/<% execution().id %>\n on-success: get_private_key\n on-error: node_update_failed\n\n get_private_key:\n action: tripleo.validations.get_privkey\n publish:\n private_key: <% task().result %>\n on-success: node_update\n\n node_update:\n action: tripleo.ansible-playbook\n input:\n inventory: <% $.inventory_file %>\n playbook: <% $.work_dir %>/<% execution().id %>/<% $.playbook %>\n remote_user: <% $.node_user %>\n become: true\n become_user: root\n verbosity: <% $.verbosity %>\n ssh_private_key: <% $.private_key %>\n extra_env_variables: <% $.ansible_extra_env_variables %>\n limit_hosts: <% $.nodes %>\n module_path: <% $.module_path %>\n queue_name: <% $.ansible_queue_name %>\n execution_id: <% execution().id %>\n skip_tags: <% $.skip_tags %>\n trash_output: true\n on-success:\n - node_update_passed: <% task().result.returncode = 0 %>\n - node_update_failed: <% task().result.returncode != 0 %>\n on-error: node_update_failed\n publish:\n output: <% task().result %>\n\n node_update_passed:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: Updated nodes - <% $.nodes %>\n\n node_update_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: Failed to update nodes - <% $.nodes %>, please see the logs.\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.ansible_queue_name %>\n messages:\n body:\n type: tripleo.package_update.v1.update_nodes\n payload:\n status: <% $.status %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n update_converge_plan:\n description: Take a container and perform the converge for minor update\n\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n remove_noop:\n action: 
tripleo.plan.remove_noop_deploystep\n input:\n container: <% $.container %>\n on-success: send_message\n on-error: set_update_failed\n\n set_update_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(remove_noop).result %>\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.package_update.v1.update_converge_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n converge_upgrade_plan:\n description: Take a container and perform the converge step of a major upgrade\n\n input:\n - container\n - timeout: 240\n - queue_name: tripleo\n - skip_deploy_identifier: False\n\n tags:\n - tripleo-common-managed\n\n tasks:\n remove_noop:\n action: tripleo.plan.remove_noop_deploystep\n input:\n container: <% $.container %>\n on-success: send_message\n on-error: set_update_failed\n\n set_update_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(upgrade_converge).result %>\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.major_upgrade.v1.converge_upgrade_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n ffwd_upgrade_converge_plan:\n description: ffwd-upgrade converge removes DeploymentSteps no-op from plan\n\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n remove_noop:\n action: tripleo.plan.remove_noop_deploystep\n input:\n container: <% $.container %>\n on-success: send_message\n on-error: set_update_failed\n\n set_update_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(remove_noop).result %>\n\n send_message:\n action: 
zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.package_update.v1.ffwd_upgrade_converge_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.package_update.v1", "tags": [], "created_at": "2018-06-26 04:26:45", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "fc9b15c1-cc31-428d-8ad6-3c6babac7f28"} > >2018-06-26 09:56:46,036 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 09:56:46,147 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '--- >version: '2.0' >name: tripleo.undercloud_backup.v1 >description: TripleO Undercloud backup workflows > >workflows: > > backup: > description: This workflow will launch the Undercloud backup > tags: > - tripleo-common-managed > input: > - sources_path: '/home/stack/' > - queue_name: tripleo > tasks: > # Action to know if there is enough available space > # to run the Undercloud backup > get_free_space: > action: tripleo.undercloud.get_free_space > publish: > status: SUCCESS > message: <% task().result %> > free_space: <% task().result %> > on-success: create_backup_dir > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > # We create a temp directory to store the Undercloud > # backup > create_backup_dir: > action: tripleo.undercloud.create_backup_dir > publish: > status: SUCCESS > message: <% task().result %> > backup_path: <% task().result %> > on-success: get_database_credentials > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > # The Undercloud database password for the root > # user is stored in a Mistral 
environment, we > # need the password in order to run the database dump > get_database_credentials: > action: mistral.environments_get name='tripleo.undercloud-config' > publish: > status: SUCCESS > message: <% task().result %> > undercloud_db_password: <% task(get_database_credentials).result.variables.undercloud_db_password %> > on-success: create_database_backup > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > # Run the DB dump of all the databases and store the result > # in the temporary folder > create_database_backup: > input: > path: <% $.backup_path.path %> > dbuser: root > dbpassword: <% $.undercloud_db_password %> > action: tripleo.undercloud.create_database_backup > publish: > status: SUCCESS > message: <% task().result %> > on-success: create_fs_backup > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > # This action will run the fs backup > create_fs_backup: > input: > sources_path: <% $.sources_path %> > path: <% $.backup_path.path %> > action: tripleo.undercloud.create_file_system_backup > publish: > status: SUCCESS > message: <% task().result %> > on-success: upload_backup > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > # This action will push the backup to swift > upload_backup: > input: > backup_path: <% $.backup_path.path %> > action: tripleo.undercloud.upload_backup_to_swift > publish: > status: SUCCESS > message: <% task().result %> > on-success: cleanup_backup > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > # This action will remove the backup temp folder > cleanup_backup: > input: > path: <% $.backup_path.path %> > action: tripleo.undercloud.remove_temp_dir > publish: > status: SUCCESS > message: <% task().result %> > on-success: send_message > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > # Sending a 
message to show that the backup finished > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.undercloud_backup.v1.launch > payload: > status: <% $.get('status', 'SUCCESS') %> > execution: <% execution() %> > message: <% $.get('message', '') %> > on-success: > - fail: <% $.get('status') = "FAILED" %> >' >2018-06-26 09:56:46,360 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 4669 >2018-06-26 09:56:46,360 DEBUG: RESP: [201] Content-Length: 4669 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:46 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.undercloud_backup.v1\ndescription: TripleO Undercloud backup workflows\n\nworkflows:\n\n backup:\n description: This workflow will launch the Undercloud backup\n tags:\n - tripleo-common-managed\n input:\n - sources_path: '/home/stack/'\n - queue_name: tripleo\n tasks:\n # Action to know if there is enough available space\n # to run the Undercloud backup\n get_free_space:\n action: tripleo.undercloud.get_free_space\n publish:\n status: SUCCESS\n message: <% task().result %>\n free_space: <% task().result %>\n on-success: create_backup_dir\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # We create a temp directory to store the Undercloud\n # backup\n create_backup_dir:\n action: tripleo.undercloud.create_backup_dir\n publish:\n status: SUCCESS\n message: <% task().result %>\n backup_path: <% task().result %>\n on-success: get_database_credentials\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # The Undercloud database password for the root\n # user is stored in a Mistral environment, we\n # need the password in order to run the database dump\n get_database_credentials:\n action: mistral.environments_get name='tripleo.undercloud-config'\n publish:\n status: SUCCESS\n 
message: <% task().result %>\n undercloud_db_password: <% task(get_database_credentials).result.variables.undercloud_db_password %>\n on-success: create_database_backup\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # Run the DB dump of all the databases and store the result\n # in the temporary folder\n create_database_backup:\n input:\n path: <% $.backup_path.path %>\n dbuser: root\n dbpassword: <% $.undercloud_db_password %>\n action: tripleo.undercloud.create_database_backup\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: create_fs_backup\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # This action will run the fs backup\n create_fs_backup:\n input:\n sources_path: <% $.sources_path %>\n path: <% $.backup_path.path %>\n action: tripleo.undercloud.create_file_system_backup\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: upload_backup\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # This action will push the backup to swift\n upload_backup:\n input:\n backup_path: <% $.backup_path.path %>\n action: tripleo.undercloud.upload_backup_to_swift\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: cleanup_backup\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # This action will remove the backup temp folder\n cleanup_backup:\n input:\n path: <% $.backup_path.path %>\n action: tripleo.undercloud.remove_temp_dir\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: send_message\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # Sending a message to show that the backup finished\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: 
tripleo.undercloud_backup.v1.launch\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n execution: <% execution() %>\n message: <% $.get('message', '') %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.undercloud_backup.v1", "tags": [], "created_at": "2018-06-26 04:26:46", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "fed5ce09-f486-43a5-81f8-2924993d83eb"} > >2018-06-26 09:56:46,360 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 09:56:46,361 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '--- >version: '2.0' >name: tripleo.skydive_ansible.v1 >description: TripleO manages Skydive with skydive-ansible > >workflows: > skydive_install: > # allows for additional extra_vars via workflow input > input: > - ansible_playbook_verbosity: 0 > - ansible_extra_env_variables: > ANSIBLE_ROLES_PATH: /usr/share/skydive-ansible/roles/ > ANSIBLE_RETRY_FILES_ENABLED: 'False' > ANSIBLE_LOG_PATH: /var/log/mistral/skydive-install-workflow.log > ANSIBLE_HOST_KEY_CHECKING: 'False' > - skydive_ansible_extra_vars: {} > - skydive_ansible_playbook: /usr/share/skydive-ansible/playbook.yml.sample > tags: > - tripleo-common-managed > tasks: > set_blacklisted_ips: > publish: > blacklisted_ips: <% env().get('blacklisted_ip_addresses', []) %> > on-success: set_ip_lists > set_ip_lists: > publish: > agent_ips: <% let(root => $) -> env().get('service_ips', {}).get('skydive_agent_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > analyzer_ips: <% let(root => $) -> env().get('service_ips', {}).get('skydive_analyzer_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > on-success: enable_ssh_admin > enable_ssh_admin: > workflow: tripleo.access.v1.enable_ssh_admin > input: > ssh_servers: <% 
($.agent_ips + $.analyzer_ips).toSet() %> > on-success: get_private_key > get_private_key: > action: tripleo.validations.get_privkey > publish: > private_key: <% task().result %> > on-success: set_fork_count > set_fork_count: > publish: # unique list of all IPs: make each list a set, take unions and count > fork_count: <% min($.agent_ips.toSet().union($.analyzer_ips.toSet()).count(), 100) %> # don't use >100 forks > on-success: set_role_vars > set_role_vars: > publish: > # NOTE(sbaubeau): collect role settings from all tht roles > agent_vars: <% env().get('role_merged_configs', {}).values().select($.get('skydive_agent_ansible_vars', {})).aggregate($1 + $2) %> > analyzer_vars: <% env().get('role_merged_configs', {}).values().select($.get('skydive_analyzer_ansible_vars', {})).aggregate($1 + $2) %> > on-success: build_extra_vars > build_extra_vars: > publish: > # NOTE(sbaubeau): merge vars from all ansible roles > extra_vars: <% $.agent_vars + $.analyzer_vars + $.skydive_ansible_extra_vars %> > on-success: skydive_install > skydive_install: > action: tripleo.ansible-playbook > input: > inventory: > agents: > hosts: <% $.agent_ips.toDict($, {}) %> > analyzers: > hosts: <% $.analyzer_ips.toDict($, {}) %> > playbook: <% $.skydive_ansible_playbook %> > remote_user: tripleo-admin > become: true > become_user: root > verbosity: <% $.ansible_playbook_verbosity %> > forks: <% $.fork_count %> > ssh_private_key: <% $.private_key %> > extra_env_variables: <% $.ansible_extra_env_variables %> > extra_vars: <% $.extra_vars %> > publish: > output: <% task().result %> >' >2018-06-26 09:56:46,523 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 3507 >2018-06-26 09:56:46,524 DEBUG: RESP: [201] Content-Length: 3507 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:46 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.skydive_ansible.v1\ndescription: TripleO manages Skydive with skydive-ansible\n\nworkflows:\n 
skydive_install:\n # allows for additional extra_vars via workflow input\n input:\n - ansible_playbook_verbosity: 0\n - ansible_extra_env_variables:\n ANSIBLE_ROLES_PATH: /usr/share/skydive-ansible/roles/\n ANSIBLE_RETRY_FILES_ENABLED: 'False'\n ANSIBLE_LOG_PATH: /var/log/mistral/skydive-install-workflow.log\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n - skydive_ansible_extra_vars: {}\n - skydive_ansible_playbook: /usr/share/skydive-ansible/playbook.yml.sample\n tags:\n - tripleo-common-managed\n tasks:\n set_blacklisted_ips:\n publish:\n blacklisted_ips: <% env().get('blacklisted_ip_addresses', []) %>\n on-success: set_ip_lists\n set_ip_lists:\n publish:\n agent_ips: <% let(root => $) -> env().get('service_ips', {}).get('skydive_agent_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n analyzer_ips: <% let(root => $) -> env().get('service_ips', {}).get('skydive_analyzer_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n on-success: enable_ssh_admin\n enable_ssh_admin:\n workflow: tripleo.access.v1.enable_ssh_admin\n input:\n ssh_servers: <% ($.agent_ips + $.analyzer_ips).toSet() %>\n on-success: get_private_key\n get_private_key:\n action: tripleo.validations.get_privkey\n publish:\n private_key: <% task().result %>\n on-success: set_fork_count\n set_fork_count:\n publish: # unique list of all IPs: make each list a set, take unions and count\n fork_count: <% min($.agent_ips.toSet().union($.analyzer_ips.toSet()).count(), 100) %> # don't use >100 forks\n on-success: set_role_vars\n set_role_vars:\n publish:\n # NOTE(sbaubeau): collect role settings from all tht roles\n agent_vars: <% env().get('role_merged_configs', {}).values().select($.get('skydive_agent_ansible_vars', {})).aggregate($1 + $2) %>\n analyzer_vars: <% env().get('role_merged_configs', {}).values().select($.get('skydive_analyzer_ansible_vars', {})).aggregate($1 + $2) %>\n on-success: build_extra_vars\n build_extra_vars:\n publish:\n # NOTE(sbaubeau): merge vars from all 
ansible roles\n extra_vars: <% $.agent_vars + $.analyzer_vars + $.skydive_ansible_extra_vars %>\n on-success: skydive_install\n skydive_install:\n action: tripleo.ansible-playbook\n input:\n inventory:\n agents:\n hosts: <% $.agent_ips.toDict($, {}) %>\n analyzers:\n hosts: <% $.analyzer_ips.toDict($, {}) %>\n playbook: <% $.skydive_ansible_playbook %>\n remote_user: tripleo-admin\n become: true\n become_user: root\n verbosity: <% $.ansible_playbook_verbosity %>\n forks: <% $.fork_count %>\n ssh_private_key: <% $.private_key %>\n extra_env_variables: <% $.ansible_extra_env_variables %>\n extra_vars: <% $.extra_vars %>\n publish:\n output: <% task().result %>\n", "name": "tripleo.skydive_ansible.v1", "tags": [], "created_at": "2018-06-26 04:26:46", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "c219fcf4-2cd3-45c9-8383-f5865917e2c2"} > >2018-06-26 09:56:46,524 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 09:56:46,525 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '--- >version: '2.0' >name: tripleo.derive_params.v1 >description: TripleO Workflows to derive deployment parameters from the introspected data > >workflows: > > derive_parameters: > description: The main workflow for deriving parameters from the introspected data > > input: > - plan: overcloud > - queue_name: tripleo > - user_inputs: {} > > tags: > - tripleo-common-managed > > tasks: > get_flattened_parameters: > action: tripleo.parameters.get_flatten container=<% $.plan %> > publish: > environment_parameters: <% task().result.environment_parameters %> > heat_resource_tree: <% task().result.heat_resource_tree %> > on-success: > - get_roles: <% $.environment_parameters and $.heat_resource_tree %> > - set_status_failed_get_flattened_parameters: <% (not 
$.environment_parameters) or (not $.heat_resource_tree) %> > on-error: set_status_failed_get_flattened_parameters > > get_roles: > action: tripleo.role.list container=<% $.plan %> > publish: > role_name_list: <% task().result %> > on-success: > - get_valid_roles: <% $.role_name_list %> > - set_status_failed_get_roles: <% not $.role_name_list %> > on-error: set_status_failed_on_error_get_roles > > # Obtain only the roles which has count > 0, by checking <RoleName>Count parameter, like ComputeCount > get_valid_roles: > publish: > valid_role_name_list: <% let(hr => $.heat_resource_tree.parameters) -> $.role_name_list.where(int($hr.get(concat($, 'Count'), {}).get('default', 0)) > 0) %> > on-success: > - for_each_role: <% $.valid_role_name_list %> > - set_status_failed_get_valid_roles: <% not $.valid_role_name_list %> > > # Execute the basic preparation workflow for each role to get introspection data > for_each_role: > with-items: role_name in <% $.valid_role_name_list %> > concurrency: 1 > workflow: _derive_parameters_per_role > input: > plan: <% $.plan %> > role_name: <% $.role_name %> > environment_parameters: <% $.environment_parameters %> > heat_resource_tree: <% $.heat_resource_tree %> > user_inputs: <% $.user_inputs %> > publish: > # Gets all the roles derived parameters as dictionary > result: <% task().result.select($.get('derived_parameters', {})).sum() %> > on-success: reset_derive_parameters_in_plan > on-error: set_status_failed_for_each_role > > reset_derive_parameters_in_plan: > action: tripleo.parameters.reset > input: > container: <% $.plan %> > key: 'derived_parameters' > on-success: > # Add the derived parameters to the deployment plan only when $.result > # (the derived parameters) is non-empty. Otherwise, we're done. 
> - update_derive_parameters_in_plan: <% $.result %> > - send_message: <% not $.result %> > on-error: set_status_failed_reset_derive_parameters_in_plan > > update_derive_parameters_in_plan: > action: tripleo.parameters.update > input: > container: <% $.plan %> > key: 'derived_parameters' > parameters: <% $.get('result', {}) %> > on-success: send_message > on-error: set_status_failed_update_derive_parameters_in_plan > > set_status_failed_get_flattened_parameters: > on-success: send_message > publish: > status: FAILED > message: <% task(get_flattened_parameters).result %> > > set_status_failed_get_roles: > on-success: send_message > publish: > status: FAILED > message: "Unable to determine the list of roles in the deployment plan" > > set_status_failed_on_error_get_roles: > on-success: send_message > publish: > status: FAILED > message: <% task(get_roles).result %> > > set_status_failed_get_valid_roles: > on-success: send_message > publish: > status: FAILED > message: 'Unable to determine the list of valid roles in the deployment plan.' > > set_status_failed_for_each_role: > on-success: update_message_format > publish: > status: FAILED > # gets the status and message for all roles from task result. > message: <% task(for_each_role).result.select(dict('role_name' => $.role_name, 'status' => $.get('status', 'SUCCESS'), 'message' => $.get('message', ''))) %> > > update_message_format: > on-success: send_message > publish: > # updates the message format(Role 'role name': message) for each roles which are failed and joins the message list as string with ', ' separator. 
> message: <% $.message.where($.get('status', 'SUCCESS') != 'SUCCESS').select(concat("Role '{}':".format($.role_name), " ", $.get('message', '(error unknown)'))).join(', ') %> > > set_status_failed_reset_derive_parameters_in_plan: > on-success: send_message > publish: > status: FAILED > message: <% task(reset_derive_parameters_in_plan).result %> > > set_status_failed_update_derive_parameters_in_plan: > on-success: send_message > publish: > status: FAILED > message: <% task(update_derive_parameters_in_plan).result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.derive_params.v1.derive_parameters > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > result: <% $.get('result', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = 'FAILED' %> > > > _derive_parameters_per_role: > description: > > Workflow which runs per role to get the introspection data on the first matching node assigned to role. > Once introspection data is fetched, this worklow will trigger the actual derive parameters workflow > input: > - plan > - role_name > - environment_parameters > - heat_resource_tree > - user_inputs > > output: > derived_parameters: <% $.get('derived_parameters', {}) %> > # Need role_name in output parameter to display the status for all roles in main workflow when any role fails here. > role_name: <% $.role_name %> > > tags: > - tripleo-common-managed > > tasks: > get_role_info: > workflow: _get_role_info > input: > role_name: <% $.role_name %> > heat_resource_tree: <% $.heat_resource_tree %> > publish: > role_features: <% task().result.get('role_features', []) %> > role_services: <% task().result.get('role_services', []) %> > on-success: > # Continue only if there are features associated with this role. Otherwise, we're done. 
> - get_flavor_name: <% $.role_features %> > on-error: set_status_failed_get_role_info > > # Getting introspection data workflow, which will take care of > # 1) profile and flavor based mapping > # 2) Nova placement api based mapping > # Currently we have implemented profile and flavor based mapping > # TODO-Nova placement api based mapping is pending, we will enchance it later. > get_flavor_name: > publish: > flavor_name: <% let(param_name => concat('Overcloud', $.role_name, 'Flavor').replace('OvercloudControllerFlavor', 'OvercloudControlFlavor')) -> $.heat_resource_tree.parameters.get($param_name, {}).get('default', '') %> > on-success: > - get_profile_name: <% $.flavor_name %> > - set_status_failed_get_flavor_name: <% not $.flavor_name %> > > get_profile_name: > action: tripleo.parameters.get_profile_of_flavor flavor_name=<% $.flavor_name %> > publish: > profile_name: <% task().result %> > on-success: get_profile_node > on-error: set_status_failed_get_profile_name > > get_profile_node: > workflow: tripleo.baremetal.v1.nodes_with_profile > input: > profile: <% $.profile_name %> > publish: > profile_node_uuid: <% task().result.matching_nodes.first('') %> > on-success: > - get_introspection_data: <% $.profile_node_uuid %> > - set_status_failed_no_matching_node_get_profile_node: <% not $.profile_node_uuid %> > on-error: set_status_failed_on_error_get_profile_node > > get_introspection_data: > action: baremetal_introspection.get_data uuid=<% $.profile_node_uuid %> > publish: > hw_data: <% task().result %> > # Establish an empty dictionary of derived_parameters prior to > # invoking the individual "feature" algorithms > derived_parameters: <% dict() %> > on-success: handle_dpdk_feature > on-error: set_status_failed_get_introspection_data > > handle_dpdk_feature: > on-success: > - get_dpdk_derive_params: <% $.role_features.contains('DPDK') %> > - handle_sriov_feature: <% not $.role_features.contains('DPDK') %> > > get_dpdk_derive_params: > workflow: 
tripleo.derive_params_formulas.v1.dpdk_derive_params > input: > plan: <% $.plan %> > role_name: <% $.role_name %> > hw_data: <% $.hw_data %> > user_inputs: <% $.user_inputs %> > publish: > derived_parameters: <% task().result.get('derived_parameters', {}) %> > on-success: handle_sriov_feature > on-error: set_status_failed_get_dpdk_derive_params > > handle_sriov_feature: > on-success: > - get_sriov_derive_params: <% $.role_features.contains('SRIOV') %> > - handle_host_feature: <% not $.role_features.contains('SRIOV') %> > > get_sriov_derive_params: > workflow: tripleo.derive_params_formulas.v1.sriov_derive_params > input: > role_name: <% $.role_name %> > hw_data: <% $.hw_data %> > derived_parameters: <% $.derived_parameters %> > publish: > derived_parameters: <% task().result.get('derived_parameters', {}) %> > on-success: handle_host_feature > on-error: set_status_failed_get_sriov_derive_params > > handle_host_feature: > on-success: > - get_host_derive_params: <% $.role_features.contains('HOST') %> > - handle_hci_feature: <% not $.role_features.contains('HOST') %> > > get_host_derive_params: > workflow: tripleo.derive_params_formulas.v1.host_derive_params > input: > role_name: <% $.role_name %> > hw_data: <% $.hw_data %> > user_inputs: <% $.user_inputs %> > derived_parameters: <% $.derived_parameters %> > publish: > derived_parameters: <% task().result.get('derived_parameters', {}) %> > on-success: handle_hci_feature > on-error: set_status_failed_get_host_derive_params > > handle_hci_feature: > on-success: > - get_hci_derive_params: <% $.role_features.contains('HCI') %> > > get_hci_derive_params: > workflow: tripleo.derive_params_formulas.v1.hci_derive_params > input: > role_name: <% $.role_name %> > environment_parameters: <% $.environment_parameters %> > heat_resource_tree: <% $.heat_resource_tree %> > introspection_data: <% $.hw_data %> > user_inputs: <% $.user_inputs %> > derived_parameters: <% $.derived_parameters %> > publish: > derived_parameters: <% 
task().result.get('derived_parameters', {}) %> > on-error: set_status_failed_get_hci_derive_params > # Done (no more derived parameter features) > > set_status_failed_get_role_info: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% task(get_role_info).result.get('message', '') %> > on-success: fail > > set_status_failed_get_flavor_name: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% "Unable to determine flavor for role '{0}'".format($.role_name) %> > on-success: fail > > set_status_failed_get_profile_name: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% task(get_profile_name).result %> > on-success: fail > > set_status_failed_no_matching_node_get_profile_node: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% "Unable to determine matching node for profile '{0}'".format($.profile_name) %> > on-success: fail > > set_status_failed_on_error_get_profile_node: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% task(get_profile_node).result %> > on-success: fail > > set_status_failed_get_introspection_data: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% task(get_introspection_data).result %> > on-success: fail > > set_status_failed_get_dpdk_derive_params: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% task(get_dpdk_derive_params).result %> > on-success: fail > > set_status_failed_get_sriov_derive_params: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% task(get_sriov_derive_params).result %> > on-success: fail > > set_status_failed_get_host_derive_params: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% task(get_host_derive_params).result %> > on-success: fail > > set_status_failed_get_hci_derive_params: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% task(get_hci_derive_params).result %> > on-success: fail > > > _get_role_info: > 
description: > > Workflow that determines the list of derived parameter features (DPDK, > HCI, etc.) for a role based on the services assigned to the role. > > input: > - role_name > - heat_resource_tree > > tags: > - tripleo-common-managed > > tasks: > get_resource_chains: > publish: > resource_chains: <% $.heat_resource_tree.resources.values().where($.get('type', '') = 'OS::Heat::ResourceChain') %> > on-success: > - get_role_chain: <% $.resource_chains %> > - set_status_failed_get_resource_chains: <% not $.resource_chains %> > > get_role_chain: > publish: > role_chain: <% let(chain_name => concat($.role_name, 'ServiceChain'))-> $.heat_resource_tree.resources.values().where($.name = $chain_name).first({}) %> > on-success: > - get_service_chain: <% $.role_chain %> > - set_status_failed_get_role_chain: <% not $.role_chain %> > > get_service_chain: > publish: > service_chain: <% let(resources => $.role_chain.resources)-> $.resource_chains.where($resources.contains($.id)).first('') %> > on-success: > - get_role_services: <% $.service_chain %> > - set_status_failed_get_service_chain: <% not $.service_chain %> > > get_role_services: > publish: > role_services: <% let(resources => $.heat_resource_tree.resources)-> $.service_chain.resources.select($resources.get($)) %> > on-success: > - check_features: <% $.role_services %> > - set_status_failed_get_role_services: <% not $.role_services %> > > check_features: > on-success: build_feature_dict > publish: > # The role supports the DPDK feature if the NeutronDatapathType parameter is present > dpdk: <% let(resources => $.heat_resource_tree.resources) -> $.role_services.any($.get('parameters', []).contains('NeutronDatapathType') or $.get('resources', []).select($resources.get($)).any($.get('parameters', []).contains('NeutronDatapathType'))) %> > > # The role supports the DPDK feature in ODL if the OvsEnableDpdk parameter value is true in role parameters. 
> odl_dpdk: <% let(role => $.role_name) -> $.heat_resource_tree.parameters.get(concat($role, 'Parameters'), {}).get('default', {}).get('OvsEnableDpdk', false) %> > > # The role supports the SRIOV feature if it includes NeutronSriovAgent services. > sriov: <% $.role_services.any($.get('type', '').endsWith('::NeutronSriovAgent')) %> > > # The role supports the HCI feature if it includes both NovaCompute and CephOSD services. > hci: <% $.role_services.any($.get('type', '').endsWith('::NovaCompute')) and $.role_services.any($.get('type', '').endsWith('::CephOSD')) %> > > build_feature_dict: > on-success: filter_features > publish: > feature_dict: <% dict(DPDK => ($.dpdk or $.odl_dpdk), SRIOV => $.sriov, HOST => ($.dpdk or $.odl_dpdk or $.sriov), HCI => $.hci) %> > > filter_features: > publish: > # The list of features that are enabled (i.e. are true in the feature_dict). > role_features: <% let(feature_dict => $.feature_dict)-> $feature_dict.keys().where($feature_dict[$]) %> > > set_status_failed_get_resource_chains: > publish: > message: <% 'Unable to locate any resource chains in the heat resource tree' %> > on-success: fail > > set_status_failed_get_role_chain: > publish: > message: <% "Unable to determine the service chain resource for role '{0}'".format($.role_name) %> > on-success: fail > > set_status_failed_get_service_chain: > publish: > message: <% "Unable to determine the service chain for role '{0}'".format($.role_name) %> > on-success: fail > > set_status_failed_get_role_services: > publish: > message: <% "Unable to determine list of services for role '{0}'".format($.role_name) %> > on-success: fail >' >2018-06-26 09:56:47,647 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 18571 >2018-06-26 09:56:47,687 DEBUG: RESP: [201] Content-Length: 18571 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:47 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.derive_params.v1\ndescription: TripleO Workflows 
to derive deployment parameters from the introspected data\n\nworkflows:\n\n derive_parameters:\n description: The main workflow for deriving parameters from the introspected data\n\n input:\n - plan: overcloud\n - queue_name: tripleo\n - user_inputs: {}\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_flattened_parameters:\n action: tripleo.parameters.get_flatten container=<% $.plan %>\n publish:\n environment_parameters: <% task().result.environment_parameters %>\n heat_resource_tree: <% task().result.heat_resource_tree %>\n on-success:\n - get_roles: <% $.environment_parameters and $.heat_resource_tree %>\n - set_status_failed_get_flattened_parameters: <% (not $.environment_parameters) or (not $.heat_resource_tree) %>\n on-error: set_status_failed_get_flattened_parameters\n\n get_roles:\n action: tripleo.role.list container=<% $.plan %>\n publish:\n role_name_list: <% task().result %>\n on-success:\n - get_valid_roles: <% $.role_name_list %>\n - set_status_failed_get_roles: <% not $.role_name_list %>\n on-error: set_status_failed_on_error_get_roles\n\n # Obtain only the roles which has count > 0, by checking <RoleName>Count parameter, like ComputeCount\n get_valid_roles:\n publish:\n valid_role_name_list: <% let(hr => $.heat_resource_tree.parameters) -> $.role_name_list.where(int($hr.get(concat($, 'Count'), {}).get('default', 0)) > 0) %>\n on-success:\n - for_each_role: <% $.valid_role_name_list %>\n - set_status_failed_get_valid_roles: <% not $.valid_role_name_list %>\n\n # Execute the basic preparation workflow for each role to get introspection data\n for_each_role:\n with-items: role_name in <% $.valid_role_name_list %>\n concurrency: 1\n workflow: _derive_parameters_per_role\n input:\n plan: <% $.plan %>\n role_name: <% $.role_name %>\n environment_parameters: <% $.environment_parameters %>\n heat_resource_tree: <% $.heat_resource_tree %>\n user_inputs: <% $.user_inputs %>\n publish:\n # Gets all the roles derived parameters as dictionary\n result: <% 
task().result.select($.get('derived_parameters', {})).sum() %>\n on-success: reset_derive_parameters_in_plan\n on-error: set_status_failed_for_each_role\n\n reset_derive_parameters_in_plan:\n action: tripleo.parameters.reset\n input:\n container: <% $.plan %>\n key: 'derived_parameters'\n on-success:\n # Add the derived parameters to the deployment plan only when $.result\n # (the derived parameters) is non-empty. Otherwise, we're done.\n - update_derive_parameters_in_plan: <% $.result %>\n - send_message: <% not $.result %>\n on-error: set_status_failed_reset_derive_parameters_in_plan\n\n update_derive_parameters_in_plan:\n action: tripleo.parameters.update\n input:\n container: <% $.plan %>\n key: 'derived_parameters'\n parameters: <% $.get('result', {}) %>\n on-success: send_message\n on-error: set_status_failed_update_derive_parameters_in_plan\n\n set_status_failed_get_flattened_parameters:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_flattened_parameters).result %>\n\n set_status_failed_get_roles:\n on-success: send_message\n publish:\n status: FAILED\n message: \"Unable to determine the list of roles in the deployment plan\"\n\n set_status_failed_on_error_get_roles:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_roles).result %>\n\n set_status_failed_get_valid_roles:\n on-success: send_message\n publish:\n status: FAILED\n message: 'Unable to determine the list of valid roles in the deployment plan.'\n\n set_status_failed_for_each_role:\n on-success: update_message_format\n publish:\n status: FAILED\n # gets the status and message for all roles from task result.\n message: <% task(for_each_role).result.select(dict('role_name' => $.role_name, 'status' => $.get('status', 'SUCCESS'), 'message' => $.get('message', ''))) %>\n\n update_message_format:\n on-success: send_message\n publish:\n # updates the message format(Role 'role name': message) for each roles which are failed and joins the message 
list as string with ', ' separator.\n message: <% $.message.where($.get('status', 'SUCCESS') != 'SUCCESS').select(concat(\"Role '{}':\".format($.role_name), \" \", $.get('message', '(error unknown)'))).join(', ') %>\n\n set_status_failed_reset_derive_parameters_in_plan:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(reset_derive_parameters_in_plan).result %>\n\n set_status_failed_update_derive_parameters_in_plan:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(update_derive_parameters_in_plan).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.derive_params.v1.derive_parameters\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n result: <% $.get('result', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n\n\n _derive_parameters_per_role:\n description: >\n Workflow which runs per role to get the introspection data on the first matching node assigned to role.\n Once introspection data is fetched, this worklow will trigger the actual derive parameters workflow\n input:\n - plan\n - role_name\n - environment_parameters\n - heat_resource_tree\n - user_inputs\n\n output:\n derived_parameters: <% $.get('derived_parameters', {}) %>\n # Need role_name in output parameter to display the status for all roles in main workflow when any role fails here.\n role_name: <% $.role_name %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_role_info:\n workflow: _get_role_info\n input:\n role_name: <% $.role_name %>\n heat_resource_tree: <% $.heat_resource_tree %>\n publish:\n role_features: <% task().result.get('role_features', []) %>\n role_services: <% task().result.get('role_services', []) %>\n on-success:\n # Continue only if there are features associated with this role. 
Otherwise, we're done.\n - get_flavor_name: <% $.role_features %>\n on-error: set_status_failed_get_role_info\n\n # Getting introspection data workflow, which will take care of\n # 1) profile and flavor based mapping\n # 2) Nova placement api based mapping\n # Currently we have implemented profile and flavor based mapping\n # TODO-Nova placement api based mapping is pending, we will enchance it later.\n get_flavor_name:\n publish:\n flavor_name: <% let(param_name => concat('Overcloud', $.role_name, 'Flavor').replace('OvercloudControllerFlavor', 'OvercloudControlFlavor')) -> $.heat_resource_tree.parameters.get($param_name, {}).get('default', '') %>\n on-success:\n - get_profile_name: <% $.flavor_name %>\n - set_status_failed_get_flavor_name: <% not $.flavor_name %>\n\n get_profile_name:\n action: tripleo.parameters.get_profile_of_flavor flavor_name=<% $.flavor_name %>\n publish:\n profile_name: <% task().result %>\n on-success: get_profile_node\n on-error: set_status_failed_get_profile_name\n\n get_profile_node:\n workflow: tripleo.baremetal.v1.nodes_with_profile\n input:\n profile: <% $.profile_name %>\n publish:\n profile_node_uuid: <% task().result.matching_nodes.first('') %>\n on-success:\n - get_introspection_data: <% $.profile_node_uuid %>\n - set_status_failed_no_matching_node_get_profile_node: <% not $.profile_node_uuid %>\n on-error: set_status_failed_on_error_get_profile_node\n\n get_introspection_data:\n action: baremetal_introspection.get_data uuid=<% $.profile_node_uuid %>\n publish:\n hw_data: <% task().result %>\n # Establish an empty dictionary of derived_parameters prior to\n # invoking the individual \"feature\" algorithms\n derived_parameters: <% dict() %>\n on-success: handle_dpdk_feature\n on-error: set_status_failed_get_introspection_data\n\n handle_dpdk_feature:\n on-success:\n - get_dpdk_derive_params: <% $.role_features.contains('DPDK') %>\n - handle_sriov_feature: <% not $.role_features.contains('DPDK') %>\n\n get_dpdk_derive_params:\n 
workflow: tripleo.derive_params_formulas.v1.dpdk_derive_params\n input:\n plan: <% $.plan %>\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n user_inputs: <% $.user_inputs %>\n publish:\n derived_parameters: <% task().result.get('derived_parameters', {}) %>\n on-success: handle_sriov_feature\n on-error: set_status_failed_get_dpdk_derive_params\n\n handle_sriov_feature:\n on-success:\n - get_sriov_derive_params: <% $.role_features.contains('SRIOV') %>\n - handle_host_feature: <% not $.role_features.contains('SRIOV') %>\n\n get_sriov_derive_params:\n workflow: tripleo.derive_params_formulas.v1.sriov_derive_params\n input:\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n derived_parameters: <% $.derived_parameters %>\n publish:\n derived_parameters: <% task().result.get('derived_parameters', {}) %>\n on-success: handle_host_feature\n on-error: set_status_failed_get_sriov_derive_params\n\n handle_host_feature:\n on-success:\n - get_host_derive_params: <% $.role_features.contains('HOST') %>\n - handle_hci_feature: <% not $.role_features.contains('HOST') %>\n\n get_host_derive_params:\n workflow: tripleo.derive_params_formulas.v1.host_derive_params\n input:\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n user_inputs: <% $.user_inputs %>\n derived_parameters: <% $.derived_parameters %>\n publish:\n derived_parameters: <% task().result.get('derived_parameters', {}) %>\n on-success: handle_hci_feature\n on-error: set_status_failed_get_host_derive_params\n\n handle_hci_feature:\n on-success:\n - get_hci_derive_params: <% $.role_features.contains('HCI') %>\n\n get_hci_derive_params:\n workflow: tripleo.derive_params_formulas.v1.hci_derive_params\n input:\n role_name: <% $.role_name %>\n environment_parameters: <% $.environment_parameters %>\n heat_resource_tree: <% $.heat_resource_tree %>\n introspection_data: <% $.hw_data %>\n user_inputs: <% $.user_inputs %>\n derived_parameters: <% $.derived_parameters %>\n publish:\n derived_parameters: 
<% task().result.get('derived_parameters', {}) %>\n on-error: set_status_failed_get_hci_derive_params\n # Done (no more derived parameter features)\n\n set_status_failed_get_role_info:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_role_info).result.get('message', '') %>\n on-success: fail\n\n set_status_failed_get_flavor_name:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% \"Unable to determine flavor for role '{0}'\".format($.role_name) %>\n on-success: fail\n\n set_status_failed_get_profile_name:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_profile_name).result %>\n on-success: fail\n\n set_status_failed_no_matching_node_get_profile_node:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% \"Unable to determine matching node for profile '{0}'\".format($.profile_name) %>\n on-success: fail\n\n set_status_failed_on_error_get_profile_node:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_profile_node).result %>\n on-success: fail\n\n set_status_failed_get_introspection_data:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_introspection_data).result %>\n on-success: fail\n\n set_status_failed_get_dpdk_derive_params:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_dpdk_derive_params).result %>\n on-success: fail\n\n set_status_failed_get_sriov_derive_params:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_sriov_derive_params).result %>\n on-success: fail\n\n set_status_failed_get_host_derive_params:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_host_derive_params).result %>\n on-success: fail\n\n set_status_failed_get_hci_derive_params:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_hci_derive_params).result %>\n on-success: fail\n\n\n 
_get_role_info:\n description: >\n Workflow that determines the list of derived parameter features (DPDK,\n HCI, etc.) for a role based on the services assigned to the role.\n\n input:\n - role_name\n - heat_resource_tree\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_resource_chains:\n publish:\n resource_chains: <% $.heat_resource_tree.resources.values().where($.get('type', '') = 'OS::Heat::ResourceChain') %>\n on-success:\n - get_role_chain: <% $.resource_chains %>\n - set_status_failed_get_resource_chains: <% not $.resource_chains %>\n\n get_role_chain:\n publish:\n role_chain: <% let(chain_name => concat($.role_name, 'ServiceChain'))-> $.heat_resource_tree.resources.values().where($.name = $chain_name).first({}) %>\n on-success:\n - get_service_chain: <% $.role_chain %>\n - set_status_failed_get_role_chain: <% not $.role_chain %>\n\n get_service_chain:\n publish:\n service_chain: <% let(resources => $.role_chain.resources)-> $.resource_chains.where($resources.contains($.id)).first('') %>\n on-success:\n - get_role_services: <% $.service_chain %>\n - set_status_failed_get_service_chain: <% not $.service_chain %>\n\n get_role_services:\n publish:\n role_services: <% let(resources => $.heat_resource_tree.resources)-> $.service_chain.resources.select($resources.get($)) %>\n on-success:\n - check_features: <% $.role_services %>\n - set_status_failed_get_role_services: <% not $.role_services %>\n\n check_features:\n on-success: build_feature_dict\n publish:\n # The role supports the DPDK feature if the NeutronDatapathType parameter is present\n dpdk: <% let(resources => $.heat_resource_tree.resources) -> $.role_services.any($.get('parameters', []).contains('NeutronDatapathType') or $.get('resources', []).select($resources.get($)).any($.get('parameters', []).contains('NeutronDatapathType'))) %>\n\n # The role supports the DPDK feature in ODL if the OvsEnableDpdk parameter value is true in role parameters.\n odl_dpdk: <% let(role => $.role_name) -> 
$.heat_resource_tree.parameters.get(concat($role, 'Parameters'), {}).get('default', {}).get('OvsEnableDpdk', false) %>\n\n # The role supports the SRIOV feature if it includes NeutronSriovAgent services.\n sriov: <% $.role_services.any($.get('type', '').endsWith('::NeutronSriovAgent')) %>\n\n # The role supports the HCI feature if it includes both NovaCompute and CephOSD services.\n hci: <% $.role_services.any($.get('type', '').endsWith('::NovaCompute')) and $.role_services.any($.get('type', '').endsWith('::CephOSD')) %>\n\n build_feature_dict:\n on-success: filter_features\n publish:\n feature_dict: <% dict(DPDK => ($.dpdk or $.odl_dpdk), SRIOV => $.sriov, HOST => ($.dpdk or $.odl_dpdk or $.sriov), HCI => $.hci) %>\n\n filter_features:\n publish:\n # The list of features that are enabled (i.e. are true in the feature_dict).\n role_features: <% let(feature_dict => $.feature_dict)-> $feature_dict.keys().where($feature_dict[$]) %>\n\n set_status_failed_get_resource_chains:\n publish:\n message: <% 'Unable to locate any resource chains in the heat resource tree' %>\n on-success: fail\n\n set_status_failed_get_role_chain:\n publish:\n message: <% \"Unable to determine the service chain resource for role '{0}'\".format($.role_name) %>\n on-success: fail\n\n set_status_failed_get_service_chain:\n publish:\n message: <% \"Unable to determine the service chain for role '{0}'\".format($.role_name) %>\n on-success: fail\n\n set_status_failed_get_role_services:\n publish:\n message: <% \"Unable to determine list of services for role '{0}'\".format($.role_name) %>\n on-success: fail\n", "name": "tripleo.derive_params.v1", "tags": [], "created_at": "2018-06-26 04:26:47", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "dac295e6-ddf0-41ad-b133-ba413692718b"} > >2018-06-26 09:56:47,687 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 09:56:47,688 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c 
keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '--- >version: '2.0' >name: tripleo.swift_rings_backup.v1 >description: TripleO Swift Rings backup container Deployment Workflow v1 > >workflows: > > create_swift_rings_backup_container_plan: > description: > > This plan ensures existence of container for Swift Rings backup. > input: > - container > - queue_name: tripleo > tags: > - tripleo-common-managed > tasks: > > swift_rings_container: > publish: > swift_rings_container: "<% $.container %>-swift-rings" > swift_rings_tar: "swift-rings.tar.gz" > on-complete: check_container > > check_container: > action: swift.head_container container=<% $.swift_rings_container %> > on-success: get_tempurl > on-error: create_container > > create_container: > action: swift.put_container container=<% $.swift_rings_container %> > on-error: set_create_container_failed > on-success: get_tempurl > > get_tempurl: > action: tripleo.swift.tempurl > on-success: set_get_tempurl > input: > container: <% $.swift_rings_container %> > obj: <% $.swift_rings_tar %> > > set_get_tempurl: > action: tripleo.parameters.update > input: > parameters: > SwiftRingGetTempurl: <% task(get_tempurl).result %> > container: <% $.container %> > on-success: put_tempurl > > put_tempurl: > action: tripleo.swift.tempurl > on-success: set_put_tempurl > input: > container: <% $.swift_rings_container %> > obj: <% $.swift_rings_tar %> > method: "PUT" > > set_put_tempurl: > action: tripleo.parameters.update > input: > parameters: > SwiftRingPutTempurl: <% task(put_tempurl).result %> > container: <% $.container %> > on-success: set_status_success > on-error: set_put_tempurl_failed > > set_status_success: > on-success: notify_zaqar > publish: > status: SUCCESS > message: <% task(set_put_tempurl).result %> > > set_put_tempurl_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% 
task(set_put_tempurl).result %> > > set_create_container_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(create_container).result %> > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> >' >2018-06-26 09:56:47,910 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 3154 >2018-06-26 09:56:47,911 DEBUG: RESP: [201] Content-Length: 3154 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:47 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.swift_rings_backup.v1\ndescription: TripleO Swift Rings backup container Deployment Workflow v1\n\nworkflows:\n\n create_swift_rings_backup_container_plan:\n description: >\n This plan ensures existence of container for Swift Rings backup.\n input:\n - container\n - queue_name: tripleo\n tags:\n - tripleo-common-managed\n tasks:\n\n swift_rings_container:\n publish:\n swift_rings_container: \"<% $.container %>-swift-rings\"\n swift_rings_tar: \"swift-rings.tar.gz\"\n on-complete: check_container\n\n check_container:\n action: swift.head_container container=<% $.swift_rings_container %>\n on-success: get_tempurl\n on-error: create_container\n\n create_container:\n action: swift.put_container container=<% $.swift_rings_container %>\n on-error: set_create_container_failed\n on-success: get_tempurl\n\n get_tempurl:\n action: tripleo.swift.tempurl\n on-success: set_get_tempurl\n input:\n container: <% $.swift_rings_container %>\n obj: <% $.swift_rings_tar %>\n\n set_get_tempurl:\n action: tripleo.parameters.update\n input:\n parameters:\n SwiftRingGetTempurl: <% task(get_tempurl).result %>\n container: <% $.container %>\n on-success: put_tempurl\n\n 
put_tempurl:\n action: tripleo.swift.tempurl\n on-success: set_put_tempurl\n input:\n container: <% $.swift_rings_container %>\n obj: <% $.swift_rings_tar %>\n method: \"PUT\"\n\n set_put_tempurl:\n action: tripleo.parameters.update\n input:\n parameters:\n SwiftRingPutTempurl: <% task(put_tempurl).result %>\n container: <% $.container %>\n on-success: set_status_success\n on-error: set_put_tempurl_failed\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(set_put_tempurl).result %>\n\n set_put_tempurl_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(set_put_tempurl).result %>\n\n set_create_container_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_container).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.swift_rings_backup.v1", "tags": [], "created_at": "2018-06-26 04:26:47", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "f74f74ae-84a0-46fa-b70b-2c871323eb99"} > >2018-06-26 09:56:47,911 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 09:56:47,911 INFO: Mistral workbooks configured successfully >2018-06-26 09:56:47,914 DEBUG: Starting new HTTP connection (1): 192.0.3.1 >2018-06-26 09:56:48,560 DEBUG: http://192.0.3.1:8080 "GET /v1/AUTH_13835fbb8e0947a9b3fa174b9a22cdb9?format=json HTTP/1.1" 200 2 >2018-06-26 09:56:48,561 DEBUG: REQ: curl -i http://192.0.3.1:8080/v1/AUTH_13835fbb8e0947a9b3fa174b9a22cdb9?format=json -X GET -H "Accept-Encoding: gzip" -H "X-Auth-Token: gAAAAABbMcBQlzff..." 
>2018-06-26 09:56:48,561 DEBUG: RESP STATUS: 200 OK >2018-06-26 09:56:48,561 DEBUG: RESP HEADERS: {u'Content-Length': u'2', u'X-Put-Timestamp': u'1529987208.55888', u'X-Account-Object-Count': u'0', u'X-Timestamp': u'1529987208.55888', u'X-Trans-Id': u'tx9d63376d08024dccb38ca-005b31c087', u'Date': u'Tue, 26 Jun 2018 04:26:48 GMT', u'X-Account-Bytes-Used': u'0', u'X-Account-Container-Count': u'0', u'Content-Type': u'application/json; charset=utf-8', u'X-Openstack-Request-Id': u'tx9d63376d08024dccb38ca-005b31c087'} >2018-06-26 09:56:48,561 DEBUG: RESP BODY: [] >2018-06-26 09:56:48,562 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8989/v2/environments/tripleo.undercloud-config -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" >2018-06-26 09:56:48,591 DEBUG: http://192.0.3.1:8989 "GET /v2/environments/tripleo.undercloud-config HTTP/1.1" 404 115 >2018-06-26 09:56:48,592 DEBUG: RESP: [404] Content-Length: 115 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:48 GMT Connection: keep-alive >RESP BODY: {"debuginfo": null, "faultcode": "Client", "faultstring": "Environment not found [name=tripleo.undercloud-config]"} > >2018-06-26 09:56:48,592 DEBUG: Request returned failure status: 404 >2018-06-26 09:56:48,592 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/environments -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: application/json" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '{"variables": "{\"undercloud_ceilometer_snmpd_password\": \"d8501c1a349fb1a4a0c122355ba3dacf0d9ad352\", \"undercloud_db_password\": \"password\"}", "name": "tripleo.undercloud-config", "description": "Undercloud configuration parameters"}' >2018-06-26 09:56:48,606 DEBUG: http://192.0.3.1:8989 "POST /v2/environments HTTP/1.1" 201 391 >2018-06-26 09:56:48,607 DEBUG: RESP: [201] Content-Length: 391 Content-Type: 
application/json Date: Tue, 26 Jun 2018 04:26:48 GMT Connection: keep-alive >RESP BODY: {"created_at": "2018-06-26 04:26:48", "description": "Undercloud configuration parameters", "variables": "{\"undercloud_ceilometer_snmpd_password\": \"d8501c1a349fb1a4a0c122355ba3dacf0d9ad352\", \"undercloud_db_password\": \"password\"}", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "732729ed-52f8-4511-8f0d-411c70f2df3f", "name": "tripleo.undercloud-config"} > >2018-06-26 09:56:48,607 DEBUG: HTTP POST http://192.0.3.1:8989/v2/environments 201 >2018-06-26 09:56:48,608 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/executions -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: application/json" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" -d '{"input": "{\"use_default_templates\": true, \"queue_name\": \"2ccee8b6-1fb0-4139-b49a-fcd230d8af95\", \"container\": \"overcloud\"}", "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "description": ""}' >2018-06-26 09:56:49,133 DEBUG: http://192.0.3.1:8989 "POST /v2/executions HTTP/1.1" 201 671 >2018-06-26 09:56:49,134 DEBUG: RESP: [201] Content-Length: 671 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:49 GMT Connection: keep-alive >RESP BODY: {"root_execution_id": null, "state_info": null, "description": "", "state": "RUNNING", "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": null, "updated_at": "2018-06-26 04:26:49", "workflow_id": "f4a9f0ce-b6ab-4ced-be9e-528e0c527e31", "params": "{\"namespace\": \"\"}", "workflow_namespace": "", "output": "{}", "input": "{\"generate_passwords\": true, \"use_default_templates\": true, \"queue_name\": \"2ccee8b6-1fb0-4139-b49a-fcd230d8af95\", \"container\": \"overcloud\", \"source_url\": null}", "created_at": "2018-06-26 04:26:49", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "3d2a008d-d243-470f-917b-9c07c49e7c3f"} > 
>2018-06-26 09:56:49,134 DEBUG: HTTP POST http://192.0.3.1:8989/v2/executions 201 >2018-06-26 09:56:49,134 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8989/v2/executions/3d2a008d-d243-470f-917b-9c07c49e7c3f -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" >2018-06-26 09:56:49,148 DEBUG: http://192.0.3.1:8989 "GET /v2/executions/3d2a008d-d243-470f-917b-9c07c49e7c3f HTTP/1.1" 200 671 >2018-06-26 09:56:49,149 DEBUG: RESP: [200] Content-Length: 671 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:49 GMT Connection: keep-alive >RESP BODY: {"root_execution_id": null, "state_info": null, "description": "", "state": "RUNNING", "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": null, "updated_at": "2018-06-26 04:26:49", "workflow_id": "f4a9f0ce-b6ab-4ced-be9e-528e0c527e31", "params": "{\"namespace\": \"\"}", "workflow_namespace": "", "output": "{}", "input": "{\"generate_passwords\": true, \"use_default_templates\": true, \"queue_name\": \"2ccee8b6-1fb0-4139-b49a-fcd230d8af95\", \"container\": \"overcloud\", \"source_url\": null}", "created_at": "2018-06-26 04:26:49", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "3d2a008d-d243-470f-917b-9c07c49e7c3f"} > >2018-06-26 09:56:49,149 DEBUG: HTTP GET http://192.0.3.1:8989/v2/executions/3d2a008d-d243-470f-917b-9c07c49e7c3f 200 >2018-06-26 09:56:54,154 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8989/v2/executions/3d2a008d-d243-470f-917b-9c07c49e7c3f -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" >2018-06-26 09:56:54,166 DEBUG: http://192.0.3.1:8989 "GET /v2/executions/3d2a008d-d243-470f-917b-9c07c49e7c3f HTTP/1.1" 200 671 >2018-06-26 09:56:54,167 DEBUG: RESP: [200] Content-Length: 671 Content-Type: application/json Date: Tue, 26 Jun 2018 04:26:54 GMT Connection: keep-alive >RESP 
BODY: {"root_execution_id": null, "state_info": null, "description": "", "state": "RUNNING", "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": null, "updated_at": "2018-06-26 04:26:49", "workflow_id": "f4a9f0ce-b6ab-4ced-be9e-528e0c527e31", "params": "{\"namespace\": \"\"}", "workflow_namespace": "", "output": "{}", "input": "{\"generate_passwords\": true, \"use_default_templates\": true, \"queue_name\": \"2ccee8b6-1fb0-4139-b49a-fcd230d8af95\", \"container\": \"overcloud\", \"source_url\": null}", "created_at": "2018-06-26 04:26:49", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "3d2a008d-d243-470f-917b-9c07c49e7c3f"} > >2018-06-26 09:56:54,167 DEBUG: HTTP GET http://192.0.3.1:8989/v2/executions/3d2a008d-d243-470f-917b-9c07c49e7c3f 200 >[... identical REQ/RESP polling cycles (state "RUNNING", 200, 671 bytes) repeated every 5 seconds from 09:56:59 through 09:57:49 elided ...] >2018-06-26 09:57:54,362 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8989/v2/executions/3d2a008d-d243-470f-917b-9c07c49e7c3f -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2
CPython/2.7.5" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" >2018-06-26 09:57:54,376 DEBUG: http://192.0.3.1:8989 "GET /v2/executions/3d2a008d-d243-470f-917b-9c07c49e7c3f HTTP/1.1" 200 118996 >2018-06-26 09:57:54,380 DEBUG: RESP: [200] Content-Length: 118996 Content-Type: application/json Date: Tue, 26 Jun 2018 04:27:54 GMT Connection: keep-alive >RESP BODY: {"root_execution_id": null, "state_info": "Failure caused by error in tasks: notify_zaqar\n\n notify_zaqar [task_ex_id=1a2b25c9-3deb-40dc-90de-d8d2dd80e1ba] -> Failed to run action [action_ex_id=cd52422f-5aec-4880-9286-a3885fbbab1b, action_cls='<class 'mistral.actions.action_factory.ZaqarAction'>', attributes='{u'client_method_name': u'queue_post'}', params='{u'queue_name': u'2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'messages': {u'body': {u'type': u'tripleo.plan_management.v1.create_deployment_plan', u'payload': {u'status': u'SUCCESS', u'message': u'Plan created.', u'execution': {u'name': u'tripleo.plan_management.v1.create_deployment_plan', u'created_at': u'2018-06-26 04:26:49', u'updated_at': u'2018-06-26 04:26:49', u'spec': {u'tasks': {u'templates_source_check': {u'version': u'2.0', u'type': u'direct', u'name': u'templates_source_check', u'on-success': [{u'upload_default_templates': u'<% $.use_default_templates = true %>'}, {u'clone_git_repo': u'<% $.source_url != null %>'}]}, u'create_plan': {u'version': u'2.0', u'type': u'direct', u'name': u'create_plan', u'on-success': [{u'ensure_passwords_exist': u'<% $.generate_passwords = true %>'}, {u'add_root_stack_name': u'<% $.generate_passwords != true %>'}]}, u'upload_templates_directory_set_status_failed': {u'version': u'2.0', u'type': u'direct', u'name': u'upload_templates_directory_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(upload_templates_directory).result %>'}, u'on-success': u'notify_zaqar'}, u'ensure_passwords_exist': {u'name': u'ensure_passwords_exist', u'on-error': 
u'ensure_passwords_exist_set_status_failed', u'on-success': u'add_root_stack_name', u'version': u'2.0', u'action': u'tripleo.parameters.generate_passwords container=<% $.container %>', u'type': u'direct'}, u'process_templates_set_status_failed': {u'version': u'2.0', u'type': u'direct', u'name': u'process_templates_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(process_templates).result %>'}, u'on-success': u'notify_zaqar'}, u'container_images_prepare': {u'name': u'container_images_prepare', u'on-error': u'container_images_prepare_set_status_failed', u'on-success': u'process_templates', u'version': u'2.0', u'action': u'tripleo.container_images.prepare container=<% $.container %>', u'type': u'direct', u'description': u'Populate all container image parameters with default values.\\n'}, u'process_templates': {u'name': u'process_templates', u'on-error': u'process_templates_set_status_failed', u'on-success': u'set_status_success', u'version': u'2.0', u'action': u'tripleo.templates.process container=<% $.container %>', u'type': u'direct'}, u'container_required_check': {u'on-success': [{u'verify_container_doesnt_exist': u'<% $.use_default_templates or $.source_url %>'}, {u'create_plan': u'<% $.use_default_templates = false and $.source_url = null %>'}], u'version': u'2.0', u'type': u'direct', u'description': u'If using the default templates or importing templates from a git repository, a new container needs to be created. 
If using an existing container containing templates, skip straight to create_plan.\\n', u'name': u'container_required_check'}, u'clone_git_repo': {u'name': u'clone_git_repo', u'on-error': u'clone_git_repo_set_status_failed', u'on-success': u'upload_templates_directory', u'version': u'2.0', u'action': u'tripleo.git.clone container=<% $.container %> url=<% $.source_url %>', u'type': u'direct'}, u'add_root_stack_name': {u'name': u'add_root_stack_name', u'on-error': u'notify_zaqar', u'on-success': u'container_images_prepare', u'publish-on-error': {u'status': u'FAILED', u'message': u'<% task().result %>'}, u'version': u'2.0', u'action': u'tripleo.parameters.update', u'input': {u'container': u'<% $.container %>', u'parameters': {u'RootStackName': u'<% $.container %>'}}, u'type': u'direct'}, u'upload_default_templates': {u'name': u'upload_default_templates', u'on-error': u'upload_to_container_set_status_failed', u'on-success': u'create_plan', u'version': u'2.0', u'action': u'tripleo.templates.upload container=<% $.container %>', u'type': u'direct'}, u'clone_git_repo_set_status_failed': {u'version': u'2.0', u'type': u'direct', u'name': u'clone_git_repo_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(clone_git_repo).result %>'}, u'on-success': u'notify_zaqar'}, u'upload_to_container_set_status_failed': {u'version': u'2.0', u'type': u'direct', u'name': u'upload_to_container_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(upload_default_templates).result %>'}, u'on-success': u'notify_zaqar'}, u'upload_templates_directory': {u'name': u'upload_templates_directory', u'on-error': u'upload_templates_directory_set_status_failed', u'on-success': u'create_plan', u'version': u'2.0', u'action': u'tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %>', u'on-complete': u'cleanup_temporary_files', u'type': u'direct'}, u'container_images_prepare_set_status_failed': {u'version': u'2.0', 
u'type': u'direct', u'name': u'container_images_prepare_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(container_images_prepare).result %>'}, u'on-success': u'notify_zaqar'}, u'create_container_set_status_failed': {u'version': u'2.0', u'type': u'direct', u'name': u'create_container_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(create_container).result %>'}, u'on-success': u'notify_zaqar'}, u'set_status_success': {u'version': u'2.0', u'type': u'direct', u'name': u'set_status_success', u'publish': {u'status': u'SUCCESS', u'message': u'Plan created.'}, u'on-success': u'notify_zaqar'}, u'verify_container_doesnt_exist': {u'name': u'verify_container_doesnt_exist', u'on-error': u'create_container', u'on-success': u'notify_zaqar', u'publish': {u'status': u'FAILED', u'message': u'Unable to create plan. The Swift container already exists'}, u'version': u'2.0', u'action': u'swift.head_container container=<% $.container %>', u'type': u'direct'}, u'notify_zaqar': {u'retry': u'count=5 delay=1', u'name': u'notify_zaqar', u'on-success': [{u'fail': u'<% $.get(\\'status\\') = \"FAILED\" %>'}], u'version': u'2.0', u'action': u'zaqar.queue_post', u'input': {u'queue_name': u'<% $.queue_name %>', u'messages': {u'body': {u'type': u'tripleo.plan_management.v1.create_deployment_plan', u'payload': {u'status': u'<% $.status %>', u'message': u\"<% $.get('message', '') %>\", u'execution': u'<% execution() %>'}}}}, u'type': u'direct'}, u'cleanup_temporary_files': {u'action': u'tripleo.git.clean container=<% $.container %>', u'version': u'2.0', u'type': u'direct', u'name': u'cleanup_temporary_files'}, u'create_container': {u'name': u'create_container', u'on-error': u'create_container_set_status_failed', u'on-success': u'templates_source_check', u'version': u'2.0', u'action': u'tripleo.plan.create_container container=<% $.container %>', u'type': u'direct'}, u'ensure_passwords_exist_set_status_failed': {u'version': u'2.0', u'type': 
u'direct', u'name': u'ensure_passwords_exist_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(ensure_passwords_exist).result %>'}, u'on-success': u'notify_zaqar'}}, u'name': u'create_deployment_plan', u'tags': [u'tripleo-common-managed'], u'version': u'2.0', u'input': [u'container', {u'source_url': None}, {u'queue_name': u'tripleo'}, {u'generate_passwords': True}, {u'use_default_templates': False}], u'description': u'This workflow provides the capability to create a deployment plan using the default heat templates provided in a standard TripleO undercloud deployment, heat templates contained in an external git repository, or a swift container that already contains templates.\\n'}, u'params': {u'namespace': u''}, u'input': {u'generate_passwords': True, u'use_default_templates': True, u'queue_name': u'2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'container': u'overcloud', u'source_url': None}, u'id': u'3d2a008d-d243-470f-917b-9c07c49e7c3f'}}}}}']\n ZaqarAction.queue_post failed: Error response from Zaqar. Code: 500. Title: Internal server error. 
Description: (pymysql.err.ProgrammingError) (1146, u\"Table 'zaqar.Queues' doesn't exist\") [SQL: u'SELECT `Queues`.metadata \\nFROM `Queues` \\nWHERE `Queues`.project = %(project_1)s AND `Queues`.name = %(name_1)s'] [parameters: {u'name_1': '2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'project_1': u'13835fbb8e0947a9b3fa174b9a22cdb9'}] (Background on this error at: http://sqlalche.me/e/f405).\n [action_ex_id=44598c23-bf24-41d7-aa2a-a4e8dcc4026e, idx=0]: Failed to run action [action_ex_id=44598c23-bf24-41d7-aa2a-a4e8dcc4026e, action_cls='<class 'mistral.actions.action_factory.ZaqarAction'>', attributes='{u'client_method_name': u'queue_post'}', params='{u'queue_name': u'2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'messages': {u'body': {u'type': u'tripleo.plan_management.v1.create_deployment_plan', u'payload': {u'status': u'SUCCESS', u'message': u'Plan created.', u'execution': {u'name': u'tripleo.plan_management.v1.create_deployment_plan', u'created_at': u'2018-06-26 04:26:49', u'updated_at': u'2018-06-26 04:26:49', u'spec': {u'tasks': {u'templates_source_check': {u'version': u'2.0', u'type': u'direct', u'name': u'templates_source_check', u'on-success': [{u'upload_default_templates': u'<% $.use_default_templates = true %>'}, {u'clone_git_repo': u'<% $.source_url != null %>'}]}, u'create_plan': {u'version': u'2.0', u'type': u'direct', u'name': u'create_plan', u'on-success': [{u'ensure_passwords_exist': u'<% $.generate_passwords = true %>'}, {u'add_root_stack_name': u'<% $.generate_passwords != true %>'}]}, u'upload_templates_directory_set_status_failed': {u'version': u'2.0', u'type': u'direct', u'name': u'upload_templates_directory_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(upload_templates_directory).result %>'}, u'on-success': u'notify_zaqar'}, u'ensure_passwords_exist': {u'name': u'ensure_passwords_exist', u'on-error': u'ensure_passwords_exist_set_status_failed', u'on-success': u'add_root_stack_name', u'version': u'2.0', u'action': 
u'tripleo.parameters.generate_passwords container=<% $.container %>', u'type': u'direct'}, u'process_templates_set_status_failed': {u'version': u'2.0', u'type': u'direct', u'name': u'process_templates_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(process_templates).result %>'}, u'on-success': u'notify_zaqar'}, u'container_images_prepare': {u'name': u'container_images_prepare', u'on-error': u'container_images_prepare_set_status_failed', u'on-success': u'process_templates', u'version': u'2.0', u'action': u'tripleo.container_images.prepare container=<% $.container %>', u'type': u'direct', u'description': u'Populate all container image parameters with default values.\\n'}, u'process_templates': {u'name': u'process_templates', u'on-error': u'process_templates_set_status_failed', u'on-success': u'set_status_success', u'version': u'2.0', u'action': u'tripleo.templates.process container=<% $.container %>', u'type': u'direct'}, u'container_required_check': {u'on-success': [{u'verify_container_doesnt_exist': u'<% $.use_default_templates or $.source_url %>'}, {u'create_plan': u'<% $.use_default_templates = false and $.source_url = null %>'}], u'version': u'2.0', u'type': u'direct', u'description': u'If using the default templates or importing templates from a git repository, a new container needs to be created. 
If using an existing container containing templates, skip straight to create_plan.\\n', u'name': u'container_required_check'}, u'clone_git_repo': {u'name': u'clone_git_repo', u'on-error': u'clone_git_repo_set_status_failed', u'on-success': u'upload_templates_directory', u'version': u'2.0', u'action': u'tripleo.git.clone container=<% $.container %> url=<% $.source_url %>', u'type': u'direct'}, u'add_root_stack_name': {u'name': u'add_root_stack_name', u'on-error': u'notify_zaqar', u'on-success': u'container_images_prepare', u'publish-on-error': {u'status': u'FAILED', u'message': u'<% task().result %>'}, u'version': u'2.0', u'action': u'tripleo.parameters.update', u'input': {u'container': u'<% $.container %>', u'parameters': {u'RootStackName': u'<% $.container %>'}}, u'type': u'direct'}, u'upload_default_templates': {u'name': u'upload_default_templates', u'on-error': u'upload_to_container_set_status_failed', u'on-success': u'create_plan', u'version': u'2.0', u'action': u'tripleo.templates.upload container=<% $.container %>', u'type': u'direct'}, u'clone_git_repo_set_status_failed': {u'version': u'2.0', u'type': u'direct', u'name': u'clone_git_repo_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(clone_git_repo).result %>'}, u'on-success': u'notify_zaqar'}, u'upload_to_container_set_status_failed': {u'version': u'2.0', u'type': u'direct', u'name': u'upload_to_container_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(upload_default_templates).result %>'}, u'on-success': u'notify_zaqar'}, u'upload_templates_directory': {u'name': u'upload_templates_directory', u'on-error': u'upload_templates_directory_set_status_failed', u'on-success': u'create_plan', u'version': u'2.0', u'action': u'tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %>', u'on-complete': u'cleanup_temporary_files', u'type': u'direct'}, u'container_images_prepare_set_status_failed': {u'version': u'2.0', 
u'type': u'direct', u'name': u'container_images_prepare_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(container_images_prepare).result %>'}, u'on-success': u'notify_zaqar'}, u'create_container_set_status_failed': {u'version': u'2.0', u'type': u'direct', u'name': u'create_container_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(create_container).result %>'}, u'on-success': u'notify_zaqar'}, u'set_status_success': {u'version': u'2.0', u'type': u'direct', u'name': u'set_status_success', u'publish': {u'status': u'SUCCESS', u'message': u'Plan created.'}, u'on-success': u'notify_zaqar'}, u'verify_container_doesnt_exist': {u'name': u'verify_container_doesnt_exist', u'on-error': u'create_container', u'on-success': u'notify_zaqar', u'publish': {u'status': u'FAILED', u'message': u'Unable to create plan. The Swift container already exists'}, u'version': u'2.0', u'action': u'swift.head_container container=<% $.container %>', u'type': u'direct'}, u'notify_zaqar': {u'retry': u'count=5 delay=1', u'name': u'notify_zaqar', u'on-success': [{u'fail': u'<% $.get(\\'status\\') = \"FAILED\" %>'}], u'version': u'2.0', u'action': u'zaqar.queue_post', u'input': {u'queue_name': u'<% $.queue_name %>', u'messages': {u'body': {u'type': u'tripleo.plan_management.v1.create_deployment_plan', u'payload': {u'status': u'<% $.status %>', u'message': u\"<% $.get('message', '') %>\", u'execution': u'<% execution() %>'}}}}, u'type': u'direct'}, u'cleanup_temporary_files': {u'action': u'tripleo.git.clean container=<% $.container %>', u'version': u'2.0', u'type': u'direct', u'name': u'cleanup_temporary_files'}, u'create_container': {u'name': u'create_container', u'on-error': u'create_container_set_status_failed', u'on-success': u'templates_source_check', u'version': u'2.0', u'action': u'tripleo.plan.create_container container=<% $.container %>', u'type': u'direct'}, u'ensure_passwords_exist_set_status_failed': {u'version': u'2.0', u'type': 
u'direct', u'name': u'ensure_passwords_exist_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(ensure_passwords_exist).result %>'}, u'on-success': u'notify_zaqar'}}, u'name': u'create_deployment_plan', u'tags': [u'tripleo-common-managed'], u'version': u'2.0', u'input': [u'container', {u'source_url': None}, {u'queue_name': u'tripleo'}, {u'generate_passwords': True}, {u'use_default_templates': False}], u'description': u'This workflow provides the capability to create a deployment plan using the default heat templates provided in a standard TripleO undercloud deployment, heat templates contained in an external git repository, or a swift container that already contains templates.\\n'}, u'params': {u'namespace': u''}, u'input': {u'generate_passwords': True, u'use_default_templates': True, u'queue_name': u'2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'container': u'overcloud', u'source_url': None}, u'id': u'3d2a008d-d243-470f-917b-9c07c49e7c3f'}}}}}']\n ZaqarAction.queue_post failed: Error response from Zaqar. Code: 500. Title: Internal server error. 
Description: (pymysql.err.ProgrammingError) (1146, u\"Table 'zaqar.Queues' doesn't exist\") [SQL: u'SELECT `Queues`.metadata \\nFROM `Queues` \\nWHERE `Queues`.project = %(project_1)s AND `Queues`.name = %(name_1)s'] [parameters: {u'name_1': '2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'project_1': u'13835fbb8e0947a9b3fa174b9a22cdb9'}] (Background on this error at: http://sqlalche.me/e/f405).\n [action_ex_id=5331a8c3-aca1-4e37-a559-f93896d0fe2a, idx=1]: Failed to run action [action_ex_id=5331a8c3-aca1-4e37-a559-f93896d0fe2a, action_cls='<class 'mistral.actions.action_factory.ZaqarAction'>', attributes='{u'client_method_name': u'queue_post'}', params='{u'queue_name': u'2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'messages': {…}}']\n ZaqarAction.queue_post failed: Error response from Zaqar. Code: 500. Title: Internal server error. 
Description: (pymysql.err.ProgrammingError) (1146, u\"Table 'zaqar.Queues' doesn't exist\") [SQL: u'SELECT `Queues`.metadata \\nFROM `Queues` \\nWHERE `Queues`.project = %(project_1)s AND `Queues`.name = %(name_1)s'] [parameters: {u'name_1': '2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'project_1': u'13835fbb8e0947a9b3fa174b9a22cdb9'}] (Background on this error at: http://sqlalche.me/e/f405).\n [action_ex_id=62fe26ef-36b6-4b29-ae9e-47f56efad77b, idx=2]: Failed to run action [action_ex_id=62fe26ef-36b6-4b29-ae9e-47f56efad77b, action_cls='<class 'mistral.actions.action_factory.ZaqarAction'>', attributes='{u'client_method_name': u'queue_post'}', params='{u'queue_name': u'2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'messages': {…}}']\n ZaqarAction.queue_post failed: Error response from Zaqar. Code: 500. Title: Internal server error. 
Description: (pymysql.err.ProgrammingError) (1146, u\"Table 'zaqar.Queues' doesn't exist\") [SQL: u'SELECT `Queues`.metadata \\nFROM `Queues` \\nWHERE `Queues`.project = %(project_1)s AND `Queues`.name = %(name_1)s'] [parameters: {u'name_1': '2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'project_1': u'13835fbb8e0947a9b3fa174b9a22cdb9'}] (Background on this error at: http://sqlalche.me/e/f405).\n [action_ex_id=6acf6f9e-680b-4e4d-acc2-d905440e1822, idx=3]: Failed to run action [action_ex_id=6acf6f9e-680b-4e4d-acc2-d905440e1822, action_cls='<class 'mistral.actions.action_factory.ZaqarAction'>', attributes='{u'client_method_name': u'queue_post'}', params='{u'queue_name': u'2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'messages': {…}}']\n ZaqarAction.queue_post failed: Error response from Zaqar. Code: 500. Title: Internal server error. 
Description: (pymysql.err.ProgrammingError) (1146, u\"Table 'zaqar.Queues' doesn't exist\") [SQL: u'SELECT `Queues`.metadata \\nFROM `Queues` \\nWHERE `Queues`.project = %(project_1)s AND `Queues`.name = %(name_1)s'] [parameters: {u'name_1': '2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'project_1': u'13835fbb8e0947a9b3fa174b9a22cdb9'}] (Background on this error at: http://sqlalche.me/e/f405).\n [action_ex_id=b844ff2c-b34e-4a84-915d-3592a0f7976b, idx=4]: Failed to run action [action_ex_id=b844ff2c-b34e-4a84-915d-3592a0f7976b, action_cls='<class 'mistral.actions.action_factory.ZaqarAction'>', attributes='{u'client_method_name': u'queue_post'}', params='{...}']\n ZaqarAction.queue_post failed: Error response from Zaqar. Code: 500. Title: Internal server error. 
Description: (pymysql.err.ProgrammingError) (1146, u\"Table 'zaqar.Queues' doesn't exist\") [SQL: u'SELECT `Queues`.metadata \\nFROM `Queues` \\nWHERE `Queues`.project = %(project_1)s AND `Queues`.name = %(name_1)s'] [parameters: {u'name_1': '2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'project_1': u'13835fbb8e0947a9b3fa174b9a22cdb9'}] (Background on this error at: http://sqlalche.me/e/f405).\n [action_ex_id=cd52422f-5aec-4880-9286-a3885fbbab1b, idx=5]: Failed to run action [action_ex_id=cd52422f-5aec-4880-9286-a3885fbbab1b, action_cls='<class 'mistral.actions.action_factory.ZaqarAction'>', attributes='{u'client_method_name': u'queue_post'}', params='{...}']\n ZaqarAction.queue_post failed: Error response from Zaqar. Code: 500. Title: Internal server error. 
Description: (pymysql.err.ProgrammingError) (1146, u\"Table 'zaqar.Queues' doesn't exist\") [SQL: u'SELECT `Queues`.metadata \\nFROM `Queues` \\nWHERE `Queues`.project = %(project_1)s AND `Queues`.name = %(name_1)s'] [parameters: {u'name_1': '2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'project_1': u'13835fbb8e0947a9b3fa174b9a22cdb9'}] (Background on this error at: http://sqlalche.me/e/f405).\n", "description": "", "state": "ERROR", "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": null, "updated_at": "2018-06-26 04:27:51", "workflow_id": "f4a9f0ce-b6ab-4ced-be9e-528e0c527e31", "params": "{\"namespace\": \"\"}", "workflow_namespace": "", "output": "{\"status\": \"SUCCESS\", \"message\": \"Plan created.\", \"result\": \"Failure caused by error in tasks: notify_zaqar\\n\\n notify_zaqar [task_ex_id=1a2b25c9-3deb-40dc-90de-d8d2dd80e1ba] -> Failed to run action [action_ex_id=cd52422f-5aec-4880-9286-a3885fbbab1b, action_cls='<class 'mistral.actions.action_factory.ZaqarAction'>', attributes='{u'client_method_name': u'queue_post'}', params='{...}']\\n ZaqarAction.queue_post failed: Error response from Zaqar. Code: 500. Title: Internal server error. 
Description: (pymysql.err.ProgrammingError) (1146, u\\\"Table 'zaqar.Queues' doesn't exist\\\") [SQL: u'SELECT `Queues`.metadata \\\\nFROM `Queues` \\\\nWHERE `Queues`.project = %(project_1)s AND `Queues`.name = %(name_1)s'] [parameters: {u'name_1': '2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'project_1': u'13835fbb8e0947a9b3fa174b9a22cdb9'}] (Background on this error at: http://sqlalche.me/e/f405).\\n [action_ex_id=44598c23-bf24-41d7-aa2a-a4e8dcc4026e, idx=0]: Failed to run action [action_ex_id=44598c23-bf24-41d7-aa2a-a4e8dcc4026e, action_cls='<class 'mistral.actions.action_factory.ZaqarAction'>', attributes='{u'client_method_name': u'queue_post'}', params='{...}']\\n ZaqarAction.queue_post failed: Error response from Zaqar. Code: 500. Title: Internal server error. 
Description: (pymysql.err.ProgrammingError) (1146, u\\\"Table 'zaqar.Queues' doesn't exist\\\") [SQL: u'SELECT `Queues`.metadata \\\\nFROM `Queues` \\\\nWHERE `Queues`.project = %(project_1)s AND `Queues`.name = %(name_1)s'] [parameters: {u'name_1': '2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'project_1': u'13835fbb8e0947a9b3fa174b9a22cdb9'}] (Background on this error at: http://sqlalche.me/e/f405).\\n [action_ex_id=5331a8c3-aca1-4e37-a559-f93896d0fe2a, idx=1]: Failed to run action [action_ex_id=5331a8c3-aca1-4e37-a559-f93896d0fe2a, action_cls='<class 'mistral.actions.action_factory.ZaqarAction'>', attributes='{u'client_method_name': u'queue_post'}', params='{u'queue_name': u'2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'messages': {u'body': {u'type': u'tripleo.plan_management.v1.create_deployment_plan', u'payload': {u'status': u'SUCCESS', u'message': u'Plan created.', u'execution': {...}}}}}']\\n ZaqarAction.queue_post failed: Error response from Zaqar. Code: 500. Title: Internal server error.
Description: (pymysql.err.ProgrammingError) (1146, u\\\"Table 'zaqar.Queues' doesn't exist\\\") [SQL: u'SELECT `Queues`.metadata \\\\nFROM `Queues` \\\\nWHERE `Queues`.project = %(project_1)s AND `Queues`.name = %(name_1)s'] [parameters: {u'name_1': '2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'project_1': u'13835fbb8e0947a9b3fa174b9a22cdb9'}] (Background on this error at: http://sqlalche.me/e/f405).\\n [action_ex_id=62fe26ef-36b6-4b29-ae9e-47f56efad77b, idx=2]: Failed to run action [action_ex_id=62fe26ef-36b6-4b29-ae9e-47f56efad77b, action_cls='<class 'mistral.actions.action_factory.ZaqarAction'>', attributes='{u'client_method_name': u'queue_post'}', params='{u'queue_name': u'2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'messages': {u'body': {u'type': u'tripleo.plan_management.v1.create_deployment_plan', u'payload': {u'status': u'SUCCESS', u'message': u'Plan created.', u'execution': {...}}}}}']\\n ZaqarAction.queue_post failed: Error response from Zaqar. Code: 500. Title: Internal server error.
Description: (pymysql.err.ProgrammingError) (1146, u\\\"Table 'zaqar.Queues' doesn't exist\\\") [SQL: u'SELECT `Queues`.metadata \\\\nFROM `Queues` \\\\nWHERE `Queues`.project = %(project_1)s AND `Queues`.name = %(name_1)s'] [parameters: {u'name_1': '2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'project_1': u'13835fbb8e0947a9b3fa174b9a22cdb9'}] (Background on this error at: http://sqlalche.me/e/f405).\\n [action_ex_id=6acf6f9e-680b-4e4d-acc2-d905440e1822, idx=3]: Failed to run action [action_ex_id=6acf6f9e-680b-4e4d-acc2-d905440e1822, action_cls='<class 'mistral.actions.action_factory.ZaqarAction'>', attributes='{u'client_method_name': u'queue_post'}', params='{u'queue_name': u'2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'messages': {u'body': {u'type': u'tripleo.plan_management.v1.create_deployment_plan', u'payload': {u'status': u'SUCCESS', u'message': u'Plan created.', u'execution': {...}}}}}']\\n ZaqarAction.queue_post failed: Error response from Zaqar. Code: 500. Title: Internal server error.
Description: (pymysql.err.ProgrammingError) (1146, u\\\"Table 'zaqar.Queues' doesn't exist\\\") [SQL: u'SELECT `Queues`.metadata \\\\nFROM `Queues` \\\\nWHERE `Queues`.project = %(project_1)s AND `Queues`.name = %(name_1)s'] [parameters: {u'name_1': '2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'project_1': u'13835fbb8e0947a9b3fa174b9a22cdb9'}] (Background on this error at: http://sqlalche.me/e/f405).\\n [action_ex_id=b844ff2c-b34e-4a84-915d-3592a0f7976b, idx=4]: Failed to run action [action_ex_id=b844ff2c-b34e-4a84-915d-3592a0f7976b, action_cls='<class 'mistral.actions.action_factory.ZaqarAction'>', attributes='{u'client_method_name': u'queue_post'}', params='{u'queue_name': u'2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'messages': {u'body': {u'type': u'tripleo.plan_management.v1.create_deployment_plan', u'payload': {u'status': u'SUCCESS', u'message': u'Plan created.', u'execution': {...}}}}}']\\n ZaqarAction.queue_post failed: Error response from Zaqar. Code: 500. Title: Internal server error.
Description: (pymysql.err.ProgrammingError) (1146, u\\\"Table 'zaqar.Queues' doesn't exist\\\") [SQL: u'SELECT `Queues`.metadata \\\\nFROM `Queues` \\\\nWHERE `Queues`.project = %(project_1)s AND `Queues`.name = %(name_1)s'] [parameters: {u'name_1': '2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'project_1': u'13835fbb8e0947a9b3fa174b9a22cdb9'}] (Background on this error at: http://sqlalche.me/e/f405).\\n [action_ex_id=cd52422f-5aec-4880-9286-a3885fbbab1b, idx=5]: Failed to run action [action_ex_id=cd52422f-5aec-4880-9286-a3885fbbab1b, action_cls='<class 'mistral.actions.action_factory.ZaqarAction'>', attributes='{u'client_method_name': u'queue_post'}', params='{u'queue_name': u'2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'messages': {u'body': {u'type': u'tripleo.plan_management.v1.create_deployment_plan', u'payload': {u'status': u'SUCCESS', u'message': u'Plan created.', u'execution': {u'name': u'tripleo.plan_management.v1.create_deployment_plan', u'created_at': u'2018-06-26 04:26:49', u'updated_at': u'2018-06-26 04:26:49', u'spec': {u'tasks': {u'templates_source_check': {u'version': u'2.0', u'type': u'direct', u'name': u'templates_source_check', u'on-success': [{u'upload_default_templates': u'<% $.use_default_templates = true %>'}, {u'clone_git_repo': u'<% $.source_url != null %>'}]}, u'create_plan': {u'version': u'2.0', u'type': u'direct', u'name': u'create_plan', u'on-success': [{u'ensure_passwords_exist': u'<% $.generate_passwords = true %>'}, {u'add_root_stack_name': u'<% $.generate_passwords != true %>'}]}, u'upload_templates_directory_set_status_failed': {u'version': u'2.0', u'type': u'direct', u'name': u'upload_templates_directory_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(upload_templates_directory).result %>'}, u'on-success': u'notify_zaqar'}, u'ensure_passwords_exist': {u'name': u'ensure_passwords_exist', u'on-error': u'ensure_passwords_exist_set_status_failed', u'on-success': u'add_root_stack_name', u'version': u'2.0', u'action': 
u'tripleo.parameters.generate_passwords container=<% $.container %>', u'type': u'direct'}, u'process_templates_set_status_failed': {u'version': u'2.0', u'type': u'direct', u'name': u'process_templates_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(process_templates).result %>'}, u'on-success': u'notify_zaqar'}, u'container_images_prepare': {u'name': u'container_images_prepare', u'on-error': u'container_images_prepare_set_status_failed', u'on-success': u'process_templates', u'version': u'2.0', u'action': u'tripleo.container_images.prepare container=<% $.container %>', u'type': u'direct', u'description': u'Populate all container image parameters with default values.\\\\n'}, u'process_templates': {u'name': u'process_templates', u'on-error': u'process_templates_set_status_failed', u'on-success': u'set_status_success', u'version': u'2.0', u'action': u'tripleo.templates.process container=<% $.container %>', u'type': u'direct'}, u'container_required_check': {u'on-success': [{u'verify_container_doesnt_exist': u'<% $.use_default_templates or $.source_url %>'}, {u'create_plan': u'<% $.use_default_templates = false and $.source_url = null %>'}], u'version': u'2.0', u'type': u'direct', u'description': u'If using the default templates or importing templates from a git repository, a new container needs to be created. 
If using an existing container containing templates, skip straight to create_plan.\\\\n', u'name': u'container_required_check'}, u'clone_git_repo': {u'name': u'clone_git_repo', u'on-error': u'clone_git_repo_set_status_failed', u'on-success': u'upload_templates_directory', u'version': u'2.0', u'action': u'tripleo.git.clone container=<% $.container %> url=<% $.source_url %>', u'type': u'direct'}, u'add_root_stack_name': {u'name': u'add_root_stack_name', u'on-error': u'notify_zaqar', u'on-success': u'container_images_prepare', u'publish-on-error': {u'status': u'FAILED', u'message': u'<% task().result %>'}, u'version': u'2.0', u'action': u'tripleo.parameters.update', u'input': {u'container': u'<% $.container %>', u'parameters': {u'RootStackName': u'<% $.container %>'}}, u'type': u'direct'}, u'upload_default_templates': {u'name': u'upload_default_templates', u'on-error': u'upload_to_container_set_status_failed', u'on-success': u'create_plan', u'version': u'2.0', u'action': u'tripleo.templates.upload container=<% $.container %>', u'type': u'direct'}, u'clone_git_repo_set_status_failed': {u'version': u'2.0', u'type': u'direct', u'name': u'clone_git_repo_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(clone_git_repo).result %>'}, u'on-success': u'notify_zaqar'}, u'upload_to_container_set_status_failed': {u'version': u'2.0', u'type': u'direct', u'name': u'upload_to_container_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(upload_default_templates).result %>'}, u'on-success': u'notify_zaqar'}, u'upload_templates_directory': {u'name': u'upload_templates_directory', u'on-error': u'upload_templates_directory_set_status_failed', u'on-success': u'create_plan', u'version': u'2.0', u'action': u'tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %>', u'on-complete': u'cleanup_temporary_files', u'type': u'direct'}, u'container_images_prepare_set_status_failed': {u'version': 
u'2.0', u'type': u'direct', u'name': u'container_images_prepare_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(container_images_prepare).result %>'}, u'on-success': u'notify_zaqar'}, u'create_container_set_status_failed': {u'version': u'2.0', u'type': u'direct', u'name': u'create_container_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(create_container).result %>'}, u'on-success': u'notify_zaqar'}, u'set_status_success': {u'version': u'2.0', u'type': u'direct', u'name': u'set_status_success', u'publish': {u'status': u'SUCCESS', u'message': u'Plan created.'}, u'on-success': u'notify_zaqar'}, u'verify_container_doesnt_exist': {u'name': u'verify_container_doesnt_exist', u'on-error': u'create_container', u'on-success': u'notify_zaqar', u'publish': {u'status': u'FAILED', u'message': u'Unable to create plan. The Swift container already exists'}, u'version': u'2.0', u'action': u'swift.head_container container=<% $.container %>', u'type': u'direct'}, u'notify_zaqar': {u'retry': u'count=5 delay=1', u'name': u'notify_zaqar', u'on-success': [{u'fail': u'<% $.get(\\\\'status\\\\') = \\\"FAILED\\\" %>'}], u'version': u'2.0', u'action': u'zaqar.queue_post', u'input': {u'queue_name': u'<% $.queue_name %>', u'messages': {u'body': {u'type': u'tripleo.plan_management.v1.create_deployment_plan', u'payload': {u'status': u'<% $.status %>', u'message': u\\\"<% $.get('message', '') %>\\\", u'execution': u'<% execution() %>'}}}}, u'type': u'direct'}, u'cleanup_temporary_files': {u'action': u'tripleo.git.clean container=<% $.container %>', u'version': u'2.0', u'type': u'direct', u'name': u'cleanup_temporary_files'}, u'create_container': {u'name': u'create_container', u'on-error': u'create_container_set_status_failed', u'on-success': u'templates_source_check', u'version': u'2.0', u'action': u'tripleo.plan.create_container container=<% $.container %>', u'type': u'direct'}, u'ensure_passwords_exist_set_status_failed': {u'version': 
u'2.0', u'type': u'direct', u'name': u'ensure_passwords_exist_set_status_failed', u'publish': {u'status': u'FAILED', u'message': u'<% task(ensure_passwords_exist).result %>'}, u'on-success': u'notify_zaqar'}}, u'name': u'create_deployment_plan', u'tags': [u'tripleo-common-managed'], u'version': u'2.0', u'input': [u'container', {u'source_url': None}, {u'queue_name': u'tripleo'}, {u'generate_passwords': True}, {u'use_default_templates': False}], u'description': u'This workflow provides the capability to create a deployment plan using the default heat templates provided in a standard TripleO undercloud deployment, heat templates contained in an external git repository, or a swift container that already contains templates.\\\\n'}, u'params': {u'namespace': u''}, u'input': {u'generate_passwords': True, u'use_default_templates': True, u'queue_name': u'2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'container': u'overcloud', u'source_url': None}, u'id': u'3d2a008d-d243-470f-917b-9c07c49e7c3f'}}}}}']\\n ZaqarAction.queue_post failed: Error response from Zaqar. Code: 500. Title: Internal server error. 
Description: (pymysql.err.ProgrammingError) (1146, u\\\"Table 'zaqar.Queues' doesn't exist\\\") [SQL: u'SELECT `Queues`.metadata \\\\nFROM `Queues` \\\\nWHERE `Queues`.project = %(project_1)s AND `Queues`.name = %(name_1)s'] [parameters: {u'name_1': '2ccee8b6-1fb0-4139-b49a-fcd230d8af95', u'project_1': u'13835fbb8e0947a9b3fa174b9a22cdb9'}] (Background on this error at: http://sqlalche.me/e/f405).\\n\"}", "input": "{\"generate_passwords\": true, \"use_default_templates\": true, \"queue_name\": \"2ccee8b6-1fb0-4139-b49a-fcd230d8af95\", \"container\": \"overcloud\", \"source_url\": null}", "created_at": "2018-06-26 04:26:49", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "3d2a008d-d243-470f-917b-9c07c49e7c3f"} > >2018-06-26 09:57:54,381 DEBUG: HTTP GET http://192.0.3.1:8989/v2/executions/3d2a008d-d243-470f-917b-9c07c49e7c3f 200 >2018-06-26 09:57:54,385 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8989/v2/action_executions -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}0fceabf054a32365c99aa3d148232b7b0baa57cf" >2018-06-26 09:57:54,422 DEBUG: http://192.0.3.1:8989 "GET /v2/action_executions HTTP/1.1" 200 56954 >2018-06-26 09:57:54,463 DEBUG: RESP: [200] Content-Length: 56954 Content-Type: application/json Date: Tue, 26 Jun 2018 04:27:54 GMT Connection: keep-alive >RESP BODY: {"action_executions": [{"state_info": null, "created_at": "2018-06-26 04:26:49", "accepted": true, "name": "std.noop", "tags": null, "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": "f6a551bc-84bc-449f-87f5-8153d9d0552e", "updated_at": "2018-06-26 04:26:49", "state": "SUCCESS", "workflow_namespace": "", "task_name": "container_required_check", "input": "{}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "68d8f0f2-e7da-49d5-a69b-c7b2f82642b0", "description": ""}, {"state_info": null, "created_at": "2018-06-26 04:26:49", "accepted": true, "name": "tripleo.templates.upload", "tags": 
null, "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": "333fd966-a370-4bae-a376-7c8d964f8341", "updated_at": "2018-06-26 04:26:57", "state": "SUCCESS", "workflow_namespace": "", "task_name": "upload_default_templates", "input": "{\"container\": \"overcloud\"}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "bae6202c-d7ba-459d-8066-261d55f80df8", "description": ""}, {"state_info": null, "created_at": "2018-06-26 04:26:49", "accepted": true, "name": "std.noop", "tags": null, "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": "090c91f6-5af0-4a68-80e1-c0f66b584656", "updated_at": "2018-06-26 04:26:49", "state": "SUCCESS", "workflow_namespace": "", "task_name": "templates_source_check", "input": "{}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "cacc7a85-ed50-4421-9c96-c29d57641fef", "description": ""}, {"state_info": null, "created_at": "2018-06-26 04:26:49", "accepted": true, "name": "tripleo.plan.create_container", "tags": null, "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": "30593ea1-7a5c-4883-8ef0-26131183109c", "updated_at": "2018-06-26 04:26:49", "state": "SUCCESS", "workflow_namespace": "", "task_name": "create_container", "input": "{\"container\": \"overcloud\"}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "d4e5adfc-d51e-4bb0-9ae9-2db62c46e5d4", "description": ""}, {"state_info": null, "created_at": "2018-06-26 04:26:49", "accepted": true, "name": "swift.head_container", "tags": null, "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": "59cb4e87-7629-43c3-9d67-e1e93872881b", "updated_at": "2018-06-26 04:26:49", "state": "ERROR", "workflow_namespace": "", "task_name": "verify_container_doesnt_exist", "input": "{\"container\": \"overcloud\"}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "f0863712-bd2b-459d-b3f2-e8880e43c3d6", "description": ""}, 
{"state_info": null, "created_at": "2018-06-26 04:26:57", "accepted": true, "name": "std.noop", "tags": null, "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": "979ce851-0086-4e6f-9837-ce783254c94c", "updated_at": "2018-06-26 04:26:58", "state": "SUCCESS", "workflow_namespace": "", "task_name": "create_plan", "input": "{}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "3a17fce2-c91b-480c-9863-8b3620691585", "description": ""}, {"state_info": null, "created_at": "2018-06-26 04:26:58", "accepted": true, "name": "tripleo.parameters.generate_passwords", "tags": null, "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": "f9235812-f983-4edf-9e06-6185fab6d5ab", "updated_at": "2018-06-26 04:27:00", "state": "SUCCESS", "workflow_namespace": "", "task_name": "ensure_passwords_exist", "input": "{\"container\": \"overcloud\"}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "d511b94c-d4c9-49a2-ac55-1b235fbd85d6", "description": ""}, {"state_info": null, "created_at": "2018-06-26 04:27:00", "accepted": true, "name": "tripleo.parameters.update", "tags": null, "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": "a67ead67-50bb-4c56-ad9f-8fcc512df9d1", "updated_at": "2018-06-26 04:27:22", "state": "SUCCESS", "workflow_namespace": "", "task_name": "add_root_stack_name", "input": "{\"container\": \"overcloud\", \"parameters\": {\"RootStackName\": \"overcloud\"}}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "71338c54-fbb6-45f1-8e1e-590ecc203aa7", "description": ""}, {"state_info": null, "created_at": "2018-06-26 04:27:22", "accepted": true, "name": "tripleo.container_images.prepare", "tags": null, "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": "29412236-18b5-4e14-b7e9-4a888abddf42", "updated_at": "2018-06-26 04:27:23", "state": "SUCCESS", "workflow_namespace": "", "task_name": 
"container_images_prepare", "input": "{\"container\": \"overcloud\"}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "e42c73f1-c31b-4167-bf90-0f9200ed96c8", "description": ""}, {"state_info": null, "created_at": "2018-06-26 04:27:23", "accepted": true, "name": "tripleo.templates.process", "tags": null, "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": "c8dd6d38-b7eb-4625-84fa-b886a056a670", "updated_at": "2018-06-26 04:27:38", "state": "SUCCESS", "workflow_namespace": "", "task_name": "process_templates", "input": "{\"container\": \"overcloud\"}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "fe06250f-4200-485a-845c-b24628eb6d0f", "description": ""}, {"state_info": null, "created_at": "2018-06-26 04:27:38", "accepted": false, "name": "zaqar.queue_post", "tags": null, "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": "1a2b25c9-3deb-40dc-90de-d8d2dd80e1ba", "updated_at": "2018-06-26 04:27:40", "state": "ERROR", "workflow_namespace": "", "task_name": "notify_zaqar", "input": "{\"queue_name\": \"2ccee8b6-1fb0-4139-b49a-fcd230d8af95\", \"messages\": {\"body\": {\"type\": \"tripleo.plan_management.v1.create_deployment_plan\", \"payload\": {\"status\": \"SUCCESS\", \"message\": \"Plan created.\", \"execution\": {\"name\": \"tripleo.plan_management.v1.create_deployment_plan\", \"created_at\": \"2018-06-26 04:26:49\", \"updated_at\": \"2018-06-26 04:26:49\", \"id\": \"3d2a008d-d243-470f-917b-9c07c49e7c3f\", \"params\": {\"namespace\": \"\"}, \"input\": {\"generate_passwords\": true, \"use_default_templates\": true, \"queue_name\": \"2ccee8b6-1fb0-4139-b49a-fcd230d8af95\", \"container\": \"overcloud\", \"source_url\": null}, \"spec\": {\"tasks\": {\"templates_source_check\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"templates_source_check\", \"on-success\": [{\"upload_default_templates\": \"<% $.use_default_templates = true %>\"}, {\"clone_git_repo\": \"<% 
$.source_url != null %>\"}]}, \"create_plan\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"create_plan\", \"on-success\": [{\"ensure_passwords_exist\": \"<% $.generate_passwords = true %>\"}, {\"add_root_stack_name\": \"<% $.generate_passwords != true %>\"}]}, \"upload_templates_directory_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"upload_templates_directory_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(upload_templates_directory).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"ensure_passwords_exist\": {\"name\": \"ensure_passwords_exist\", \"on-error\": \"ensure_passwords_exist_set_status_failed\", \"on-success\": \"add_root_stack_name\", \"version\": \"2.0\", \"action\": \"tripleo.parameters.generate_passwords container=<% $.container %>\", \"type\": \"direct\"}, \"process_templates_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"process_templates_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(process_templates).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"add_root_stack_name\": {\"name\": \"add_root_stack_name\", \"on-error\": \"notify_zaqar\", \"on-success\": \"container_images_prepare\", \"publish-on-error\": {\"status\": \"FAILED\", \"message\": \"<% task().result %>\"}, \"version\": \"2.0\", \"action\": \"tripleo.parameters.update\", \"input\": {\"container\": \"<% $.container %>\", \"parameters\": {\"RootStackName\": \"<% $.container %>\"}}, \"type\": \"direct\"}, \"process_templates\": {\"name\": \"process_templates\", \"on-error\": \"process_templates_set_status_failed\", \"on-success\": \"set_status_success\", \"version\": \"2.0\", \"action\": \"tripleo.templates.process container=<% $.container %>\", \"type\": \"direct\"}, \"container_required_check\": {\"version\": \"2.0\", \"type\": \"direct\", \"description\": \"If using the default templates or importing templates from a git repository, a new 
container needs to be created. If using an existing container containing templates, skip straight to create_plan.\\n\", \"name\": \"container_required_check\", \"on-success\": [{\"verify_container_doesnt_exist\": \"<% $.use_default_templates or $.source_url %>\"}, {\"create_plan\": \"<% $.use_default_templates = false and $.source_url = null %>\"}]}, \"clone_git_repo\": {\"name\": \"clone_git_repo\", \"on-error\": \"clone_git_repo_set_status_failed\", \"on-success\": \"upload_templates_directory\", \"version\": \"2.0\", \"action\": \"tripleo.git.clone container=<% $.container %> url=<% $.source_url %>\", \"type\": \"direct\"}, \"container_images_prepare\": {\"name\": \"container_images_prepare\", \"on-error\": \"container_images_prepare_set_status_failed\", \"on-success\": \"process_templates\", \"version\": \"2.0\", \"action\": \"tripleo.container_images.prepare container=<% $.container %>\", \"type\": \"direct\", \"description\": \"Populate all container image parameters with default values.\\n\"}, \"upload_default_templates\": {\"name\": \"upload_default_templates\", \"on-error\": \"upload_to_container_set_status_failed\", \"on-success\": \"create_plan\", \"version\": \"2.0\", \"action\": \"tripleo.templates.upload container=<% $.container %>\", \"type\": \"direct\"}, \"clone_git_repo_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"clone_git_repo_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(clone_git_repo).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"upload_to_container_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"upload_to_container_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(upload_default_templates).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"upload_templates_directory\": {\"name\": \"upload_templates_directory\", \"on-error\": \"upload_templates_directory_set_status_failed\", \"on-success\": \"create_plan\", 
\"version\": \"2.0\", \"action\": \"tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %>\", \"on-complete\": \"cleanup_temporary_files\", \"type\": \"direct\"}, \"container_images_prepare_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"container_images_prepare_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(container_images_prepare).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"create_container_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"create_container_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(create_container).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"set_status_success\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"set_status_success\", \"publish\": {\"status\": \"SUCCESS\", \"message\": \"Plan created.\"}, \"on-success\": \"notify_zaqar\"}, \"verify_container_doesnt_exist\": {\"name\": \"verify_container_doesnt_exist\", \"on-error\": \"create_container\", \"on-success\": \"notify_zaqar\", \"publish\": {\"status\": \"FAILED\", \"message\": \"Unable to create plan. 
The Swift container already exists\"}, \"version\": \"2.0\", \"action\": \"swift.head_container container=<% $.container %>\", \"type\": \"direct\"}, \"notify_zaqar\": {\"retry\": \"count=5 delay=1\", \"name\": \"notify_zaqar\", \"on-success\": [{\"fail\": \"<% $.get('status') = \\\"FAILED\\\" %>\"}], \"version\": \"2.0\", \"action\": \"zaqar.queue_post\", \"input\": {\"queue_name\": \"<% $.queue_name %>\", \"messages\": {\"body\": {\"type\": \"tripleo.plan_management.v1.create_deployment_plan\", \"payload\": {\"status\": \"<% $.status %>\", \"message\": \"<% $.get('message', '') %>\", \"execution\": \"<% execution() %>\"}}}}, \"type\": \"direct\"}, \"cleanup_temporary_files\": {\"action\": \"tripleo.git.clean container=<% $.container %>\", \"version\": \"2.0\", \"type\": \"direct\", \"name\": \"cleanup_temporary_files\"}, \"create_container\": {\"name\": \"create_container\", \"on-error\": \"create_container_set_status_failed\", \"on-success\": \"templates_source_check\", \"version\": \"2.0\", \"action\": \"tripleo.plan.create_container container=<% $.container %>\", \"type\": \"direct\"}, \"ensure_passwords_exist_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"ensure_passwords_exist_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(ensure_passwords_exist).result %>\"}, \"on-success\": \"notify_zaqar\"}}, \"description\": \"This workflow provides the capability to create a deployment plan using the default heat templates provided in a standard TripleO undercloud deployment, heat templates contained in an external git repository, or a swift container that already contains templates.\\n\", \"tags\": [\"tripleo-common-managed\"], \"version\": \"2.0\", \"input\": [\"container\", {\"source_url\": null}, {\"queue_name\": \"tripleo\"}, {\"generate_passwords\": true}, {\"use_default_templates\": false}], \"name\": \"create_deployment_plan\"}}}}}}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": 
"5331a8c3-aca1-4e37-a559-f93896d0fe2a", "description": ""}, {"state_info": null, "created_at": "2018-06-26 04:27:38", "accepted": true, "name": "std.noop", "tags": null, "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": "d863c5df-ab91-42f4-b7d4-bc5ca0fa3e71", "updated_at": "2018-06-26 04:27:38", "state": "SUCCESS", "workflow_namespace": "", "task_name": "set_status_success", "input": "{}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "bbdac6f5-a816-4dc1-b511-3d3ae083e7a4", "description": ""}, {"state_info": null, "created_at": "2018-06-26 04:27:40", "accepted": false, "name": "zaqar.queue_post", "tags": null, "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": "1a2b25c9-3deb-40dc-90de-d8d2dd80e1ba", "updated_at": "2018-06-26 04:27:42", "state": "ERROR", "workflow_namespace": "", "task_name": "notify_zaqar", "input": "{\"queue_name\": \"2ccee8b6-1fb0-4139-b49a-fcd230d8af95\", \"messages\": {\"body\": {\"type\": \"tripleo.plan_management.v1.create_deployment_plan\", \"payload\": {\"status\": \"SUCCESS\", \"message\": \"Plan created.\", \"execution\": {\"name\": \"tripleo.plan_management.v1.create_deployment_plan\", \"created_at\": \"2018-06-26 04:26:49\", \"updated_at\": \"2018-06-26 04:26:49\", \"id\": \"3d2a008d-d243-470f-917b-9c07c49e7c3f\", \"params\": {\"namespace\": \"\"}, \"input\": {\"generate_passwords\": true, \"use_default_templates\": true, \"queue_name\": \"2ccee8b6-1fb0-4139-b49a-fcd230d8af95\", \"container\": \"overcloud\", \"source_url\": null}, \"spec\": {\"tasks\": {\"templates_source_check\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"templates_source_check\", \"on-success\": [{\"upload_default_templates\": \"<% $.use_default_templates = true %>\"}, {\"clone_git_repo\": \"<% $.source_url != null %>\"}]}, \"create_plan\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"create_plan\", \"on-success\": [{\"ensure_passwords_exist\": \"<% 
$.generate_passwords = true %>\"}, {\"add_root_stack_name\": \"<% $.generate_passwords != true %>\"}]}, \"upload_templates_directory_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"upload_templates_directory_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(upload_templates_directory).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"ensure_passwords_exist\": {\"name\": \"ensure_passwords_exist\", \"on-error\": \"ensure_passwords_exist_set_status_failed\", \"on-success\": \"add_root_stack_name\", \"version\": \"2.0\", \"action\": \"tripleo.parameters.generate_passwords container=<% $.container %>\", \"type\": \"direct\"}, \"process_templates_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"process_templates_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(process_templates).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"add_root_stack_name\": {\"name\": \"add_root_stack_name\", \"on-error\": \"notify_zaqar\", \"on-success\": \"container_images_prepare\", \"publish-on-error\": {\"status\": \"FAILED\", \"message\": \"<% task().result %>\"}, \"version\": \"2.0\", \"action\": \"tripleo.parameters.update\", \"input\": {\"container\": \"<% $.container %>\", \"parameters\": {\"RootStackName\": \"<% $.container %>\"}}, \"type\": \"direct\"}, \"process_templates\": {\"name\": \"process_templates\", \"on-error\": \"process_templates_set_status_failed\", \"on-success\": \"set_status_success\", \"version\": \"2.0\", \"action\": \"tripleo.templates.process container=<% $.container %>\", \"type\": \"direct\"}, \"container_required_check\": {\"version\": \"2.0\", \"type\": \"direct\", \"description\": \"If using the default templates or importing templates from a git repository, a new container needs to be created. 
If using an existing container containing templates, skip straight to create_plan.\\n\", \"name\": \"container_required_check\", \"on-success\": [{\"verify_container_doesnt_exist\": \"<% $.use_default_templates or $.source_url %>\"}, {\"create_plan\": \"<% $.use_default_templates = false and $.source_url = null %>\"}]}, \"clone_git_repo\": {\"name\": \"clone_git_repo\", \"on-error\": \"clone_git_repo_set_status_failed\", \"on-success\": \"upload_templates_directory\", \"version\": \"2.0\", \"action\": \"tripleo.git.clone container=<% $.container %> url=<% $.source_url %>\", \"type\": \"direct\"}, \"container_images_prepare\": {\"name\": \"container_images_prepare\", \"on-error\": \"container_images_prepare_set_status_failed\", \"on-success\": \"process_templates\", \"version\": \"2.0\", \"action\": \"tripleo.container_images.prepare container=<% $.container %>\", \"type\": \"direct\", \"description\": \"Populate all container image parameters with default values.\\n\"}, \"upload_default_templates\": {\"name\": \"upload_default_templates\", \"on-error\": \"upload_to_container_set_status_failed\", \"on-success\": \"create_plan\", \"version\": \"2.0\", \"action\": \"tripleo.templates.upload container=<% $.container %>\", \"type\": \"direct\"}, \"clone_git_repo_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"clone_git_repo_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(clone_git_repo).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"upload_to_container_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"upload_to_container_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(upload_default_templates).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"upload_templates_directory\": {\"name\": \"upload_templates_directory\", \"on-error\": \"upload_templates_directory_set_status_failed\", \"on-success\": \"create_plan\", \"version\": \"2.0\", 
\"action\": \"tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %>\", \"on-complete\": \"cleanup_temporary_files\", \"type\": \"direct\"}, \"container_images_prepare_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"container_images_prepare_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(container_images_prepare).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"create_container_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"create_container_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(create_container).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"set_status_success\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"set_status_success\", \"publish\": {\"status\": \"SUCCESS\", \"message\": \"Plan created.\"}, \"on-success\": \"notify_zaqar\"}, \"verify_container_doesnt_exist\": {\"name\": \"verify_container_doesnt_exist\", \"on-error\": \"create_container\", \"on-success\": \"notify_zaqar\", \"publish\": {\"status\": \"FAILED\", \"message\": \"Unable to create plan. 
The Swift container already exists\"}, \"version\": \"2.0\", \"action\": \"swift.head_container container=<% $.container %>\", \"type\": \"direct\"}, \"notify_zaqar\": {\"retry\": \"count=5 delay=1\", \"name\": \"notify_zaqar\", \"on-success\": [{\"fail\": \"<% $.get('status') = \\\"FAILED\\\" %>\"}], \"version\": \"2.0\", \"action\": \"zaqar.queue_post\", \"input\": {\"queue_name\": \"<% $.queue_name %>\", \"messages\": {\"body\": {\"type\": \"tripleo.plan_management.v1.create_deployment_plan\", \"payload\": {\"status\": \"<% $.status %>\", \"message\": \"<% $.get('message', '') %>\", \"execution\": \"<% execution() %>\"}}}}, \"type\": \"direct\"}, \"cleanup_temporary_files\": {\"action\": \"tripleo.git.clean container=<% $.container %>\", \"version\": \"2.0\", \"type\": \"direct\", \"name\": \"cleanup_temporary_files\"}, \"create_container\": {\"name\": \"create_container\", \"on-error\": \"create_container_set_status_failed\", \"on-success\": \"templates_source_check\", \"version\": \"2.0\", \"action\": \"tripleo.plan.create_container container=<% $.container %>\", \"type\": \"direct\"}, \"ensure_passwords_exist_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"ensure_passwords_exist_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(ensure_passwords_exist).result %>\"}, \"on-success\": \"notify_zaqar\"}}, \"description\": \"This workflow provides the capability to create a deployment plan using the default heat templates provided in a standard TripleO undercloud deployment, heat templates contained in an external git repository, or a swift container that already contains templates.\\n\", \"tags\": [\"tripleo-common-managed\"], \"version\": \"2.0\", \"input\": [\"container\", {\"source_url\": null}, {\"queue_name\": \"tripleo\"}, {\"generate_passwords\": true}, {\"use_default_templates\": false}], \"name\": \"create_deployment_plan\"}}}}}}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": 
"6acf6f9e-680b-4e4d-acc2-d905440e1822", "description": ""}, {"state_info": null, "created_at": "2018-06-26 04:27:42", "accepted": false, "name": "zaqar.queue_post", "tags": null, "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": "1a2b25c9-3deb-40dc-90de-d8d2dd80e1ba", "updated_at": "2018-06-26 04:27:44", "state": "ERROR", "workflow_namespace": "", "task_name": "notify_zaqar", "input": "{\"queue_name\": \"2ccee8b6-1fb0-4139-b49a-fcd230d8af95\", \"messages\": {\"body\": {\"type\": \"tripleo.plan_management.v1.create_deployment_plan\", \"payload\": {\"status\": \"SUCCESS\", \"message\": \"Plan created.\", \"execution\": {\"name\": \"tripleo.plan_management.v1.create_deployment_plan\", \"created_at\": \"2018-06-26 04:26:49\", \"updated_at\": \"2018-06-26 04:26:49\", \"id\": \"3d2a008d-d243-470f-917b-9c07c49e7c3f\", \"params\": {\"namespace\": \"\"}, \"input\": {\"generate_passwords\": true, \"use_default_templates\": true, \"queue_name\": \"2ccee8b6-1fb0-4139-b49a-fcd230d8af95\", \"container\": \"overcloud\", \"source_url\": null}, \"spec\": {\"tasks\": {\"templates_source_check\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"templates_source_check\", \"on-success\": [{\"upload_default_templates\": \"<% $.use_default_templates = true %>\"}, {\"clone_git_repo\": \"<% $.source_url != null %>\"}]}, \"create_plan\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"create_plan\", \"on-success\": [{\"ensure_passwords_exist\": \"<% $.generate_passwords = true %>\"}, {\"add_root_stack_name\": \"<% $.generate_passwords != true %>\"}]}, \"upload_templates_directory_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"upload_templates_directory_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(upload_templates_directory).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"ensure_passwords_exist\": {\"name\": \"ensure_passwords_exist\", \"on-error\": 
\"ensure_passwords_exist_set_status_failed\", \"on-success\": \"add_root_stack_name\", \"version\": \"2.0\", \"action\": \"tripleo.parameters.generate_passwords container=<% $.container %>\", \"type\": \"direct\"}, \"process_templates_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"process_templates_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(process_templates).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"add_root_stack_name\": {\"name\": \"add_root_stack_name\", \"on-error\": \"notify_zaqar\", \"on-success\": \"container_images_prepare\", \"publish-on-error\": {\"status\": \"FAILED\", \"message\": \"<% task().result %>\"}, \"version\": \"2.0\", \"action\": \"tripleo.parameters.update\", \"input\": {\"container\": \"<% $.container %>\", \"parameters\": {\"RootStackName\": \"<% $.container %>\"}}, \"type\": \"direct\"}, \"process_templates\": {\"name\": \"process_templates\", \"on-error\": \"process_templates_set_status_failed\", \"on-success\": \"set_status_success\", \"version\": \"2.0\", \"action\": \"tripleo.templates.process container=<% $.container %>\", \"type\": \"direct\"}, \"container_required_check\": {\"version\": \"2.0\", \"type\": \"direct\", \"description\": \"If using the default templates or importing templates from a git repository, a new container needs to be created. 
If using an existing container containing templates, skip straight to create_plan.\\n\", \"name\": \"container_required_check\", \"on-success\": [{\"verify_container_doesnt_exist\": \"<% $.use_default_templates or $.source_url %>\"}, {\"create_plan\": \"<% $.use_default_templates = false and $.source_url = null %>\"}]}, \"clone_git_repo\": {\"name\": \"clone_git_repo\", \"on-error\": \"clone_git_repo_set_status_failed\", \"on-success\": \"upload_templates_directory\", \"version\": \"2.0\", \"action\": \"tripleo.git.clone container=<% $.container %> url=<% $.source_url %>\", \"type\": \"direct\"}, \"container_images_prepare\": {\"name\": \"container_images_prepare\", \"on-error\": \"container_images_prepare_set_status_failed\", \"on-success\": \"process_templates\", \"version\": \"2.0\", \"action\": \"tripleo.container_images.prepare container=<% $.container %>\", \"type\": \"direct\", \"description\": \"Populate all container image parameters with default values.\\n\"}, \"upload_default_templates\": {\"name\": \"upload_default_templates\", \"on-error\": \"upload_to_container_set_status_failed\", \"on-success\": \"create_plan\", \"version\": \"2.0\", \"action\": \"tripleo.templates.upload container=<% $.container %>\", \"type\": \"direct\"}, \"clone_git_repo_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"clone_git_repo_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(clone_git_repo).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"upload_to_container_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"upload_to_container_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(upload_default_templates).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"upload_templates_directory\": {\"name\": \"upload_templates_directory\", \"on-error\": \"upload_templates_directory_set_status_failed\", \"on-success\": \"create_plan\", \"version\": \"2.0\", 
\"action\": \"tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %>\", \"on-complete\": \"cleanup_temporary_files\", \"type\": \"direct\"}, \"container_images_prepare_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"container_images_prepare_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(container_images_prepare).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"create_container_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"create_container_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(create_container).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"set_status_success\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"set_status_success\", \"publish\": {\"status\": \"SUCCESS\", \"message\": \"Plan created.\"}, \"on-success\": \"notify_zaqar\"}, \"verify_container_doesnt_exist\": {\"name\": \"verify_container_doesnt_exist\", \"on-error\": \"create_container\", \"on-success\": \"notify_zaqar\", \"publish\": {\"status\": \"FAILED\", \"message\": \"Unable to create plan. 
The Swift container already exists\"}, \"version\": \"2.0\", \"action\": \"swift.head_container container=<% $.container %>\", \"type\": \"direct\"}, \"notify_zaqar\": {\"retry\": \"count=5 delay=1\", \"name\": \"notify_zaqar\", \"on-success\": [{\"fail\": \"<% $.get('status') = \\\"FAILED\\\" %>\"}], \"version\": \"2.0\", \"action\": \"zaqar.queue_post\", \"input\": {\"queue_name\": \"<% $.queue_name %>\", \"messages\": {\"body\": {\"type\": \"tripleo.plan_management.v1.create_deployment_plan\", \"payload\": {\"status\": \"<% $.status %>\", \"message\": \"<% $.get('message', '') %>\", \"execution\": \"<% execution() %>\"}}}}, \"type\": \"direct\"}, \"cleanup_temporary_files\": {\"action\": \"tripleo.git.clean container=<% $.container %>\", \"version\": \"2.0\", \"type\": \"direct\", \"name\": \"cleanup_temporary_files\"}, \"create_container\": {\"name\": \"create_container\", \"on-error\": \"create_container_set_status_failed\", \"on-success\": \"templates_source_check\", \"version\": \"2.0\", \"action\": \"tripleo.plan.create_container container=<% $.container %>\", \"type\": \"direct\"}, \"ensure_passwords_exist_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"ensure_passwords_exist_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(ensure_passwords_exist).result %>\"}, \"on-success\": \"notify_zaqar\"}}, \"description\": \"This workflow provides the capability to create a deployment plan using the default heat templates provided in a standard TripleO undercloud deployment, heat templates contained in an external git repository, or a swift container that already contains templates.\\n\", \"tags\": [\"tripleo-common-managed\"], \"version\": \"2.0\", \"input\": [\"container\", {\"source_url\": null}, {\"queue_name\": \"tripleo\"}, {\"generate_passwords\": true}, {\"use_default_templates\": false}], \"name\": \"create_deployment_plan\"}}}}}}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": 
"62fe26ef-36b6-4b29-ae9e-47f56efad77b", "description": ""}, {"state_info": null, "created_at": "2018-06-26 04:27:44", "accepted": false, "name": "zaqar.queue_post", "tags": null, "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": "1a2b25c9-3deb-40dc-90de-d8d2dd80e1ba", "updated_at": "2018-06-26 04:27:46", "state": "ERROR", "workflow_namespace": "", "task_name": "notify_zaqar", "input": "{\"queue_name\": \"2ccee8b6-1fb0-4139-b49a-fcd230d8af95\", \"messages\": {\"body\": {\"type\": \"tripleo.plan_management.v1.create_deployment_plan\", \"payload\": {\"status\": \"SUCCESS\", \"message\": \"Plan created.\", \"execution\": {\"name\": \"tripleo.plan_management.v1.create_deployment_plan\", \"created_at\": \"2018-06-26 04:26:49\", \"updated_at\": \"2018-06-26 04:26:49\", \"id\": \"3d2a008d-d243-470f-917b-9c07c49e7c3f\", \"params\": {\"namespace\": \"\"}, \"input\": {\"generate_passwords\": true, \"use_default_templates\": true, \"queue_name\": \"2ccee8b6-1fb0-4139-b49a-fcd230d8af95\", \"container\": \"overcloud\", \"source_url\": null}, \"spec\": {\"tasks\": {\"templates_source_check\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"templates_source_check\", \"on-success\": [{\"upload_default_templates\": \"<% $.use_default_templates = true %>\"}, {\"clone_git_repo\": \"<% $.source_url != null %>\"}]}, \"create_plan\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"create_plan\", \"on-success\": [{\"ensure_passwords_exist\": \"<% $.generate_passwords = true %>\"}, {\"add_root_stack_name\": \"<% $.generate_passwords != true %>\"}]}, \"upload_templates_directory_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"upload_templates_directory_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(upload_templates_directory).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"ensure_passwords_exist\": {\"name\": \"ensure_passwords_exist\", \"on-error\": 
\"ensure_passwords_exist_set_status_failed\", \"on-success\": \"add_root_stack_name\", \"version\": \"2.0\", \"action\": \"tripleo.parameters.generate_passwords container=<% $.container %>\", \"type\": \"direct\"}, \"process_templates_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"process_templates_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(process_templates).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"add_root_stack_name\": {\"name\": \"add_root_stack_name\", \"on-error\": \"notify_zaqar\", \"on-success\": \"container_images_prepare\", \"publish-on-error\": {\"status\": \"FAILED\", \"message\": \"<% task().result %>\"}, \"version\": \"2.0\", \"action\": \"tripleo.parameters.update\", \"input\": {\"container\": \"<% $.container %>\", \"parameters\": {\"RootStackName\": \"<% $.container %>\"}}, \"type\": \"direct\"}, \"process_templates\": {\"name\": \"process_templates\", \"on-error\": \"process_templates_set_status_failed\", \"on-success\": \"set_status_success\", \"version\": \"2.0\", \"action\": \"tripleo.templates.process container=<% $.container %>\", \"type\": \"direct\"}, \"container_required_check\": {\"version\": \"2.0\", \"type\": \"direct\", \"description\": \"If using the default templates or importing templates from a git repository, a new container needs to be created. 
If using an existing container containing templates, skip straight to create_plan.\\n\", \"name\": \"container_required_check\", \"on-success\": [{\"verify_container_doesnt_exist\": \"<% $.use_default_templates or $.source_url %>\"}, {\"create_plan\": \"<% $.use_default_templates = false and $.source_url = null %>\"}]}, \"clone_git_repo\": {\"name\": \"clone_git_repo\", \"on-error\": \"clone_git_repo_set_status_failed\", \"on-success\": \"upload_templates_directory\", \"version\": \"2.0\", \"action\": \"tripleo.git.clone container=<% $.container %> url=<% $.source_url %>\", \"type\": \"direct\"}, \"container_images_prepare\": {\"name\": \"container_images_prepare\", \"on-error\": \"container_images_prepare_set_status_failed\", \"on-success\": \"process_templates\", \"version\": \"2.0\", \"action\": \"tripleo.container_images.prepare container=<% $.container %>\", \"type\": \"direct\", \"description\": \"Populate all container image parameters with default values.\\n\"}, \"upload_default_templates\": {\"name\": \"upload_default_templates\", \"on-error\": \"upload_to_container_set_status_failed\", \"on-success\": \"create_plan\", \"version\": \"2.0\", \"action\": \"tripleo.templates.upload container=<% $.container %>\", \"type\": \"direct\"}, \"clone_git_repo_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"clone_git_repo_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(clone_git_repo).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"upload_to_container_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"upload_to_container_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(upload_default_templates).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"upload_templates_directory\": {\"name\": \"upload_templates_directory\", \"on-error\": \"upload_templates_directory_set_status_failed\", \"on-success\": \"create_plan\", \"version\": \"2.0\", 
\"action\": \"tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %>\", \"on-complete\": \"cleanup_temporary_files\", \"type\": \"direct\"}, \"container_images_prepare_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"container_images_prepare_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(container_images_prepare).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"create_container_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"create_container_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(create_container).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"set_status_success\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"set_status_success\", \"publish\": {\"status\": \"SUCCESS\", \"message\": \"Plan created.\"}, \"on-success\": \"notify_zaqar\"}, \"verify_container_doesnt_exist\": {\"name\": \"verify_container_doesnt_exist\", \"on-error\": \"create_container\", \"on-success\": \"notify_zaqar\", \"publish\": {\"status\": \"FAILED\", \"message\": \"Unable to create plan. 
The Swift container already exists\"}, \"version\": \"2.0\", \"action\": \"swift.head_container container=<% $.container %>\", \"type\": \"direct\"}, \"notify_zaqar\": {\"retry\": \"count=5 delay=1\", \"name\": \"notify_zaqar\", \"on-success\": [{\"fail\": \"<% $.get('status') = \\\"FAILED\\\" %>\"}], \"version\": \"2.0\", \"action\": \"zaqar.queue_post\", \"input\": {\"queue_name\": \"<% $.queue_name %>\", \"messages\": {\"body\": {\"type\": \"tripleo.plan_management.v1.create_deployment_plan\", \"payload\": {\"status\": \"<% $.status %>\", \"message\": \"<% $.get('message', '') %>\", \"execution\": \"<% execution() %>\"}}}}, \"type\": \"direct\"}, \"cleanup_temporary_files\": {\"action\": \"tripleo.git.clean container=<% $.container %>\", \"version\": \"2.0\", \"type\": \"direct\", \"name\": \"cleanup_temporary_files\"}, \"create_container\": {\"name\": \"create_container\", \"on-error\": \"create_container_set_status_failed\", \"on-success\": \"templates_source_check\", \"version\": \"2.0\", \"action\": \"tripleo.plan.create_container container=<% $.container %>\", \"type\": \"direct\"}, \"ensure_passwords_exist_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"ensure_passwords_exist_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(ensure_passwords_exist).result %>\"}, \"on-success\": \"notify_zaqar\"}}, \"description\": \"This workflow provides the capability to create a deployment plan using the default heat templates provided in a standard TripleO undercloud deployment, heat templates contained in an external git repository, or a swift container that already contains templates.\\n\", \"tags\": [\"tripleo-common-managed\"], \"version\": \"2.0\", \"input\": [\"container\", {\"source_url\": null}, {\"queue_name\": \"tripleo\"}, {\"generate_passwords\": true}, {\"use_default_templates\": false}], \"name\": \"create_deployment_plan\"}}}}}}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": 
"44598c23-bf24-41d7-aa2a-a4e8dcc4026e", "description": ""}, {"state_info": null, "created_at": "2018-06-26 04:27:47", "accepted": false, "name": "zaqar.queue_post", "tags": null, "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": "1a2b25c9-3deb-40dc-90de-d8d2dd80e1ba", "updated_at": "2018-06-26 04:27:48", "state": "ERROR", "workflow_namespace": "", "task_name": "notify_zaqar", "input": "{\"queue_name\": \"2ccee8b6-1fb0-4139-b49a-fcd230d8af95\", \"messages\": {\"body\": {\"type\": \"tripleo.plan_management.v1.create_deployment_plan\", \"payload\": {\"status\": \"SUCCESS\", \"message\": \"Plan created.\", \"execution\": {\"name\": \"tripleo.plan_management.v1.create_deployment_plan\", \"created_at\": \"2018-06-26 04:26:49\", \"updated_at\": \"2018-06-26 04:26:49\", \"id\": \"3d2a008d-d243-470f-917b-9c07c49e7c3f\", \"params\": {\"namespace\": \"\"}, \"input\": {\"generate_passwords\": true, \"use_default_templates\": true, \"queue_name\": \"2ccee8b6-1fb0-4139-b49a-fcd230d8af95\", \"container\": \"overcloud\", \"source_url\": null}, \"spec\": {\"tasks\": {\"templates_source_check\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"templates_source_check\", \"on-success\": [{\"upload_default_templates\": \"<% $.use_default_templates = true %>\"}, {\"clone_git_repo\": \"<% $.source_url != null %>\"}]}, \"create_plan\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"create_plan\", \"on-success\": [{\"ensure_passwords_exist\": \"<% $.generate_passwords = true %>\"}, {\"add_root_stack_name\": \"<% $.generate_passwords != true %>\"}]}, \"upload_templates_directory_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"upload_templates_directory_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(upload_templates_directory).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"ensure_passwords_exist\": {\"name\": \"ensure_passwords_exist\", \"on-error\": 
\"ensure_passwords_exist_set_status_failed\", \"on-success\": \"add_root_stack_name\", \"version\": \"2.0\", \"action\": \"tripleo.parameters.generate_passwords container=<% $.container %>\", \"type\": \"direct\"}, \"process_templates_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"process_templates_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(process_templates).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"add_root_stack_name\": {\"name\": \"add_root_stack_name\", \"on-error\": \"notify_zaqar\", \"on-success\": \"container_images_prepare\", \"publish-on-error\": {\"status\": \"FAILED\", \"message\": \"<% task().result %>\"}, \"version\": \"2.0\", \"action\": \"tripleo.parameters.update\", \"input\": {\"container\": \"<% $.container %>\", \"parameters\": {\"RootStackName\": \"<% $.container %>\"}}, \"type\": \"direct\"}, \"process_templates\": {\"name\": \"process_templates\", \"on-error\": \"process_templates_set_status_failed\", \"on-success\": \"set_status_success\", \"version\": \"2.0\", \"action\": \"tripleo.templates.process container=<% $.container %>\", \"type\": \"direct\"}, \"container_required_check\": {\"version\": \"2.0\", \"type\": \"direct\", \"description\": \"If using the default templates or importing templates from a git repository, a new container needs to be created. 
If using an existing container containing templates, skip straight to create_plan.\\n\", \"name\": \"container_required_check\", \"on-success\": [{\"verify_container_doesnt_exist\": \"<% $.use_default_templates or $.source_url %>\"}, {\"create_plan\": \"<% $.use_default_templates = false and $.source_url = null %>\"}]}, \"clone_git_repo\": {\"name\": \"clone_git_repo\", \"on-error\": \"clone_git_repo_set_status_failed\", \"on-success\": \"upload_templates_directory\", \"version\": \"2.0\", \"action\": \"tripleo.git.clone container=<% $.container %> url=<% $.source_url %>\", \"type\": \"direct\"}, \"container_images_prepare\": {\"name\": \"container_images_prepare\", \"on-error\": \"container_images_prepare_set_status_failed\", \"on-success\": \"process_templates\", \"version\": \"2.0\", \"action\": \"tripleo.container_images.prepare container=<% $.container %>\", \"type\": \"direct\", \"description\": \"Populate all container image parameters with default values.\\n\"}, \"upload_default_templates\": {\"name\": \"upload_default_templates\", \"on-error\": \"upload_to_container_set_status_failed\", \"on-success\": \"create_plan\", \"version\": \"2.0\", \"action\": \"tripleo.templates.upload container=<% $.container %>\", \"type\": \"direct\"}, \"clone_git_repo_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"clone_git_repo_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(clone_git_repo).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"upload_to_container_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"upload_to_container_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(upload_default_templates).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"upload_templates_directory\": {\"name\": \"upload_templates_directory\", \"on-error\": \"upload_templates_directory_set_status_failed\", \"on-success\": \"create_plan\", \"version\": \"2.0\", 
\"action\": \"tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %>\", \"on-complete\": \"cleanup_temporary_files\", \"type\": \"direct\"}, \"container_images_prepare_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"container_images_prepare_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(container_images_prepare).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"create_container_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"create_container_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(create_container).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"set_status_success\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"set_status_success\", \"publish\": {\"status\": \"SUCCESS\", \"message\": \"Plan created.\"}, \"on-success\": \"notify_zaqar\"}, \"verify_container_doesnt_exist\": {\"name\": \"verify_container_doesnt_exist\", \"on-error\": \"create_container\", \"on-success\": \"notify_zaqar\", \"publish\": {\"status\": \"FAILED\", \"message\": \"Unable to create plan. 
The Swift container already exists\"}, \"version\": \"2.0\", \"action\": \"swift.head_container container=<% $.container %>\", \"type\": \"direct\"}, \"notify_zaqar\": {\"retry\": \"count=5 delay=1\", \"name\": \"notify_zaqar\", \"on-success\": [{\"fail\": \"<% $.get('status') = \\\"FAILED\\\" %>\"}], \"version\": \"2.0\", \"action\": \"zaqar.queue_post\", \"input\": {\"queue_name\": \"<% $.queue_name %>\", \"messages\": {\"body\": {\"type\": \"tripleo.plan_management.v1.create_deployment_plan\", \"payload\": {\"status\": \"<% $.status %>\", \"message\": \"<% $.get('message', '') %>\", \"execution\": \"<% execution() %>\"}}}}, \"type\": \"direct\"}, \"cleanup_temporary_files\": {\"action\": \"tripleo.git.clean container=<% $.container %>\", \"version\": \"2.0\", \"type\": \"direct\", \"name\": \"cleanup_temporary_files\"}, \"create_container\": {\"name\": \"create_container\", \"on-error\": \"create_container_set_status_failed\", \"on-success\": \"templates_source_check\", \"version\": \"2.0\", \"action\": \"tripleo.plan.create_container container=<% $.container %>\", \"type\": \"direct\"}, \"ensure_passwords_exist_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"ensure_passwords_exist_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(ensure_passwords_exist).result %>\"}, \"on-success\": \"notify_zaqar\"}}, \"description\": \"This workflow provides the capability to create a deployment plan using the default heat templates provided in a standard TripleO undercloud deployment, heat templates contained in an external git repository, or a swift container that already contains templates.\\n\", \"tags\": [\"tripleo-common-managed\"], \"version\": \"2.0\", \"input\": [\"container\", {\"source_url\": null}, {\"queue_name\": \"tripleo\"}, {\"generate_passwords\": true}, {\"use_default_templates\": false}], \"name\": \"create_deployment_plan\"}}}}}}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": 
"b844ff2c-b34e-4a84-915d-3592a0f7976b", "description": ""}, {"state_info": null, "created_at": "2018-06-26 04:27:49", "accepted": true, "name": "zaqar.queue_post", "tags": null, "workflow_name": "tripleo.plan_management.v1.create_deployment_plan", "task_execution_id": "1a2b25c9-3deb-40dc-90de-d8d2dd80e1ba", "updated_at": "2018-06-26 04:27:50", "state": "ERROR", "workflow_namespace": "", "task_name": "notify_zaqar", "input": "{\"queue_name\": \"2ccee8b6-1fb0-4139-b49a-fcd230d8af95\", \"messages\": {\"body\": {\"type\": \"tripleo.plan_management.v1.create_deployment_plan\", \"payload\": {\"status\": \"SUCCESS\", \"message\": \"Plan created.\", \"execution\": {\"name\": \"tripleo.plan_management.v1.create_deployment_plan\", \"created_at\": \"2018-06-26 04:26:49\", \"updated_at\": \"2018-06-26 04:26:49\", \"id\": \"3d2a008d-d243-470f-917b-9c07c49e7c3f\", \"params\": {\"namespace\": \"\"}, \"input\": {\"generate_passwords\": true, \"use_default_templates\": true, \"queue_name\": \"2ccee8b6-1fb0-4139-b49a-fcd230d8af95\", \"container\": \"overcloud\", \"source_url\": null}, \"spec\": {\"tasks\": {\"templates_source_check\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"templates_source_check\", \"on-success\": [{\"upload_default_templates\": \"<% $.use_default_templates = true %>\"}, {\"clone_git_repo\": \"<% $.source_url != null %>\"}]}, \"create_plan\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"create_plan\", \"on-success\": [{\"ensure_passwords_exist\": \"<% $.generate_passwords = true %>\"}, {\"add_root_stack_name\": \"<% $.generate_passwords != true %>\"}]}, \"upload_templates_directory_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"upload_templates_directory_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(upload_templates_directory).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"ensure_passwords_exist\": {\"name\": \"ensure_passwords_exist\", \"on-error\": 
\"ensure_passwords_exist_set_status_failed\", \"on-success\": \"add_root_stack_name\", \"version\": \"2.0\", \"action\": \"tripleo.parameters.generate_passwords container=<% $.container %>\", \"type\": \"direct\"}, \"process_templates_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"process_templates_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(process_templates).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"add_root_stack_name\": {\"name\": \"add_root_stack_name\", \"on-error\": \"notify_zaqar\", \"on-success\": \"container_images_prepare\", \"publish-on-error\": {\"status\": \"FAILED\", \"message\": \"<% task().result %>\"}, \"version\": \"2.0\", \"action\": \"tripleo.parameters.update\", \"input\": {\"container\": \"<% $.container %>\", \"parameters\": {\"RootStackName\": \"<% $.container %>\"}}, \"type\": \"direct\"}, \"process_templates\": {\"name\": \"process_templates\", \"on-error\": \"process_templates_set_status_failed\", \"on-success\": \"set_status_success\", \"version\": \"2.0\", \"action\": \"tripleo.templates.process container=<% $.container %>\", \"type\": \"direct\"}, \"container_required_check\": {\"version\": \"2.0\", \"type\": \"direct\", \"description\": \"If using the default templates or importing templates from a git repository, a new container needs to be created. 
If using an existing container containing templates, skip straight to create_plan.\\n\", \"name\": \"container_required_check\", \"on-success\": [{\"verify_container_doesnt_exist\": \"<% $.use_default_templates or $.source_url %>\"}, {\"create_plan\": \"<% $.use_default_templates = false and $.source_url = null %>\"}]}, \"clone_git_repo\": {\"name\": \"clone_git_repo\", \"on-error\": \"clone_git_repo_set_status_failed\", \"on-success\": \"upload_templates_directory\", \"version\": \"2.0\", \"action\": \"tripleo.git.clone container=<% $.container %> url=<% $.source_url %>\", \"type\": \"direct\"}, \"container_images_prepare\": {\"name\": \"container_images_prepare\", \"on-error\": \"container_images_prepare_set_status_failed\", \"on-success\": \"process_templates\", \"version\": \"2.0\", \"action\": \"tripleo.container_images.prepare container=<% $.container %>\", \"type\": \"direct\", \"description\": \"Populate all container image parameters with default values.\\n\"}, \"upload_default_templates\": {\"name\": \"upload_default_templates\", \"on-error\": \"upload_to_container_set_status_failed\", \"on-success\": \"create_plan\", \"version\": \"2.0\", \"action\": \"tripleo.templates.upload container=<% $.container %>\", \"type\": \"direct\"}, \"clone_git_repo_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"clone_git_repo_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(clone_git_repo).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"upload_to_container_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"upload_to_container_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(upload_default_templates).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"upload_templates_directory\": {\"name\": \"upload_templates_directory\", \"on-error\": \"upload_templates_directory_set_status_failed\", \"on-success\": \"create_plan\", \"version\": \"2.0\", 
\"action\": \"tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %>\", \"on-complete\": \"cleanup_temporary_files\", \"type\": \"direct\"}, \"container_images_prepare_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"container_images_prepare_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(container_images_prepare).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"create_container_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"create_container_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(create_container).result %>\"}, \"on-success\": \"notify_zaqar\"}, \"set_status_success\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"set_status_success\", \"publish\": {\"status\": \"SUCCESS\", \"message\": \"Plan created.\"}, \"on-success\": \"notify_zaqar\"}, \"verify_container_doesnt_exist\": {\"name\": \"verify_container_doesnt_exist\", \"on-error\": \"create_container\", \"on-success\": \"notify_zaqar\", \"publish\": {\"status\": \"FAILED\", \"message\": \"Unable to create plan. 
The Swift container already exists\"}, \"version\": \"2.0\", \"action\": \"swift.head_container container=<% $.container %>\", \"type\": \"direct\"}, \"notify_zaqar\": {\"retry\": \"count=5 delay=1\", \"name\": \"notify_zaqar\", \"on-success\": [{\"fail\": \"<% $.get('status') = \\\"FAILED\\\" %>\"}], \"version\": \"2.0\", \"action\": \"zaqar.queue_post\", \"input\": {\"queue_name\": \"<% $.queue_name %>\", \"messages\": {\"body\": {\"type\": \"tripleo.plan_management.v1.create_deployment_plan\", \"payload\": {\"status\": \"<% $.status %>\", \"message\": \"<% $.get('message', '') %>\", \"execution\": \"<% execution() %>\"}}}}, \"type\": \"direct\"}, \"cleanup_temporary_files\": {\"action\": \"tripleo.git.clean container=<% $.container %>\", \"version\": \"2.0\", \"type\": \"direct\", \"name\": \"cleanup_temporary_files\"}, \"create_container\": {\"name\": \"create_container\", \"on-error\": \"create_container_set_status_failed\", \"on-success\": \"templates_source_check\", \"version\": \"2.0\", \"action\": \"tripleo.plan.create_container container=<% $.container %>\", \"type\": \"direct\"}, \"ensure_passwords_exist_set_status_failed\": {\"version\": \"2.0\", \"type\": \"direct\", \"name\": \"ensure_passwords_exist_set_status_failed\", \"publish\": {\"status\": \"FAILED\", \"message\": \"<% task(ensure_passwords_exist).result %>\"}, \"on-success\": \"notify_zaqar\"}}, \"description\": \"This workflow provides the capability to create a deployment plan using the default heat templates provided in a standard TripleO undercloud deployment, heat templates contained in an external git repository, or a swift container that already contains templates.\\n\", \"tags\": [\"tripleo-common-managed\"], \"version\": \"2.0\", \"input\": [\"container\", {\"source_url\": null}, {\"queue_name\": \"tripleo\"}, {\"generate_passwords\": true}, {\"use_default_templates\": false}], \"name\": \"create_deployment_plan\"}}}}}}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": 
"cd52422f-5aec-4880-9286-a3885fbbab1b", "description": ""}]} > >2018-06-26 09:57:54,464 DEBUG: HTTP GET http://192.0.3.1:8989/v2/action_executions 200 >2018-06-26 09:57:54,465 ERROR: ERROR error creating the default Deployment Plan overcloud Check the create_default_deployment_plan execution in Mistral with openstack workflow execution list Mistral execution ID: 3d2a008d-d243-470f-917b-9c07c49e7c3f >2018-06-26 09:57:54,465 DEBUG: An exception occurred >Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 2336, in install > _post_config(instack_env, upgrade) > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 2028, in _post_config > _post_config_mistral(instack_env, mistral, swift) > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 1964, in _post_config_mistral > _create_default_plan(mistral, plans) > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 1906, in _create_default_plan > fail_on_error=True) > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 1843, in _wait_for_mistral_execution > raise RuntimeError(error_message) >RuntimeError: ERROR error creating the default Deployment Plan overcloud Check the create_default_deployment_plan execution in Mistral with openstack workflow execution list Mistral execution ID: 3d2a008d-d243-470f-917b-9c07c49e7c3f >2018-06-26 09:57:54,465 ERROR: >############################################################################# >Undercloud install failed. > >Reason: ERROR error creating the default Deployment Plan overcloud Check the create_default_deployment_plan execution in Mistral with openstack workflow execution list Mistral execution ID: 3d2a008d-d243-470f-917b-9c07c49e7c3f > >See the previous output for details about what went wrong. The full install >log can be found at /home/sudheer/.instack/install-undercloud.log. 
> >############################################################################# > >2018-06-26 11:09:08,233 INFO: Logging to /home/sudheer/.instack/install-undercloud.log >2018-06-26 11:09:08,281 INFO: Checking for a FQDN hostname... >2018-06-26 11:09:08,336 INFO: Static hostname detected as facebook.local.com >2018-06-26 11:09:08,354 INFO: Transient hostname detected as facebook.local.com >2018-06-26 11:09:08,356 WARNING: Option "undercloud_public_vip" from group "DEFAULT" is deprecated. Use option "undercloud_public_host" from group "DEFAULT". >2018-06-26 11:09:08,356 WARNING: Option "undercloud_admin_vip" from group "DEFAULT" is deprecated. Use option "undercloud_admin_host" from group "DEFAULT". >2018-06-26 11:09:08,356 WARNING: Option "masquerade_network" from group "DEFAULT" is deprecated for removal (With support for routed networks, masquerading of the provisioning networks is moved to a boolean option for each subnet.). Its value may be silently ignored in the future. >2018-06-26 11:09:08,357 WARNING: Option "network_cidr" from group "DEFAULT" is deprecated. Use option "cidr" from group "ctlplane-subnet". >2018-06-26 11:09:08,357 WARNING: Option "dhcp_start" from group "DEFAULT" is deprecated. Use option "dhcp_start" from group "ctlplane-subnet". >2018-06-26 11:09:08,357 WARNING: Option "dhcp_end" from group "DEFAULT" is deprecated. Use option "dhcp_end" from group "ctlplane-subnet". >2018-06-26 11:09:08,357 WARNING: Option "inspection_iprange" from group "DEFAULT" is deprecated. Use option "inspection_iprange" from group "ctlplane-subnet". >2018-06-26 11:09:08,357 WARNING: Option "network_gateway" from group "DEFAULT" is deprecated. Use option "gateway" from group "ctlplane-subnet". 
>2018-06-26 11:09:08,397 INFO: Running yum clean all >2018-06-26 11:09:08,574 INFO: Loaded plugins: langpacks, product-id, search-disabled-repos, subscription- >2018-06-26 11:09:08,575 INFO: : manager >2018-06-26 11:09:12,943 INFO: Cleaning repos: rhel-7-server-extras-rpms rhel-7-server-openstack-beta-rpms >2018-06-26 11:09:12,943 INFO: : rhel-7-server-rh-common-rpms rhel-7-server-rpms >2018-06-26 11:09:12,943 INFO: : rhel-ha-for-rhel-7-server-rpms >2018-06-26 11:09:12,943 INFO: Cleaning up everything >2018-06-26 11:09:12,943 INFO: Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos >2018-06-26 11:09:13,034 INFO: yum-clean-all completed successfully >2018-06-26 11:09:13,034 INFO: Running yum update >2018-06-26 11:09:13,215 INFO: Loaded plugins: langpacks, product-id, search-disabled-repos, subscription- >2018-06-26 11:09:13,215 INFO: : manager >2018-06-26 11:09:37,582 INFO: No packages marked for update >2018-06-26 11:09:37,649 INFO: yum-update completed successfully >2018-06-26 11:09:37,680 INFO: Running instack >2018-06-26 11:09:37,839 INFO: INFO: 2018-06-26 11:09:37,839 -- Starting run of instack >2018-06-26 11:09:37,846 INFO: INFO: 2018-06-26 11:09:37,845 -- Using json file: /usr/share/instack-undercloud/json-files/rhel-7-undercloud-packages.json >2018-06-26 11:09:37,846 INFO: INFO: 2018-06-26 11:09:37,846 -- Running Installation >2018-06-26 11:09:37,846 INFO: INFO: 2018-06-26 11:09:37,846 -- Initialized with elements path: /usr/share/tripleo-puppet-elements /usr/share/instack-undercloud /usr/share/tripleo-image-elements /usr/share/diskimage-builder/elements >2018-06-26 11:09:37,856 INFO: WARNING: 2018-06-26 11:09:37,856 -- expand_dependencies() deprecated, use get_elements >2018-06-26 11:09:37,872 INFO: INFO: 2018-06-26 11:09:37,871 -- List of all elements and dependencies: undercloud-install dib-python source-repositories install-types puppet-modules install-bin pip-manifest puppet-stack-config 
os-refresh-config element-manifest manifests pip-and-virtualenv cache-url pkg-map enable-packages-install puppet os-apply-config hiera package-installs >2018-06-26 11:09:37,872 INFO: INFO: 2018-06-26 11:09:37,871 -- Excluding element pip-and-virtualenv >2018-06-26 11:09:37,872 INFO: INFO: 2018-06-26 11:09:37,872 -- Excluding element pip-manifest >2018-06-26 11:09:37,872 INFO: INFO: 2018-06-26 11:09:37,872 -- Excluding element package-installs >2018-06-26 11:09:37,872 INFO: INFO: 2018-06-26 11:09:37,872 -- Excluding element pkg-map >2018-06-26 11:09:37,872 INFO: INFO: 2018-06-26 11:09:37,872 -- Excluding element puppet >2018-06-26 11:09:37,872 INFO: INFO: 2018-06-26 11:09:37,872 -- Excluding element cache-url >2018-06-26 11:09:37,873 INFO: INFO: 2018-06-26 11:09:37,872 -- Excluding element dib-python >2018-06-26 11:09:37,873 INFO: INFO: 2018-06-26 11:09:37,872 -- Excluding element install-bin >2018-06-26 11:09:37,873 INFO: INFO: 2018-06-26 11:09:37,872 -- List of all elements and dependencies after excludes: undercloud-install source-repositories install-types puppet-modules puppet-stack-config os-refresh-config element-manifest manifests enable-packages-install os-apply-config hiera >2018-06-26 11:09:38,011 INFO: INFO: 2018-06-26 11:09:38,011 -- Running hook extra-data >2018-06-26 11:09:38,011 INFO: INFO: 2018-06-26 11:09:38,011 -- ############### Begin stdout/stderr logging ############### >2018-06-26 11:09:38,022 INFO: dib-run-parts Sourcing environment file /tmp/tmpDqgrg7/extra-data.d/../environment.d/00-dib-v2-env >2018-06-26 11:09:38,024 INFO: + source /tmp/tmpDqgrg7/extra-data.d/../environment.d/00-dib-v2-env >2018-06-26 11:09:38,024 INFO: ++ export 'IMAGE_ELEMENT=undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs' >2018-06-26 
11:09:38,025 INFO: ++ IMAGE_ELEMENT='undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs' >2018-06-26 11:09:38,025 INFO: ++ export 'IMAGE_ELEMENT_YAML={cache-url: /usr/share/diskimage-builder/elements/cache-url, dib-python: /usr/share/diskimage-builder/elements/dib-python, >2018-06-26 11:09:38,025 INFO: element-manifest: /usr/share/diskimage-builder/elements/element-manifest, enable-packages-install: /usr/share/tripleo-image-elements/enable-packages-install, >2018-06-26 11:09:38,025 INFO: hiera: /usr/share/tripleo-puppet-elements/hiera, install-bin: /usr/share/diskimage-builder/elements/install-bin, >2018-06-26 11:09:38,025 INFO: install-types: /usr/share/diskimage-builder/elements/install-types, manifests: /usr/share/diskimage-builder/elements/manifests, >2018-06-26 11:09:38,026 INFO: os-apply-config: /usr/share/tripleo-image-elements/os-apply-config, os-refresh-config: /usr/share/tripleo-image-elements/os-refresh-config, >2018-06-26 11:09:38,026 INFO: package-installs: /usr/share/diskimage-builder/elements/package-installs, pip-and-virtualenv: /usr/share/diskimage-builder/elements/pip-and-virtualenv, >2018-06-26 11:09:38,026 INFO: pip-manifest: /usr/share/tripleo-image-elements/pip-manifest, pkg-map: /usr/share/diskimage-builder/elements/pkg-map, >2018-06-26 11:09:38,026 INFO: puppet: /usr/share/tripleo-puppet-elements/puppet, puppet-modules: /usr/share/tripleo-puppet-elements/puppet-modules, >2018-06-26 11:09:38,026 INFO: puppet-stack-config: /usr/share/instack-undercloud/puppet-stack-config, source-repositories: /usr/share/diskimage-builder/elements/source-repositories, >2018-06-26 11:09:38,026 INFO: undercloud-install: /usr/share/instack-undercloud/undercloud-install} >2018-06-26 11:09:38,026 INFO: ' >2018-06-26 11:09:38,027 INFO: ++ 
IMAGE_ELEMENT_YAML='{cache-url: /usr/share/diskimage-builder/elements/cache-url, dib-python: /usr/share/diskimage-builder/elements/dib-python, >2018-06-26 11:09:38,027 INFO: element-manifest: /usr/share/diskimage-builder/elements/element-manifest, enable-packages-install: /usr/share/tripleo-image-elements/enable-packages-install, >2018-06-26 11:09:38,027 INFO: hiera: /usr/share/tripleo-puppet-elements/hiera, install-bin: /usr/share/diskimage-builder/elements/install-bin, >2018-06-26 11:09:38,027 INFO: install-types: /usr/share/diskimage-builder/elements/install-types, manifests: /usr/share/diskimage-builder/elements/manifests, >2018-06-26 11:09:38,027 INFO: os-apply-config: /usr/share/tripleo-image-elements/os-apply-config, os-refresh-config: /usr/share/tripleo-image-elements/os-refresh-config, >2018-06-26 11:09:38,027 INFO: package-installs: /usr/share/diskimage-builder/elements/package-installs, pip-and-virtualenv: /usr/share/diskimage-builder/elements/pip-and-virtualenv, >2018-06-26 11:09:38,028 INFO: pip-manifest: /usr/share/tripleo-image-elements/pip-manifest, pkg-map: /usr/share/diskimage-builder/elements/pkg-map, >2018-06-26 11:09:38,028 INFO: puppet: /usr/share/tripleo-puppet-elements/puppet, puppet-modules: /usr/share/tripleo-puppet-elements/puppet-modules, >2018-06-26 11:09:38,028 INFO: puppet-stack-config: /usr/share/instack-undercloud/puppet-stack-config, source-repositories: /usr/share/diskimage-builder/elements/source-repositories, >2018-06-26 11:09:38,028 INFO: undercloud-install: /usr/share/instack-undercloud/undercloud-install} >2018-06-26 11:09:38,028 INFO: ' >2018-06-26 11:09:38,028 INFO: ++ export -f get_image_element_array >2018-06-26 11:09:38,028 INFO: + set +o xtrace >2018-06-26 11:09:38,029 INFO: dib-run-parts Sourcing environment file /tmp/tmpDqgrg7/extra-data.d/../environment.d/01-export-install-types.bash >2018-06-26 11:09:38,029 INFO: + source /tmp/tmpDqgrg7/extra-data.d/../environment.d/01-export-install-types.bash >2018-06-26 
11:09:38,029 INFO: ++ export DIB_DEFAULT_INSTALLTYPE=package >2018-06-26 11:09:38,029 INFO: ++ DIB_DEFAULT_INSTALLTYPE=package >2018-06-26 11:09:38,029 INFO: + set +o xtrace >2018-06-26 11:09:38,029 INFO: dib-run-parts Sourcing environment file /tmp/tmpDqgrg7/extra-data.d/../environment.d/01-puppet-module-pins.sh >2018-06-26 11:09:38,029 INFO: + source /tmp/tmpDqgrg7/extra-data.d/../environment.d/01-puppet-module-pins.sh >2018-06-26 11:09:38,029 INFO: ++ export DIB_REPOREF_puppetlabs_ntp=4.2.x >2018-06-26 11:09:38,030 INFO: ++ DIB_REPOREF_puppetlabs_ntp=4.2.x >2018-06-26 11:09:38,030 INFO: + set +o xtrace >2018-06-26 11:09:38,030 INFO: dib-run-parts Sourcing environment file /tmp/tmpDqgrg7/extra-data.d/../environment.d/02-puppet-modules-install-types.sh >2018-06-26 11:09:38,031 INFO: + source /tmp/tmpDqgrg7/extra-data.d/../environment.d/02-puppet-modules-install-types.sh >2018-06-26 11:09:38,031 INFO: ++ DIB_DEFAULT_INSTALLTYPE=package >2018-06-26 11:09:38,031 INFO: ++ DIB_INSTALLTYPE_puppet_modules=package >2018-06-26 11:09:38,031 INFO: ++ '[' package = source ']' >2018-06-26 11:09:38,031 INFO: + set +o xtrace >2018-06-26 11:09:38,031 INFO: dib-run-parts Sourcing environment file /tmp/tmpDqgrg7/extra-data.d/../environment.d/10-os-apply-config-venv-dir.bash >2018-06-26 11:09:38,034 INFO: + source /tmp/tmpDqgrg7/extra-data.d/../environment.d/10-os-apply-config-venv-dir.bash >2018-06-26 11:09:38,034 INFO: ++ '[' -z '' ']' >2018-06-26 11:09:38,034 INFO: ++ export OS_APPLY_CONFIG_VENV_DIR=/opt/stack/venvs/os-apply-config >2018-06-26 11:09:38,034 INFO: ++ OS_APPLY_CONFIG_VENV_DIR=/opt/stack/venvs/os-apply-config >2018-06-26 11:09:38,034 INFO: + set +o xtrace >2018-06-26 11:09:38,034 INFO: dib-run-parts Sourcing environment file /tmp/tmpDqgrg7/extra-data.d/../environment.d/14-manifests >2018-06-26 11:09:38,036 INFO: + source /tmp/tmpDqgrg7/extra-data.d/../environment.d/14-manifests >2018-06-26 11:09:38,036 INFO: ++ export DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests 
>2018-06-26 11:09:38,036 INFO: ++ DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests >2018-06-26 11:09:38,036 INFO: ++ export DIB_MANIFEST_SAVE_DIR=instack.d/ >2018-06-26 11:09:38,037 INFO: ++ DIB_MANIFEST_SAVE_DIR=instack.d/ >2018-06-26 11:09:38,037 INFO: + set +o xtrace >2018-06-26 11:09:38,037 INFO: dib-run-parts Running /tmp/tmpDqgrg7/extra-data.d/10-install-git >2018-06-26 11:09:38,039 INFO: + yum -y install git >2018-06-26 11:09:38,196 INFO: Loaded plugins: langpacks, product-id, search-disabled-repos, subscription- >2018-06-26 11:09:38,196 INFO: : manager >2018-06-26 11:09:42,552 INFO: Package git-1.8.3.1-14.el7_5.x86_64 already installed and latest version >2018-06-26 11:09:42,552 INFO: Nothing to do >2018-06-26 11:09:42,610 INFO: dib-run-parts 10-install-git completed >2018-06-26 11:09:42,610 INFO: dib-run-parts Running /tmp/tmpDqgrg7/extra-data.d/20-manifest-dir >2018-06-26 11:09:42,613 INFO: + set -eu >2018-06-26 11:09:42,613 INFO: + set -o pipefail >2018-06-26 11:09:42,613 INFO: + sudo mkdir -p /tmp/instack.aSFqnu/mnt//etc/dib-manifests >2018-06-26 11:09:42,630 INFO: dib-run-parts 20-manifest-dir completed >2018-06-26 11:09:42,630 INFO: dib-run-parts Running /tmp/tmpDqgrg7/extra-data.d/75-inject-element-manifest >2018-06-26 11:09:42,632 INFO: + set -eu >2018-06-26 11:09:42,632 INFO: + set -o pipefail >2018-06-26 11:09:42,632 INFO: + DIB_ELEMENT_MANIFEST_PATH=/etc/dib-manifests/dib-element-manifest >2018-06-26 11:09:42,633 INFO: ++ dirname /etc/dib-manifests/dib-element-manifest >2018-06-26 11:09:42,634 INFO: + sudo mkdir -p /tmp/instack.aSFqnu/mnt//etc/dib-manifests >2018-06-26 11:09:42,648 INFO: + sudo /bin/bash -c 'echo undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs | tr '\'' '\'' '\''\n'\'' > 
/tmp/instack.aSFqnu/mnt//etc/dib-manifests/dib-element-manifest' >2018-06-26 11:09:42,662 INFO: dib-run-parts 75-inject-element-manifest completed >2018-06-26 11:09:42,662 INFO: dib-run-parts Running /tmp/tmpDqgrg7/extra-data.d/98-source-repositories >2018-06-26 11:09:42,674 INFO: Getting /root/.cache/image-create/source-repositories/repositories_flock: Tue Jun 26 11:09:42 IST 2018 for /tmp/tmpDqgrg7/source-repository-puppet-modules >2018-06-26 11:09:42,678 INFO: (0001 / 0081) >2018-06-26 11:09:42,682 INFO: puppetlabs-apache install type not set to source >2018-06-26 11:09:42,682 INFO: (0002 / 0081) >2018-06-26 11:09:42,686 INFO: puppet-aodh install type not set to source >2018-06-26 11:09:42,687 INFO: (0003 / 0081) >2018-06-26 11:09:42,691 INFO: puppet-auditd install type not set to source >2018-06-26 11:09:42,692 INFO: (0004 / 0081) >2018-06-26 11:09:42,696 INFO: puppet-barbican install type not set to source >2018-06-26 11:09:42,696 INFO: (0005 / 0081) >2018-06-26 11:09:42,700 INFO: puppet-cassandra install type not set to source >2018-06-26 11:09:42,701 INFO: (0006 / 0081) >2018-06-26 11:09:42,705 INFO: puppet-ceph install type not set to source >2018-06-26 11:09:42,705 INFO: (0007 / 0081) >2018-06-26 11:09:42,709 INFO: puppet-ceilometer install type not set to source >2018-06-26 11:09:42,710 INFO: (0008 / 0081) >2018-06-26 11:09:42,714 INFO: puppet-congress install type not set to source >2018-06-26 11:09:42,714 INFO: (0009 / 0081) >2018-06-26 11:09:42,718 INFO: puppet-gnocchi install type not set to source >2018-06-26 11:09:42,719 INFO: (0010 / 0081) >2018-06-26 11:09:42,723 INFO: puppet-certmonger install type not set to source >2018-06-26 11:09:42,724 INFO: (0011 / 0081) >2018-06-26 11:09:42,728 INFO: puppet-cinder install type not set to source >2018-06-26 11:09:42,729 INFO: (0012 / 0081) >2018-06-26 11:09:42,732 INFO: puppet-common install type not set to source >2018-06-26 11:09:42,733 INFO: (0013 / 0081) >2018-06-26 11:09:42,736 INFO: puppet-contrail 
install type not set to source >2018-06-26 11:09:42,737 INFO: (0014 / 0081) >2018-06-26 11:09:42,741 INFO: puppetlabs-concat install type not set to source >2018-06-26 11:09:42,742 INFO: (0015 / 0081) >2018-06-26 11:09:42,746 INFO: puppetlabs-firewall install type not set to source >2018-06-26 11:09:42,746 INFO: (0016 / 0081) >2018-06-26 11:09:42,750 INFO: puppet-glance install type not set to source >2018-06-26 11:09:42,751 INFO: (0017 / 0081) >2018-06-26 11:09:42,754 INFO: puppet-gluster install type not set to source >2018-06-26 11:09:42,755 INFO: (0018 / 0081) >2018-06-26 11:09:42,759 INFO: puppetlabs-haproxy install type not set to source >2018-06-26 11:09:42,760 INFO: (0019 / 0081) >2018-06-26 11:09:42,764 INFO: puppet-heat install type not set to source >2018-06-26 11:09:42,765 INFO: (0020 / 0081) >2018-06-26 11:09:42,768 INFO: puppet-healthcheck install type not set to source >2018-06-26 11:09:42,769 INFO: (0021 / 0081) >2018-06-26 11:09:42,773 INFO: puppet-horizon install type not set to source >2018-06-26 11:09:42,774 INFO: (0022 / 0081) >2018-06-26 11:09:42,777 INFO: puppetlabs-inifile install type not set to source >2018-06-26 11:09:42,778 INFO: (0023 / 0081) >2018-06-26 11:09:42,782 INFO: puppet-kafka install type not set to source >2018-06-26 11:09:42,782 INFO: (0024 / 0081) >2018-06-26 11:09:42,786 INFO: puppet-keystone install type not set to source >2018-06-26 11:09:42,787 INFO: (0025 / 0081) >2018-06-26 11:09:42,790 INFO: puppet-manila install type not set to source >2018-06-26 11:09:42,791 INFO: (0026 / 0081) >2018-06-26 11:09:42,795 INFO: puppet-memcached install type not set to source >2018-06-26 11:09:42,795 INFO: (0027 / 0081) >2018-06-26 11:09:42,799 INFO: puppet-mistral install type not set to source >2018-06-26 11:09:42,800 INFO: (0028 / 0081) >2018-06-26 11:09:42,804 INFO: puppetlabs-mongodb install type not set to source >2018-06-26 11:09:42,804 INFO: (0029 / 0081) >2018-06-26 11:09:42,808 INFO: puppetlabs-mysql install type not set to 
source >2018-06-26 11:09:42,809 INFO: (0030 / 0081) >2018-06-26 11:09:42,813 INFO: puppet-neutron install type not set to source >2018-06-26 11:09:42,814 INFO: (0031 / 0081) >2018-06-26 11:09:42,817 INFO: puppet-nova install type not set to source >2018-06-26 11:09:42,818 INFO: (0032 / 0081) >2018-06-26 11:09:42,822 INFO: puppet-octavia install type not set to source >2018-06-26 11:09:42,823 INFO: (0033 / 0081) >2018-06-26 11:09:42,826 INFO: puppet-oslo install type not set to source >2018-06-26 11:09:42,827 INFO: (0034 / 0081) >2018-06-26 11:09:42,831 INFO: puppet-nssdb install type not set to source >2018-06-26 11:09:42,831 INFO: (0035 / 0081) >2018-06-26 11:09:42,835 INFO: puppet-opendaylight install type not set to source >2018-06-26 11:09:42,836 INFO: (0036 / 0081) >2018-06-26 11:09:42,839 INFO: puppet-ovn install type not set to source >2018-06-26 11:09:42,840 INFO: (0037 / 0081) >2018-06-26 11:09:42,844 INFO: puppet-panko install type not set to source >2018-06-26 11:09:42,844 INFO: (0038 / 0081) >2018-06-26 11:09:42,848 INFO: puppet-puppet install type not set to source >2018-06-26 11:09:42,849 INFO: (0039 / 0081) >2018-06-26 11:09:42,852 INFO: puppetlabs-rabbitmq install type not set to source >2018-06-26 11:09:42,853 INFO: (0040 / 0081) >2018-06-26 11:09:42,857 INFO: puppet-redis install type not set to source >2018-06-26 11:09:42,858 INFO: (0041 / 0081) >2018-06-26 11:09:42,861 INFO: puppetlabs-rsync install type not set to source >2018-06-26 11:09:42,862 INFO: (0042 / 0081) >2018-06-26 11:09:42,866 INFO: puppet-sahara install type not set to source >2018-06-26 11:09:42,867 INFO: (0043 / 0081) >2018-06-26 11:09:42,870 INFO: sensu-puppet install type not set to source >2018-06-26 11:09:42,871 INFO: (0044 / 0081) >2018-06-26 11:09:42,875 INFO: puppet-tacker install type not set to source >2018-06-26 11:09:42,876 INFO: (0045 / 0081) >2018-06-26 11:09:42,879 INFO: puppet-trove install type not set to source >2018-06-26 11:09:42,880 INFO: (0046 / 0081) 
>2018-06-26 11:09:42,884 INFO: puppet-ssh install type not set to source >2018-06-26 11:09:42,884 INFO: (0047 / 0081) >2018-06-26 11:09:42,888 INFO: puppet-staging install type not set to source >2018-06-26 11:09:42,889 INFO: (0048 / 0081) >2018-06-26 11:09:42,893 INFO: puppetlabs-stdlib install type not set to source >2018-06-26 11:09:42,894 INFO: (0049 / 0081) >2018-06-26 11:09:42,897 INFO: puppet-swift install type not set to source >2018-06-26 11:09:42,898 INFO: (0050 / 0081) >2018-06-26 11:09:42,902 INFO: puppetlabs-sysctl install type not set to source >2018-06-26 11:09:42,902 INFO: (0051 / 0081) >2018-06-26 11:09:42,906 INFO: puppet-timezone install type not set to source >2018-06-26 11:09:42,907 INFO: (0052 / 0081) >2018-06-26 11:09:42,910 INFO: puppet-uchiwa install type not set to source >2018-06-26 11:09:42,911 INFO: (0053 / 0081) >2018-06-26 11:09:42,915 INFO: puppetlabs-vcsrepo install type not set to source >2018-06-26 11:09:42,916 INFO: (0054 / 0081) >2018-06-26 11:09:42,919 INFO: puppet-vlan install type not set to source >2018-06-26 11:09:42,920 INFO: (0055 / 0081) >2018-06-26 11:09:42,923 INFO: puppet-vswitch install type not set to source >2018-06-26 11:09:42,924 INFO: (0056 / 0081) >2018-06-26 11:09:42,928 INFO: puppetlabs-xinetd install type not set to source >2018-06-26 11:09:42,928 INFO: (0057 / 0081) >2018-06-26 11:09:42,932 INFO: puppet-zookeeper install type not set to source >2018-06-26 11:09:42,933 INFO: (0058 / 0081) >2018-06-26 11:09:42,937 INFO: puppet-openstacklib install type not set to source >2018-06-26 11:09:42,937 INFO: (0059 / 0081) >2018-06-26 11:09:42,941 INFO: puppet-module-keepalived install type not set to source >2018-06-26 11:09:42,942 INFO: (0060 / 0081) >2018-06-26 11:09:42,945 INFO: puppetlabs-ntp install type not set to source >2018-06-26 11:09:42,946 INFO: (0061 / 0081) >2018-06-26 11:09:42,950 INFO: puppet-snmp install type not set to source >2018-06-26 11:09:42,950 INFO: (0062 / 0081) >2018-06-26 11:09:42,954 
INFO: puppet-tripleo install type not set to source >2018-06-26 11:09:42,955 INFO: (0063 / 0081) >2018-06-26 11:09:42,958 INFO: puppet-ironic install type not set to source >2018-06-26 11:09:42,959 INFO: (0064 / 0081) >2018-06-26 11:09:42,963 INFO: puppet-ipaclient install type not set to source >2018-06-26 11:09:42,963 INFO: (0065 / 0081) >2018-06-26 11:09:42,967 INFO: puppetlabs-corosync install type not set to source >2018-06-26 11:09:42,968 INFO: (0066 / 0081) >2018-06-26 11:09:42,971 INFO: puppet-pacemaker install type not set to source >2018-06-26 11:09:42,972 INFO: (0067 / 0081) >2018-06-26 11:09:42,976 INFO: puppet_aviator install type not set to source >2018-06-26 11:09:42,976 INFO: (0068 / 0081) >2018-06-26 11:09:42,980 INFO: puppet-openstack_extras install type not set to source >2018-06-26 11:09:42,981 INFO: (0069 / 0081) >2018-06-26 11:09:42,984 INFO: konstantin-fluentd install type not set to source >2018-06-26 11:09:42,985 INFO: (0070 / 0081) >2018-06-26 11:09:42,989 INFO: puppet-elasticsearch install type not set to source >2018-06-26 11:09:42,989 INFO: (0071 / 0081) >2018-06-26 11:09:42,993 INFO: puppet-kibana3 install type not set to source >2018-06-26 11:09:42,994 INFO: (0072 / 0081) >2018-06-26 11:09:42,997 INFO: puppetlabs-git install type not set to source >2018-06-26 11:09:42,998 INFO: (0073 / 0081) >2018-06-26 11:09:43,002 INFO: puppet-datacat install type not set to source >2018-06-26 11:09:43,002 INFO: (0074 / 0081) >2018-06-26 11:09:43,006 INFO: puppet-kmod install type not set to source >2018-06-26 11:09:43,007 INFO: (0075 / 0081) >2018-06-26 11:09:43,010 INFO: puppet-zaqar install type not set to source >2018-06-26 11:09:43,011 INFO: (0076 / 0081) >2018-06-26 11:09:43,014 INFO: puppet-ec2api install type not set to source >2018-06-26 11:09:43,015 INFO: (0077 / 0081) >2018-06-26 11:09:43,019 INFO: puppet-qdr install type not set to source >2018-06-26 11:09:43,019 INFO: (0078 / 0081) >2018-06-26 11:09:43,023 INFO: puppet-systemd install 
type not set to source >2018-06-26 11:09:43,024 INFO: (0079 / 0081) >2018-06-26 11:09:43,027 INFO: puppet-etcd install type not set to source >2018-06-26 11:09:43,028 INFO: (0080 / 0081) >2018-06-26 11:09:43,032 INFO: puppet-veritas_hyperscale install type not set to source >2018-06-26 11:09:43,032 INFO: (0081 / 0081) >2018-06-26 11:09:43,036 INFO: puppet-ptp install type not set to source >2018-06-26 11:09:43,038 INFO: dib-run-parts 98-source-repositories completed >2018-06-26 11:09:43,038 INFO: dib-run-parts Running /tmp/tmpDqgrg7/extra-data.d/99-enable-install-types >2018-06-26 11:09:43,040 INFO: + set -eu >2018-06-26 11:09:43,041 INFO: + set -o pipefail >2018-06-26 11:09:43,041 INFO: + declare -a SPECIFIED_ELEMS >2018-06-26 11:09:43,041 INFO: + SPECIFIED_ELEMS[0]= >2018-06-26 11:09:43,041 INFO: + PREFIX=DIB_INSTALLTYPE_ >2018-06-26 11:09:43,041 INFO: ++ env >2018-06-26 11:09:43,041 INFO: ++ grep '^DIB_INSTALLTYPE_' >2018-06-26 11:09:43,042 INFO: ++ cut -d= -f1 >2018-06-26 11:09:43,043 INFO: ++ echo '' >2018-06-26 11:09:43,043 INFO: + INSTALL_TYPE_VARS= >2018-06-26 11:09:43,043 INFO: ++ find /tmp/tmpDqgrg7/install.d -maxdepth 1 -name '*-package-install' -type d >2018-06-26 11:09:43,045 INFO: + default_install_type_dirs=/tmp/tmpDqgrg7/install.d/puppet-modules-package-install >2018-06-26 11:09:43,045 INFO: + for _install_dir in '$default_install_type_dirs' >2018-06-26 11:09:43,045 INFO: + SUFFIX=-package-install >2018-06-26 11:09:43,045 INFO: ++ basename /tmp/tmpDqgrg7/install.d/puppet-modules-package-install >2018-06-26 11:09:43,046 INFO: + _install_dir=puppet-modules-package-install >2018-06-26 11:09:43,046 INFO: + INSTALLDIRPREFIX=puppet-modules >2018-06-26 11:09:43,046 INFO: + found=0 >2018-06-26 11:09:43,046 INFO: + '[' 0 = 0 ']' >2018-06-26 11:09:43,046 INFO: + pushd /tmp/tmpDqgrg7/install.d >2018-06-26 11:09:43,046 INFO: /tmp/tmpDqgrg7/install.d /home/sudheer >2018-06-26 11:09:43,047 INFO: + ln -sf puppet-modules-package-install/75-puppet-modules-package . 
>2018-06-26 11:09:43,047 INFO: + popd >2018-06-26 11:09:43,047 INFO: /home/sudheer >2018-06-26 11:09:43,049 INFO: dib-run-parts 99-enable-install-types completed >2018-06-26 11:09:43,049 INFO: dib-run-parts ----------------------- PROFILING ----------------------- >2018-06-26 11:09:43,049 INFO: dib-run-parts >2018-06-26 11:09:43,050 INFO: dib-run-parts Target: extra-data.d >2018-06-26 11:09:43,050 INFO: dib-run-parts >2018-06-26 11:09:43,050 INFO: dib-run-parts Script Seconds >2018-06-26 11:09:43,050 INFO: dib-run-parts --------------------------------------- ---------- >2018-06-26 11:09:43,050 INFO: dib-run-parts >2018-06-26 11:09:43,056 INFO: dib-run-parts 10-install-git 4.572 >2018-06-26 11:09:43,060 INFO: dib-run-parts 20-manifest-dir 0.018 >2018-06-26 11:09:43,064 INFO: dib-run-parts 75-inject-element-manifest 0.031 >2018-06-26 11:09:43,068 INFO: dib-run-parts 98-source-repositories 0.375 >2018-06-26 11:09:43,072 INFO: dib-run-parts 99-enable-install-types 0.010 >2018-06-26 11:09:43,074 INFO: dib-run-parts >2018-06-26 11:09:43,074 INFO: dib-run-parts --------------------- END PROFILING --------------------- >2018-06-26 11:09:43,074 INFO: INFO: 2018-06-26 11:09:43,074 -- ############### End stdout/stderr logging ############### >2018-06-26 11:09:43,074 INFO: INFO: 2018-06-26 11:09:43,074 -- Running hook pre-install >2018-06-26 11:09:43,075 INFO: INFO: 2018-06-26 11:09:43,074 -- Skipping hook pre-install, the hook directory doesn't exist at /tmp/tmpDqgrg7/pre-install.d >2018-06-26 11:09:43,075 INFO: INFO: 2018-06-26 11:09:43,075 -- Running hook install >2018-06-26 11:09:43,075 INFO: INFO: 2018-06-26 11:09:43,075 -- ############### Begin stdout/stderr logging ############### >2018-06-26 11:09:43,085 INFO: dib-run-parts Sourcing environment file /tmp/tmpDqgrg7/install.d/../environment.d/00-dib-v2-env >2018-06-26 11:09:43,087 INFO: + source /tmp/tmpDqgrg7/install.d/../environment.d/00-dib-v2-env >2018-06-26 11:09:43,087 INFO: ++ export 
'IMAGE_ELEMENT=undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs' >2018-06-26 11:09:43,087 INFO: ++ IMAGE_ELEMENT='undercloud-install dib-python source-repositories install-types install-bin pip-manifest pkg-map puppet-stack-config os-refresh-config element-manifest manifests pip-and-virtualenv cache-url puppet enable-packages-install puppet-modules os-apply-config hiera package-installs' >2018-06-26 11:09:43,088 INFO: ++ export 'IMAGE_ELEMENT_YAML={cache-url: /usr/share/diskimage-builder/elements/cache-url, dib-python: /usr/share/diskimage-builder/elements/dib-python, >2018-06-26 11:09:43,088 INFO: element-manifest: /usr/share/diskimage-builder/elements/element-manifest, enable-packages-install: /usr/share/tripleo-image-elements/enable-packages-install, >2018-06-26 11:09:43,088 INFO: hiera: /usr/share/tripleo-puppet-elements/hiera, install-bin: /usr/share/diskimage-builder/elements/install-bin, >2018-06-26 11:09:43,088 INFO: install-types: /usr/share/diskimage-builder/elements/install-types, manifests: /usr/share/diskimage-builder/elements/manifests, >2018-06-26 11:09:43,088 INFO: os-apply-config: /usr/share/tripleo-image-elements/os-apply-config, os-refresh-config: /usr/share/tripleo-image-elements/os-refresh-config, >2018-06-26 11:09:43,088 INFO: package-installs: /usr/share/diskimage-builder/elements/package-installs, pip-and-virtualenv: /usr/share/diskimage-builder/elements/pip-and-virtualenv, >2018-06-26 11:09:43,089 INFO: pip-manifest: /usr/share/tripleo-image-elements/pip-manifest, pkg-map: /usr/share/diskimage-builder/elements/pkg-map, >2018-06-26 11:09:43,089 INFO: puppet: /usr/share/tripleo-puppet-elements/puppet, puppet-modules: /usr/share/tripleo-puppet-elements/puppet-modules, >2018-06-26 11:09:43,089 INFO: puppet-stack-config: 
/usr/share/instack-undercloud/puppet-stack-config, source-repositories: /usr/share/diskimage-builder/elements/source-repositories, >2018-06-26 11:09:43,089 INFO: undercloud-install: /usr/share/instack-undercloud/undercloud-install} >2018-06-26 11:09:43,089 INFO: ' >2018-06-26 11:09:43,089 INFO: ++ IMAGE_ELEMENT_YAML='{cache-url: /usr/share/diskimage-builder/elements/cache-url, dib-python: /usr/share/diskimage-builder/elements/dib-python, >2018-06-26 11:09:43,089 INFO: element-manifest: /usr/share/diskimage-builder/elements/element-manifest, enable-packages-install: /usr/share/tripleo-image-elements/enable-packages-install, >2018-06-26 11:09:43,090 INFO: hiera: /usr/share/tripleo-puppet-elements/hiera, install-bin: /usr/share/diskimage-builder/elements/install-bin, >2018-06-26 11:09:43,090 INFO: install-types: /usr/share/diskimage-builder/elements/install-types, manifests: /usr/share/diskimage-builder/elements/manifests, >2018-06-26 11:09:43,090 INFO: os-apply-config: /usr/share/tripleo-image-elements/os-apply-config, os-refresh-config: /usr/share/tripleo-image-elements/os-refresh-config, >2018-06-26 11:09:43,090 INFO: package-installs: /usr/share/diskimage-builder/elements/package-installs, pip-and-virtualenv: /usr/share/diskimage-builder/elements/pip-and-virtualenv, >2018-06-26 11:09:43,090 INFO: pip-manifest: /usr/share/tripleo-image-elements/pip-manifest, pkg-map: /usr/share/diskimage-builder/elements/pkg-map, >2018-06-26 11:09:43,090 INFO: puppet: /usr/share/tripleo-puppet-elements/puppet, puppet-modules: /usr/share/tripleo-puppet-elements/puppet-modules, >2018-06-26 11:09:43,090 INFO: puppet-stack-config: /usr/share/instack-undercloud/puppet-stack-config, source-repositories: /usr/share/diskimage-builder/elements/source-repositories, >2018-06-26 11:09:43,091 INFO: undercloud-install: /usr/share/instack-undercloud/undercloud-install} >2018-06-26 11:09:43,091 INFO: ' >2018-06-26 11:09:43,091 INFO: ++ export -f get_image_element_array >2018-06-26 11:09:43,091 
INFO: + set +o xtrace >2018-06-26 11:09:43,091 INFO: dib-run-parts Sourcing environment file /tmp/tmpDqgrg7/install.d/../environment.d/01-export-install-types.bash >2018-06-26 11:09:43,091 INFO: + source /tmp/tmpDqgrg7/install.d/../environment.d/01-export-install-types.bash >2018-06-26 11:09:43,091 INFO: ++ export DIB_DEFAULT_INSTALLTYPE=package >2018-06-26 11:09:43,091 INFO: ++ DIB_DEFAULT_INSTALLTYPE=package >2018-06-26 11:09:43,091 INFO: + set +o xtrace >2018-06-26 11:09:43,092 INFO: dib-run-parts Sourcing environment file /tmp/tmpDqgrg7/install.d/../environment.d/01-puppet-module-pins.sh >2018-06-26 11:09:43,092 INFO: + source /tmp/tmpDqgrg7/install.d/../environment.d/01-puppet-module-pins.sh >2018-06-26 11:09:43,092 INFO: ++ export DIB_REPOREF_puppetlabs_ntp=4.2.x >2018-06-26 11:09:43,092 INFO: ++ DIB_REPOREF_puppetlabs_ntp=4.2.x >2018-06-26 11:09:43,092 INFO: + set +o xtrace >2018-06-26 11:09:43,092 INFO: dib-run-parts Sourcing environment file /tmp/tmpDqgrg7/install.d/../environment.d/02-puppet-modules-install-types.sh >2018-06-26 11:09:43,093 INFO: + source /tmp/tmpDqgrg7/install.d/../environment.d/02-puppet-modules-install-types.sh >2018-06-26 11:09:43,093 INFO: ++ DIB_DEFAULT_INSTALLTYPE=package >2018-06-26 11:09:43,093 INFO: ++ DIB_INSTALLTYPE_puppet_modules=package >2018-06-26 11:09:43,093 INFO: ++ '[' package = source ']' >2018-06-26 11:09:43,093 INFO: + set +o xtrace >2018-06-26 11:09:43,094 INFO: dib-run-parts Sourcing environment file /tmp/tmpDqgrg7/install.d/../environment.d/10-os-apply-config-venv-dir.bash >2018-06-26 11:09:43,095 INFO: + source /tmp/tmpDqgrg7/install.d/../environment.d/10-os-apply-config-venv-dir.bash >2018-06-26 11:09:43,095 INFO: ++ '[' -z '' ']' >2018-06-26 11:09:43,096 INFO: ++ export OS_APPLY_CONFIG_VENV_DIR=/opt/stack/venvs/os-apply-config >2018-06-26 11:09:43,096 INFO: ++ OS_APPLY_CONFIG_VENV_DIR=/opt/stack/venvs/os-apply-config >2018-06-26 11:09:43,096 INFO: + set +o xtrace >2018-06-26 11:09:43,096 INFO: dib-run-parts 
Sourcing environment file /tmp/tmpDqgrg7/install.d/../environment.d/14-manifests >2018-06-26 11:09:43,097 INFO: + source /tmp/tmpDqgrg7/install.d/../environment.d/14-manifests >2018-06-26 11:09:43,098 INFO: ++ export DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests >2018-06-26 11:09:43,098 INFO: ++ DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests >2018-06-26 11:09:43,098 INFO: ++ export DIB_MANIFEST_SAVE_DIR=instack.d/ >2018-06-26 11:09:43,098 INFO: ++ DIB_MANIFEST_SAVE_DIR=instack.d/ >2018-06-26 11:09:43,098 INFO: + set +o xtrace >2018-06-26 11:09:43,098 INFO: dib-run-parts Running /tmp/tmpDqgrg7/install.d/02-puppet-stack-config >2018-06-26 11:09:43,792 INFO: dib-run-parts 02-puppet-stack-config completed >2018-06-26 11:09:43,793 INFO: dib-run-parts Running /tmp/tmpDqgrg7/install.d/10-hiera-yaml-symlink >2018-06-26 11:09:43,795 INFO: + set -o pipefail >2018-06-26 11:09:43,795 INFO: + ln -f -s /etc/puppet/hiera.yaml /etc/hiera.yaml >2018-06-26 11:09:43,798 INFO: dib-run-parts 10-hiera-yaml-symlink completed >2018-06-26 11:09:43,798 INFO: dib-run-parts Running /tmp/tmpDqgrg7/install.d/10-puppet-stack-config-puppet-module >2018-06-26 11:09:43,801 INFO: + set -o pipefail >2018-06-26 11:09:43,801 INFO: + mkdir -p /etc/puppet/manifests >2018-06-26 11:09:43,803 INFO: ++ dirname /tmp/tmpDqgrg7/install.d/10-puppet-stack-config-puppet-module >2018-06-26 11:09:43,803 INFO: + cp /tmp/tmpDqgrg7/install.d/../puppet-stack-config.pp /etc/puppet/manifests/puppet-stack-config.pp >2018-06-26 11:09:43,806 INFO: dib-run-parts 10-puppet-stack-config-puppet-module completed >2018-06-26 11:09:43,806 INFO: dib-run-parts Running /tmp/tmpDqgrg7/install.d/11-create-template-root >2018-06-26 11:09:43,810 INFO: ++ os-apply-config --print-templates >2018-06-26 11:09:43,963 INFO: + TEMPLATE_ROOT=/usr/libexec/os-apply-config/templates >2018-06-26 11:09:43,963 INFO: + mkdir -p /usr/libexec/os-apply-config/templates >2018-06-26 11:09:43,965 INFO: dib-run-parts 11-create-template-root completed >2018-06-26 
11:09:43,965 INFO: dib-run-parts Running /tmp/tmpDqgrg7/install.d/11-hiera-orc-install >2018-06-26 11:09:43,968 INFO: + set -o pipefail >2018-06-26 11:09:43,968 INFO: + mkdir -p /usr/libexec/os-refresh-config/configure.d/ >2018-06-26 11:09:43,970 INFO: ++ dirname /tmp/tmpDqgrg7/install.d/11-hiera-orc-install >2018-06-26 11:09:43,970 INFO: + install -m 0755 -o root -g root /tmp/tmpDqgrg7/install.d/../10-hiera-disable /usr/libexec/os-refresh-config/configure.d/10-hiera-disable >2018-06-26 11:09:43,976 INFO: ++ dirname /tmp/tmpDqgrg7/install.d/11-hiera-orc-install >2018-06-26 11:09:43,977 INFO: + install -m 0755 -o root -g root /tmp/tmpDqgrg7/install.d/../40-hiera-datafiles /usr/libexec/os-refresh-config/configure.d/40-hiera-datafiles >2018-06-26 11:09:43,982 INFO: dib-run-parts 11-hiera-orc-install completed >2018-06-26 11:09:43,983 INFO: dib-run-parts Running /tmp/tmpDqgrg7/install.d/75-puppet-modules-package >2018-06-26 11:09:43,985 INFO: + find /opt/stack/puppet-modules/ -mindepth 1 >2018-06-26 11:09:43,986 INFO: + read >2018-06-26 11:09:43,990 INFO: + ln -f -s /usr/share/openstack-puppet/modules/aodh /usr/share/openstack-puppet/modules/apache /usr/share/openstack-puppet/modules/archive /usr/share/openstack-puppet/modules/auditd /usr/share/openstack-puppet/modules/barbican /usr/share/openstack-puppet/modules/cassandra /usr/share/openstack-puppet/modules/ceilometer /usr/share/openstack-puppet/modules/ceph /usr/share/openstack-puppet/modules/certmonger /usr/share/openstack-puppet/modules/cinder /usr/share/openstack-puppet/modules/collectd /usr/share/openstack-puppet/modules/concat /usr/share/openstack-puppet/modules/contrail /usr/share/openstack-puppet/modules/corosync /usr/share/openstack-puppet/modules/datacat /usr/share/openstack-puppet/modules/designate /usr/share/openstack-puppet/modules/dns /usr/share/openstack-puppet/modules/ec2api /usr/share/openstack-puppet/modules/elasticsearch /usr/share/openstack-puppet/modules/fdio 
/usr/share/openstack-puppet/modules/firewall /usr/share/openstack-puppet/modules/fluentd /usr/share/openstack-puppet/modules/git /usr/share/openstack-puppet/modules/glance /usr/share/openstack-puppet/modules/gnocchi /usr/share/openstack-puppet/modules/haproxy /usr/share/openstack-puppet/modules/heat /usr/share/openstack-puppet/modules/horizon /usr/share/openstack-puppet/modules/inifile /usr/share/openstack-puppet/modules/ipaclient /usr/share/openstack-puppet/modules/ironic /usr/share/openstack-puppet/modules/java /usr/share/openstack-puppet/modules/kafka /usr/share/openstack-puppet/modules/keepalived /usr/share/openstack-puppet/modules/keystone /usr/share/openstack-puppet/modules/kibana3 /usr/share/openstack-puppet/modules/kmod /usr/share/openstack-puppet/modules/manila /usr/share/openstack-puppet/modules/memcached /usr/share/openstack-puppet/modules/midonet /usr/share/openstack-puppet/modules/mistral /usr/share/openstack-puppet/modules/module-data /usr/share/openstack-puppet/modules/mysql /usr/share/openstack-puppet/modules/n1k_vsm /usr/share/openstack-puppet/modules/neutron /usr/share/openstack-puppet/modules/nova /usr/share/openstack-puppet/modules/nssdb /usr/share/openstack-puppet/modules/ntp /usr/share/openstack-puppet/modules/octavia /usr/share/openstack-puppet/modules/opendaylight /usr/share/openstack-puppet/modules/openstack_extras /usr/share/openstack-puppet/modules/openstacklib /usr/share/openstack-puppet/modules/oslo /usr/share/openstack-puppet/modules/ovn /usr/share/openstack-puppet/modules/pacemaker /usr/share/openstack-puppet/modules/panko /usr/share/openstack-puppet/modules/rabbitmq /usr/share/openstack-puppet/modules/redis /usr/share/openstack-puppet/modules/remote /usr/share/openstack-puppet/modules/rsync /usr/share/openstack-puppet/modules/sahara /usr/share/openstack-puppet/modules/sensu /usr/share/openstack-puppet/modules/snmp /usr/share/openstack-puppet/modules/ssh /usr/share/openstack-puppet/modules/staging 
/usr/share/openstack-puppet/modules/stdlib /usr/share/openstack-puppet/modules/swift /usr/share/openstack-puppet/modules/sysctl /usr/share/openstack-puppet/modules/systemd /usr/share/openstack-puppet/modules/timezone /usr/share/openstack-puppet/modules/tomcat /usr/share/openstack-puppet/modules/tripleo /usr/share/openstack-puppet/modules/trove /usr/share/openstack-puppet/modules/uchiwa /usr/share/openstack-puppet/modules/vcsrepo /usr/share/openstack-puppet/modules/veritas_hyperscale /usr/share/openstack-puppet/modules/vswitch /usr/share/openstack-puppet/modules/xinetd /usr/share/openstack-puppet/modules/zaqar /usr/share/openstack-puppet/modules/zookeeper /etc/puppet/modules/ >2018-06-26 11:09:43,992 INFO: dib-run-parts 75-puppet-modules-package completed >2018-06-26 11:09:43,992 INFO: dib-run-parts Running /tmp/tmpDqgrg7/install.d/99-install-config-templates >2018-06-26 11:09:43,995 INFO: ++ os-apply-config --print-templates >2018-06-26 11:09:44,143 INFO: + TEMPLATE_ROOT=/usr/libexec/os-apply-config/templates >2018-06-26 11:09:44,143 INFO: ++ dirname /tmp/tmpDqgrg7/install.d/99-install-config-templates >2018-06-26 11:09:44,144 INFO: + TEMPLATE_SOURCE=/tmp/tmpDqgrg7/install.d/../os-apply-config >2018-06-26 11:09:44,144 INFO: + mkdir -p /usr/libexec/os-apply-config/templates >2018-06-26 11:09:44,145 INFO: + '[' -d /tmp/tmpDqgrg7/install.d/../os-apply-config ']' >2018-06-26 11:09:44,145 INFO: + rsync '--exclude=.*.swp' -Cr /tmp/tmpDqgrg7/install.d/../os-apply-config/ /usr/libexec/os-apply-config/templates/ >2018-06-26 11:09:44,150 INFO: dib-run-parts 99-install-config-templates completed >2018-06-26 11:09:44,150 INFO: dib-run-parts Running /tmp/tmpDqgrg7/install.d/99-os-refresh-config-install-scripts >2018-06-26 11:09:44,153 INFO: ++ os-refresh-config --print-base >2018-06-26 11:09:44,199 INFO: + SCRIPT_BASE=/usr/libexec/os-refresh-config >2018-06-26 11:09:44,200 INFO: ++ dirname /tmp/tmpDqgrg7/install.d/99-os-refresh-config-install-scripts >2018-06-26 11:09:44,200 
INFO: + SCRIPT_SOURCE=/tmp/tmpDqgrg7/install.d/../os-refresh-config >2018-06-26 11:09:44,200 INFO: + rsync -r /tmp/tmpDqgrg7/install.d/../os-refresh-config/ /usr/libexec/os-refresh-config/ >2018-06-26 11:09:44,205 INFO: dib-run-parts 99-os-refresh-config-install-scripts completed >2018-06-26 11:09:44,205 INFO: dib-run-parts ----------------------- PROFILING ----------------------- >2018-06-26 11:09:44,205 INFO: dib-run-parts >2018-06-26 11:09:44,206 INFO: dib-run-parts Target: install.d >2018-06-26 11:09:44,206 INFO: dib-run-parts >2018-06-26 11:09:44,207 INFO: dib-run-parts Script Seconds >2018-06-26 11:09:44,207 INFO: dib-run-parts --------------------------------------- ---------- >2018-06-26 11:09:44,207 INFO: dib-run-parts >2018-06-26 11:09:44,213 INFO: dib-run-parts 02-puppet-stack-config 0.693 >2018-06-26 11:09:44,217 INFO: dib-run-parts 10-hiera-yaml-symlink 0.005 >2018-06-26 11:09:44,221 INFO: dib-run-parts 10-puppet-stack-config-puppet-module 0.007 >2018-06-26 11:09:44,226 INFO: dib-run-parts 11-create-template-root 0.157 >2018-06-26 11:09:44,230 INFO: dib-run-parts 11-hiera-orc-install 0.016 >2018-06-26 11:09:44,234 INFO: dib-run-parts 75-puppet-modules-package 0.008 >2018-06-26 11:09:44,239 INFO: dib-run-parts 99-install-config-templates 0.158 >2018-06-26 11:09:44,243 INFO: dib-run-parts 99-os-refresh-config-install-scripts 0.053 >2018-06-26 11:09:44,244 INFO: dib-run-parts >2018-06-26 11:09:44,244 INFO: dib-run-parts --------------------- END PROFILING --------------------- >2018-06-26 11:09:44,245 INFO: INFO: 2018-06-26 11:09:44,245 -- ############### End stdout/stderr logging ############### >2018-06-26 11:09:44,245 INFO: INFO: 2018-06-26 11:09:44,245 -- Running hook post-install >2018-06-26 11:09:44,245 INFO: INFO: 2018-06-26 11:09:44,245 -- Skipping hook post-install, the hook directory doesn't exist at /tmp/tmpDqgrg7/post-install.d >2018-06-26 11:09:44,247 INFO: INFO: 2018-06-26 11:09:44,247 -- Ending run of instack. 
>2018-06-26 11:09:44,258 INFO: Instack completed successfully >2018-06-26 11:09:44,258 INFO: Running os-refresh-config >2018-06-26 11:09:44,315 INFO: [2018-06-26 11:09:44,315] (os-refresh-config) [INFO] Starting phase configure >2018-06-26 11:09:44,324 INFO: dib-run-parts Tue Jun 26 11:09:44 IST 2018 Running /usr/libexec/os-refresh-config/configure.d/10-hiera-disable >2018-06-26 11:09:44,326 INFO: + '[' -f /etc/puppet/hiera.yaml ']' >2018-06-26 11:09:44,326 INFO: + grep yaml /etc/puppet/hiera.yaml >2018-06-26 11:09:44,329 INFO: dib-run-parts Tue Jun 26 11:09:44 IST 2018 10-hiera-disable completed >2018-06-26 11:09:44,330 INFO: dib-run-parts Tue Jun 26 11:09:44 IST 2018 Running /usr/libexec/os-refresh-config/configure.d/20-os-apply-config >2018-06-26 11:09:44,478 INFO: [2018/06/26 11:09:44 AM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json >2018-06-26 11:09:44,482 INFO: [2018/06/26 11:09:44 AM] [INFO] writing /etc/os-net-config/config.json >2018-06-26 11:09:44,483 INFO: [2018/06/26 11:09:44 AM] [INFO] writing /root/stackrc >2018-06-26 11:09:44,483 INFO: [2018/06/26 11:09:44 AM] [INFO] writing /var/run/heat-config/heat-config >2018-06-26 11:09:44,484 INFO: [2018/06/26 11:09:44 AM] [INFO] writing /etc/puppet/hiera.yaml >2018-06-26 11:09:44,485 INFO: [2018/06/26 11:09:44 AM] [INFO] writing /var/opt/undercloud-stack/masquerade >2018-06-26 11:09:44,485 INFO: [2018/06/26 11:09:44 AM] [INFO] writing /etc/puppet/hieradata/RedHat.yaml >2018-06-26 11:09:44,485 INFO: [2018/06/26 11:09:44 AM] [INFO] writing /etc/puppet/hieradata/CentOS.yaml >2018-06-26 11:09:44,486 INFO: [2018/06/26 11:09:44 AM] [INFO] writing /root/tripleo-undercloud-passwords >2018-06-26 11:09:44,486 INFO: [2018/06/26 11:09:44 AM] [INFO] writing /etc/os-collect-config.conf >2018-06-26 11:09:44,486 INFO: [2018/06/26 11:09:44 AM] [INFO] success >2018-06-26 11:09:44,493 INFO: dib-run-parts Tue Jun 26 11:09:44 IST 2018 20-os-apply-config completed >2018-06-26 11:09:44,494 
INFO: dib-run-parts Tue Jun 26 11:09:44 IST 2018 Running /usr/libexec/os-refresh-config/configure.d/30-reload-keepalived >2018-06-26 11:09:44,496 INFO: + systemctl is-enabled keepalived >2018-06-26 11:09:44,505 INFO: disabled >2018-06-26 11:09:44,508 INFO: dib-run-parts Tue Jun 26 11:09:44 IST 2018 30-reload-keepalived completed >2018-06-26 11:09:44,509 INFO: dib-run-parts Tue Jun 26 11:09:44 IST 2018 Running /usr/libexec/os-refresh-config/configure.d/40-hiera-datafiles >2018-06-26 11:09:44,656 INFO: [2018/06/26 11:09:44 AM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json >2018-06-26 11:09:44,668 INFO: dib-run-parts Tue Jun 26 11:09:44 IST 2018 40-hiera-datafiles completed >2018-06-26 11:09:44,668 INFO: dib-run-parts Tue Jun 26 11:09:44 IST 2018 Running /usr/libexec/os-refresh-config/configure.d/50-puppet-stack-config >2018-06-26 11:09:44,671 INFO: + set -o pipefail >2018-06-26 11:09:44,671 INFO: + puppet_apply puppet apply --summarize --detailed-exitcodes /etc/puppet/manifests/puppet-stack-config.pp >2018-06-26 11:09:44,671 INFO: + set +e >2018-06-26 11:09:44,671 INFO: + puppet apply --summarize --detailed-exitcodes /etc/puppet/manifests/puppet-stack-config.pp >2018-06-26 11:09:49,973 INFO: Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend >2018-06-26 11:09:50,066 INFO: Warning: ModuleLoader: module 'openstacklib' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-06-26 11:09:50,066 INFO: (file & line not available) >2018-06-26 11:09:50,299 INFO: Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend >2018-06-26 11:09:50,354 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function, >2018-06-26 11:09:50,354 INFO: with Stdlib::Compat::Bool. 
There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 54]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29] >2018-06-26 11:09:50,354 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation') >2018-06-26 11:09:50,357 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function, >2018-06-26 11:09:50,357 INFO: with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 55]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29] >2018-06-26 11:09:50,358 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation') >2018-06-26 11:09:50,430 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function, >2018-06-26 11:09:50,430 INFO: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 56]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29] >2018-06-26 11:09:50,430 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation') >2018-06-26 11:09:50,443 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function, >2018-06-26 11:09:50,444 INFO: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at ["/etc/puppet/modules/ntp/manifests/init.pp", 66]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29] >2018-06-26 11:09:50,444 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation') >2018-06-26 11:09:50,446 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function, >2018-06-26 11:09:50,447 INFO: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 68]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29] >2018-06-26 11:09:50,447 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation') >2018-06-26 11:09:50,459 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function, >2018-06-26 11:09:50,460 INFO: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/ntp/manifests/init.pp", 89]:["/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp", 29] >2018-06-26 11:09:50,460 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation') >2018-06-26 11:09:50,725 INFO: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. 
at ["/etc/puppet/modules/rabbitmq/manifests/install/rabbitmqadmin.pp", 37]:["/etc/puppet/modules/rabbitmq/manifests/init.pp", 316] >2018-06-26 11:09:50,725 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation') >2018-06-26 11:09:50,861 INFO: Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked. >2018-06-26 11:09:51,060 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function, >2018-06-26 11:09:51,060 INFO: with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp", 97]:["/etc/puppet/manifests/puppet-stack-config.pp", 91] >2018-06-26 11:09:51,060 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation') >2018-06-26 11:09:51,097 INFO: Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-06-26 11:09:51,097 INFO: (file & line not available) >2018-06-26 11:09:51,325 INFO: Warning: ModuleLoader: module 'keystone' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-06-26 11:09:51,325 INFO: (file & line not available) >2018-06-26 11:09:51,730 INFO: Warning: ModuleLoader: module 'glance' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-06-26 11:09:51,730 INFO: (file & line not available) >2018-06-26 11:09:51,918 INFO: Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules >2018-06-26 11:09:51,918 INFO: (file & line not available) >2018-06-26 11:09:52,058 INFO: Warning: Unknown variable: '::nova::db::mysql_api::setup_cell0'. at /etc/puppet/modules/nova/manifests/db/mysql.pp:53:28 >2018-06-26 11:09:52,088 INFO: Warning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-06-26 11:09:52,088 INFO: (file & line not available) >2018-06-26 11:09:52,627 INFO: Warning: ModuleLoader: module 'heat' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-06-26 11:09:52,628 INFO: (file & line not available) >2018-06-26 11:09:52,674 INFO: Warning: ModuleLoader: module 'ironic' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-06-26 11:09:52,674 INFO: (file & line not available) >2018-06-26 11:09:52,789 INFO: Warning: ModuleLoader: module 'swift' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-06-26 11:09:52,789 INFO: (file & line not available) >2018-06-26 11:09:53,043 INFO: Warning: Scope(Class[Keystone]): keystone::rabbit_host, keystone::rabbit_hosts, keystone::rabbit_password, keystone::rabbit_port, keystone::rabbit_userid and keystone::rabbit_virtual_host are deprecated. 
Please use keystone::default_transport_url instead. >2018-06-26 11:09:54,251 INFO: Warning: Scope(Class[Glance::Notify::Rabbitmq]): glance::notify::rabbitmq::rabbit_host, glance::notify::rabbitmq::rabbit_hosts, glance::notify::rabbitmq::rabbit_password, glance::notify::rabbitmq::rabbit_port, glance::notify::rabbitmq::rabbit_userid and glance::notify::rabbitmq::rabbit_virtual_host are deprecated. Please use glance::notify::rabbitmq::default_transport_url instead. >2018-06-26 11:09:54,322 INFO: Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release >2018-06-26 11:09:54,322 INFO: Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release >2018-06-26 11:09:54,556 INFO: Warning: ModuleLoader: module 'cinder' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-06-26 11:09:54,556 INFO: (file & line not available) >2018-06-26 11:09:54,843 INFO: Warning: Unknown variable: 'until_complete_real'. at /etc/puppet/modules/nova/manifests/cron/archive_deleted_rows.pp:77:82 >2018-06-26 11:09:54,877 INFO: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/nova/manifests/scheduler/filter.pp", 140]:["/etc/puppet/manifests/puppet-stack-config.pp", 389] >2018-06-26 11:09:54,877 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation') >2018-06-26 11:09:55,001 INFO: Warning: Scope(Class[Neutron]): neutron::rabbit_host, neutron::rabbit_hosts, neutron::rabbit_password, neutron::rabbit_port, neutron::rabbit_user, neutron::rabbit_virtual_host and neutron::rpc_backend are deprecated. 
Please use neutron::default_transport_url instead. >2018-06-26 11:09:55,866 INFO: Warning: Unknown variable: 'methods_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:100:56 >2018-06-26 11:09:55,866 INFO: Warning: Unknown variable: 'incoming_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:101:56 >2018-06-26 11:09:55,866 INFO: Warning: Unknown variable: 'incoming_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:102:56 >2018-06-26 11:09:55,867 INFO: Warning: Unknown variable: 'outgoing_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:103:56 >2018-06-26 11:09:55,867 INFO: Warning: Unknown variable: 'outgoing_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:104:56 >2018-06-26 11:09:55,923 INFO: Warning: Scope(Class[Swift::Storage::All]): The default port for the object storage server has changed from 6000 to 6200 and will be changed in a later release >2018-06-26 11:09:55,923 INFO: Warning: Scope(Class[Swift::Storage::All]): The default port for the container storage server has changed from 6001 to 6201 and will be changed in a later release >2018-06-26 11:09:55,924 INFO: Warning: Scope(Class[Swift::Storage::All]): The default port for the account storage server has changed from 6002 to 6202 and will be changed in a later release >2018-06-26 11:09:56,331 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function, >2018-06-26 11:09:56,332 INFO: with Stdlib::Compat::Integer. There is further documentation for validate_legacy function in the README. 
at ["/etc/puppet/modules/heat/manifests/wsgi/apache_api_cfn.pp", 125]:["/etc/puppet/manifests/puppet-stack-config.pp", 510] >2018-06-26 11:09:56,332 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')[0m >2018-06-26 11:09:56,654 INFO: [1;33mWarning: Unknown variable: '::ironic::conductor::swift_account'. at /etc/puppet/modules/ironic/manifests/glance.pp:117:30[0m >2018-06-26 11:09:56,654 INFO: [1;33mWarning: Unknown variable: '::ironic::conductor::swift_temp_url_key'. at /etc/puppet/modules/ironic/manifests/glance.pp:118:35[0m >2018-06-26 11:09:56,655 INFO: [1;33mWarning: Unknown variable: '::ironic::conductor::swift_temp_url_duration'. at /etc/puppet/modules/ironic/manifests/glance.pp:119:40[0m >2018-06-26 11:09:56,675 INFO: [1;33mWarning: Unknown variable: '::ironic::api::neutron_url'. at /etc/puppet/modules/ironic/manifests/neutron.pp:58:29[0m >2018-06-26 11:09:57,414 INFO: [1;33mWarning: ModuleLoader: module 'mistral' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-06-26 11:09:57,414 INFO: (file & line not available)[0m >2018-06-26 11:09:57,458 INFO: [1;33mWarning: Unknown variable: '::mistral::database_idle_timeout'. at /etc/puppet/modules/mistral/manifests/db.pp:57:40[0m >2018-06-26 11:09:57,459 INFO: [1;33mWarning: Unknown variable: '::mistral::database_min_pool_size'. at /etc/puppet/modules/mistral/manifests/db.pp:58:40[0m >2018-06-26 11:09:57,459 INFO: [1;33mWarning: Unknown variable: '::mistral::database_max_pool_size'. at /etc/puppet/modules/mistral/manifests/db.pp:59:40[0m >2018-06-26 11:09:57,460 INFO: [1;33mWarning: Unknown variable: '::mistral::database_max_retries'. at /etc/puppet/modules/mistral/manifests/db.pp:60:40[0m >2018-06-26 11:09:57,460 INFO: [1;33mWarning: Unknown variable: '::mistral::database_retry_interval'. 
at /etc/puppet/modules/mistral/manifests/db.pp:61:40[0m >2018-06-26 11:09:57,461 INFO: [1;33mWarning: Unknown variable: '::mistral::database_max_overflow'. at /etc/puppet/modules/mistral/manifests/db.pp:62:40[0m >2018-06-26 11:09:57,505 INFO: [1;33mWarning: Scope(Class[Mistral]): mistral::rabbit_host, mistral::rabbit_hosts, mistral::rabbit_password, mistral::rabbit_port, mistral::rabbit_userid, mistral::rabbit_virtual_host and mistral::rpc_backend are deprecated. Please use mistral::default_transport_url instead.[0m >2018-06-26 11:09:57,688 INFO: [1;33mWarning: ModuleLoader: module 'zaqar' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-06-26 11:09:57,688 INFO: (file & line not available)[0m >2018-06-26 11:09:58,546 INFO: [1;33mWarning: ModuleLoader: module 'oslo' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules >2018-06-26 11:09:58,546 INFO: (file & line not available)[0m >2018-06-26 11:09:58,640 INFO: [1;33mWarning: Scope(Oslo::Messaging::Rabbit[keystone_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url. Please use oslo::messaging::default::transport_url instead.[0m >2018-06-26 11:09:59,385 INFO: [1;33mWarning: Scope(Oslo::Messaging::Rabbit[glance_api_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url. 
Please use oslo::messaging::default::transport_url instead.[0m >2018-06-26 11:09:59,395 INFO: [1;33mWarning: Scope(Oslo::Messaging::Rabbit[glance_registry_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url. Please use oslo::messaging::default::transport_url instead.[0m >2018-06-26 11:09:59,583 INFO: [1;33mWarning: Scope(Oslo::Messaging::Rabbit[neutron_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url. Please use oslo::messaging::default::transport_url instead.[0m >2018-06-26 11:09:59,627 INFO: [1;33mWarning: Scope(Neutron::Plugins::Ml2::Type_driver[local]): local type_driver is useful only for single-box, because it provides no connectivity between hosts[0m >2018-06-26 11:10:00,087 INFO: [1;33mWarning: Scope(Oslo::Messaging::Rabbit[mistral_config]): The oslo_messaging rabbit_host, rabbit_hosts, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host parameters have been deprecated by the [DEFAULT]\transport_url. 
Please use oslo::messaging::default::transport_url instead.[0m >2018-06-26 11:10:03,605 INFO: [mNotice: Compiled catalog for facebook.local.com in environment production in 13.82 seconds[0m >2018-06-26 11:10:11,581 INFO: [mNotice: /Stage[setup]/Tripleo::Network::Os_net_config/Exec[os-net-config]/returns: executed successfully[0m >2018-06-26 11:10:11,601 INFO: [mNotice: /Stage[setup]/Tripleo::Network::Os_net_config/Exec[trigger-keepalived-restart]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:10:18,682 INFO: [mNotice: /Stage[main]/Ironic::Drivers::Pxe/Ironic_config[pxe/ipxe_enabled]/value: value changed 'False' to 'True'[0m >2018-06-26 11:10:18,706 INFO: [mNotice: /Stage[main]/Ironic::Drivers::Pxe/Ironic_config[pxe/pxe_bootfile_name]/value: value changed 'pxelinux.0' to 'undionly.kpxe'[0m >2018-06-26 11:10:18,721 INFO: [mNotice: /Stage[main]/Ironic::Drivers::Pxe/Ironic_config[pxe/pxe_config_template]/value: value changed '$pybasedir/drivers/modules/pxe_config.template' to '$pybasedir/drivers/modules/ipxe_config.template'[0m >2018-06-26 11:10:19,063 INFO: [mNotice: /Stage[main]/Ironic::Inspector/File[/etc/ironic-inspector/dnsmasq.conf]/content: content changed '{md5}1aa5d8d4ff7a17016e4f4afa2ac0f621' to '{md5}96c209a9c0301fca1fec11f4204382f0'[0m >2018-06-26 11:10:19,152 INFO: [mNotice: /Stage[main]/Ironic::Inspector/File[/httpboot/inspector.ipxe]/ensure: defined content as '{md5}049edb896ee61efada7a41ca48e50858'[0m >2018-06-26 11:10:49,149 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic::config::end]: Triggered 'refresh' from 3 events[0m >2018-06-26 11:10:52,411 INFO: [mNotice: /Stage[main]/Glance::Db::Sync/Exec[glance-manage db_sync]/returns: executed successfully[0m >2018-06-26 11:10:52,412 INFO: [mNotice: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:10:53,490 INFO: [mNotice: /Stage[main]/Glance::Db::Metadefs/Exec[glance-manage db_load_metadefs]: Triggered 'refresh' from 1 events[0m 
>2018-06-26 11:10:56,338 INFO: [mNotice: /Stage[main]/Nova::Db::Sync_api/Exec[nova-db-sync-api]/returns: executed successfully[0m >2018-06-26 11:10:56,339 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:10:56,339 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:10:56,340 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:10:59,399 INFO: [mNotice: /Stage[main]/Nova::Cell_v2::Map_cell0/Exec[nova-cell_v2-map_cell0]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:10:59,400 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:10:59,401 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]: Triggered 'refresh' from 2 events[0m >2018-06-26 11:11:02,274 INFO: [mNotice: /Stage[main]/Nova::Db::Sync/Exec[nova-db-sync]/returns: executed successfully[0m >2018-06-26 11:11:05,187 INFO: [mNotice: /Stage[main]/Nova::Db::Sync/Exec[nova-db-sync]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:05,187 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::end]: Triggered 'refresh' from 2 events[0m >2018-06-26 11:11:05,188 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::service::begin]: Triggered 'refresh' from 2 events[0m >2018-06-26 11:11:10,863 INFO: [mNotice: /Stage[main]/Nova::Conductor/Nova::Generic_service[conductor]/Service[nova-conductor]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:16,562 INFO: [mNotice: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Service[nova-scheduler]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:17,826 INFO: [mNotice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: executed successfully[0m >2018-06-26 11:11:17,827 INFO: [mNotice: 
/Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:17,828 INFO: [mNotice: /Stage[main]/Neutron::Deps/Anchor[neutron::service::begin]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:19,208 INFO: [mNotice: /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:19,338 INFO: [mNotice: /Stage[main]/Neutron::Agents::L3/Service[neutron-l3]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:21,510 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-service]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:22,804 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-destroy-patch-ports-service]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:23,789 INFO: [mNotice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]/returns: executed successfully[0m >2018-06-26 11:11:23,791 INFO: [mNotice: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:23,792 INFO: [mNotice: /Stage[main]/Heat::Deps/Anchor[heat::service::begin]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:23,897 INFO: [mNotice: /Stage[main]/Heat::Api/Service[heat-api]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:23,993 INFO: [mNotice: /Stage[main]/Heat::Api_cfn/Service[heat-api-cfn]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:24,268 INFO: [mNotice: /Stage[main]/Heat::Engine/Service[heat-engine]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:25,291 INFO: [mNotice: /Stage[main]/Ironic::Db::Sync/Exec[ironic-dbsync]/returns: executed successfully[0m >2018-06-26 11:11:26,333 INFO: [mNotice: /Stage[main]/Ironic::Db::Sync/Exec[ironic-dbsync]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:26,334 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic::dbsync::end]: Triggered 'refresh' from 2 events[0m >2018-06-26 11:11:26,335 INFO: 
[mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic::db_online_data_migrations::begin]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:28,834 INFO: [mNotice: /Stage[main]/Ironic::Db::Online_data_migrations/Exec[ironic-db-online-data-migrations]/returns: executed successfully[0m >2018-06-26 11:11:31,511 INFO: [mNotice: /Stage[main]/Ironic::Db::Online_data_migrations/Exec[ironic-db-online-data-migrations]: Triggered 'refresh' from 3 events[0m >2018-06-26 11:11:31,512 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic::db_online_data_migrations::end]: Triggered 'refresh' from 2 events[0m >2018-06-26 11:11:31,513 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic::service::begin]: Triggered 'refresh' from 2 events[0m >2018-06-26 11:11:31,614 INFO: [mNotice: /Stage[main]/Ironic::Api/Service[ironic-api]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:38,539 INFO: [mNotice: /Stage[main]/Ironic::Conductor/Service[ironic-conductor]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:38,541 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic::service::end]: Triggered 'refresh' from 2 events[0m >2018-06-26 11:11:41,696 INFO: [mNotice: /Stage[main]/Mistral::Db::Sync/Exec[mistral-db-sync]/returns: executed successfully[0m >2018-06-26 11:11:43,845 INFO: [mNotice: /Stage[main]/Keystone::Db::Sync/Exec[keystone-manage db_sync]/returns: executed successfully[0m >2018-06-26 11:11:43,847 INFO: [mNotice: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:45,413 INFO: [mNotice: /Stage[main]/Keystone/Exec[keystone-manage bootstrap]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:11:45,414 INFO: [mNotice: /Stage[main]/Keystone::Deps/Anchor[keystone::service::begin]: Triggered 'refresh' from 2 events[0m >2018-06-26 11:12:11,747 INFO: [mNotice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Triggered 'refresh' from 3 events[0m >2018-06-26 11:12:56,470 INFO: [mNotice: 
/Stage[main]/Neutron::Server/Service[neutron-server]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:13:01,156 INFO: [mNotice: /Stage[main]/Glance::Deps/Anchor[glance::service::begin]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:13:27,441 INFO: [mNotice: /Stage[main]/Nova::Api/Nova::Generic_service[api]/Service[nova-api]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:13:42,422 INFO: [mNotice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Service[ironic-neutron-agent-service]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:13:42,424 INFO: [mNotice: /Stage[main]/Neutron::Deps/Anchor[neutron::service::end]: Triggered 'refresh' from 6 events[0m >2018-06-26 11:13:42,621 INFO: [mNotice: /Stage[main]/Ironic::Inspector/Service[ironic-inspector-dnsmasq]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:13:42,623 INFO: [mNotice: /Stage[main]/Ironic::Deps/Anchor[ironic-inspector::service::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:13:51,071 INFO: [mNotice: /Stage[main]/Mistral::Db::Sync/Exec[mistral-db-populate]/returns: executed successfully[0m >2018-06-26 11:13:59,809 INFO: [mNotice: /Stage[main]/Mistral::Db::Sync/Exec[mistral-db-populate]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:13:59,810 INFO: [mNotice: /Stage[main]/Mistral::Deps/Anchor[mistral::dbsync::end]: Triggered 'refresh' from 3 events[0m >2018-06-26 11:13:59,811 INFO: [mNotice: /Stage[main]/Mistral::Deps/Anchor[mistral::service::begin]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:14:00,079 INFO: [mNotice: /Stage[main]/Mistral::Api/Service[mistral-api]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:14:03,679 INFO: [mNotice: /Stage[main]/Mistral::Engine/Service[mistral-engine]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:14:04,195 INFO: [mNotice: /Stage[main]/Mistral::Executor/Service[mistral-executor]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:14:04,196 INFO: [mNotice: 
/Stage[main]/Mistral::Deps/Anchor[mistral::service::end]: Triggered 'refresh' from 3 events[0m >2018-06-26 11:14:11,869 INFO: [mNotice: /Stage[main]/Nova::Compute/Nova::Generic_service[compute]/Service[nova-compute]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:14:11,871 INFO: [mNotice: /Stage[main]/Nova::Deps/Anchor[nova::service::end]: Triggered 'refresh' from 4 events[0m >2018-06-26 11:14:14,908 INFO: [mNotice: /Stage[main]/Nova::Cell_v2::Discover_hosts/Exec[nova-cell_v2-discover_hosts]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:14:16,113 INFO: [mNotice: /Stage[main]/Glance::Api/Service[glance-api]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:14:16,116 INFO: [mNotice: /Stage[main]/Glance::Deps/Anchor[glance::service::end]: Triggered 'refresh' from 1 events[0m >2018-06-26 11:14:18,551 INFO: [mNotice: Applied catalog in 250.65 seconds[0m >2018-06-26 11:14:18,609 INFO: Changes: >2018-06-26 11:14:18,609 INFO: Total: 16 >2018-06-26 11:14:18,609 INFO: Events: >2018-06-26 11:14:18,609 INFO: Success: 16 >2018-06-26 11:14:18,610 INFO: Total: 16 >2018-06-26 11:14:18,610 INFO: Resources: >2018-06-26 11:14:18,610 INFO: Corrective change: 11 >2018-06-26 11:14:18,610 INFO: Changed: 16 >2018-06-26 11:14:18,610 INFO: Out of sync: 16 >2018-06-26 11:14:18,610 INFO: Total: 2768 >2018-06-26 11:14:18,610 INFO: Restarted: 58 >2018-06-26 11:14:18,610 INFO: Time: >2018-06-26 11:14:18,610 INFO: Filebucket: 0.00 >2018-06-26 11:14:18,610 INFO: Policy rcd: 0.00 >2018-06-26 11:14:18,610 INFO: Archive: 0.00 >2018-06-26 11:14:18,610 INFO: Nova cell v2: 0.00 >2018-06-26 11:14:18,611 INFO: Schedule: 0.00 >2018-06-26 11:14:18,611 INFO: Keystone domain: 0.00 >2018-06-26 11:14:18,611 INFO: Sysctl: 0.00 >2018-06-26 11:14:18,611 INFO: Mysql datadir: 0.00 >2018-06-26 11:14:18,611 INFO: Group: 0.00 >2018-06-26 11:14:18,611 INFO: Sysctl runtime: 0.00 >2018-06-26 11:14:18,611 INFO: Keystone tenant: 0.00 >2018-06-26 11:14:18,611 INFO: Keystone role: 0.00 >2018-06-26 11:14:18,611 
INFO: Neutron api config: 0.00 >2018-06-26 11:14:18,611 INFO: Resources: 0.00 >2018-06-26 11:14:18,611 INFO: Swift config: 0.00 >2018-06-26 11:14:18,611 INFO: Cron: 0.00 >2018-06-26 11:14:18,612 INFO: Glance swift config: 0.00 >2018-06-26 11:14:18,612 INFO: Mysql database: 0.00 >2018-06-26 11:14:18,612 INFO: User: 0.00 >2018-06-26 11:14:18,612 INFO: Concat file: 0.00 >2018-06-26 11:14:18,612 INFO: Nova paste api ini: 0.00 >2018-06-26 11:14:18,612 INFO: Swift object expirer config: 0.00 >2018-06-26 11:14:18,612 INFO: Keystone service: 0.00 >2018-06-26 11:14:18,612 INFO: Mysql grant: 0.00 >2018-06-26 11:14:18,612 INFO: Ironic neutron agent config: 0.00 >2018-06-26 11:14:18,613 INFO: Keystone endpoint: 0.01 >2018-06-26 11:14:18,613 INFO: Mysql user: 0.01 >2018-06-26 11:14:18,613 INFO: Concat fragment: 0.01 >2018-06-26 11:14:18,613 INFO: Neutron l3 agent config: 0.01 >2018-06-26 11:14:18,613 INFO: Anchor: 0.01 >2018-06-26 11:14:18,613 INFO: Neutron dhcp agent config: 0.02 >2018-06-26 11:14:18,613 INFO: Neutron agent ovs: 0.02 >2018-06-26 11:14:18,613 INFO: Vs bridge: 0.05 >2018-06-26 11:14:18,613 INFO: Mistral config: 0.06 >2018-06-26 11:14:18,614 INFO: Neutron plugin ml2: 0.20 >2018-06-26 11:14:18,614 INFO: Ring account device: 0.27 >2018-06-26 11:14:18,614 INFO: Ring container device: 0.28 >2018-06-26 11:14:18,614 INFO: Ring object device: 0.29 >2018-06-26 11:14:18,614 INFO: Firewall: 0.29 >2018-06-26 11:14:18,614 INFO: Glance registry config: 0.44 >2018-06-26 11:14:18,614 INFO: Glance cache config: 0.53 >2018-06-26 11:14:18,614 INFO: Augeas: 0.57 >2018-06-26 11:14:18,614 INFO: Ironic inspector config: 0.61 >2018-06-26 11:14:18,614 INFO: Swift proxy config: 0.63 >2018-06-26 11:14:18,614 INFO: Zaqar config: 0.96 >2018-06-26 11:14:18,615 INFO: Package: 1.15 >2018-06-26 11:14:18,615 INFO: Rabbitmq plugin: 1.22 >2018-06-26 11:14:18,615 INFO: Keystone config: 1.51 >2018-06-26 11:14:18,615 INFO: File: 1.56 >2018-06-26 11:14:18,615 INFO: Nova config: 14.39 >2018-06-26 
11:14:18,615 INFO: Total: 149.76 >2018-06-26 11:14:18,615 INFO: Last run: 1529991858 >2018-06-26 11:14:18,615 INFO: Config retrieval: 17.85 >2018-06-26 11:14:18,615 INFO: Heat config: 2.08 >2018-06-26 11:14:18,615 INFO: Neutron config: 2.10 >2018-06-26 11:14:18,615 INFO: Glance api config: 2.32 >2018-06-26 11:14:18,616 INFO: Service: 2.93 >2018-06-26 11:14:18,616 INFO: Exec: 24.71 >2018-06-26 11:14:18,616 INFO: Keystone user role: 25.65 >2018-06-26 11:14:18,616 INFO: Ironic config: 4.69 >2018-06-26 11:14:18,616 INFO: Keystone user: 42.32 >2018-06-26 11:14:18,616 INFO: Version: >2018-06-26 11:14:18,616 INFO: Config: 1529991589 >2018-06-26 11:14:18,616 INFO: Puppet: 4.8.2 >2018-06-26 11:14:28,668 INFO: + rc=2 >2018-06-26 11:14:28,668 INFO: + set -e >2018-06-26 11:14:28,669 INFO: + echo 'puppet apply exited with exit code 2' >2018-06-26 11:14:28,669 INFO: puppet apply exited with exit code 2 >2018-06-26 11:14:28,669 INFO: + '[' 2 '!=' 2 -a 2 '!=' 0 ']' >2018-06-26 11:14:28,671 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 2018 50-puppet-stack-config completed >2018-06-26 11:14:28,673 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 2018 ----------------------- PROFILING ----------------------- >2018-06-26 11:14:28,674 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 2018 >2018-06-26 11:14:28,677 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 2018 Target: configure.d >2018-06-26 11:14:28,678 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 2018 >2018-06-26 11:14:28,679 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 2018 Script Seconds >2018-06-26 11:14:28,680 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 2018 --------------------------------------- ---------- >2018-06-26 11:14:28,681 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 2018 >2018-06-26 11:14:28,689 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 2018 10-hiera-disable 0.003 >2018-06-26 11:14:28,695 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 2018 20-os-apply-config 0.161 >2018-06-26 11:14:28,700 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 
2018 30-reload-keepalived 0.012 >2018-06-26 11:14:28,705 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 2018 40-hiera-datafiles 0.157 >2018-06-26 11:14:28,710 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 2018 50-puppet-stack-config 284.001 >2018-06-26 11:14:28,712 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 2018 >2018-06-26 11:14:28,713 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 2018 --------------------- END PROFILING --------------------- >2018-06-26 11:14:28,714 INFO: [2018-06-26 11:14:28,714] (os-refresh-config) [INFO] Completed phase configure >2018-06-26 11:14:28,714 INFO: [2018-06-26 11:14:28,714] (os-refresh-config) [INFO] Starting phase post-configure >2018-06-26 11:14:28,725 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 2018 Running /usr/libexec/os-refresh-config/post-configure.d/10-iptables >2018-06-26 11:14:28,727 INFO: + set -o pipefail >2018-06-26 11:14:28,727 INFO: + EXTERNAL_BRIDGE=br-ctlplane >2018-06-26 11:14:28,728 INFO: + iptables -w -t nat -C PREROUTING -d 169.254.169.254/32 -i br-ctlplane -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8775 >2018-06-26 11:14:28,732 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 2018 10-iptables completed >2018-06-26 11:14:28,733 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 2018 Running /usr/libexec/os-refresh-config/post-configure.d/80-seedstack-masquerade >2018-06-26 11:14:28,735 INFO: + RULES_SCRIPT=/var/opt/undercloud-stack/masquerade >2018-06-26 11:14:28,735 INFO: + . /var/opt/undercloud-stack/masquerade >2018-06-26 11:14:28,736 INFO: ++ IPTCOMMAND=iptables >2018-06-26 11:14:28,736 INFO: ++ [[ 192.0.3.1 =~ : ]] >2018-06-26 11:14:28,736 INFO: ++ iptables -w -t nat -F BOOTSTACK_MASQ_NEW >2018-06-26 11:14:28,737 INFO: iptables: No chain/target/match by that name. 
>2018-06-26 11:14:28,737 INFO: ++ true >2018-06-26 11:14:28,737 INFO: ++ iptables -w -t nat -D POSTROUTING -j BOOTSTACK_MASQ_NEW >2018-06-26 11:14:28,738 INFO: iptables v1.4.21: Couldn't load target `BOOTSTACK_MASQ_NEW':No such file or directory >2018-06-26 11:14:28,738 INFO: >2018-06-26 11:14:28,738 INFO: Try `iptables -h' or 'iptables --help' for more information. >2018-06-26 11:14:28,738 INFO: ++ true >2018-06-26 11:14:28,739 INFO: ++ iptables -w -t nat -X BOOTSTACK_MASQ_NEW >2018-06-26 11:14:28,739 INFO: iptables: No chain/target/match by that name. >2018-06-26 11:14:28,739 INFO: ++ true >2018-06-26 11:14:28,739 INFO: ++ iptables -w -t nat -N BOOTSTACK_MASQ_NEW >2018-06-26 11:14:28,741 INFO: ++ NETWORK=192.0.3.0/24 >2018-06-26 11:14:28,741 INFO: ++ NETWORKS=192.0.3.0/24, >2018-06-26 11:14:28,741 INFO: ++ NETWORKS=192.0.3.0/24 >2018-06-26 11:14:28,741 INFO: ++ iptables -w -t nat -A BOOTSTACK_MASQ_NEW -s 192.0.3.0/24 -d 192.0.3.0/24 -j RETURN >2018-06-26 11:14:28,742 INFO: ++ iptables -w -t nat -A BOOTSTACK_MASQ_NEW -s 192.0.3.0/24 -j MASQUERADE >2018-06-26 11:14:28,744 INFO: ++ iptables -w -t nat -I POSTROUTING -j BOOTSTACK_MASQ_NEW >2018-06-26 11:14:28,745 INFO: ++ iptables -w -t nat -F BOOTSTACK_MASQ >2018-06-26 11:14:28,746 INFO: ++ iptables -w -t nat -D POSTROUTING -j BOOTSTACK_MASQ >2018-06-26 11:14:28,747 INFO: ++ iptables -w -t nat -X BOOTSTACK_MASQ >2018-06-26 11:14:28,749 INFO: ++ iptables -w -t nat -E BOOTSTACK_MASQ_NEW BOOTSTACK_MASQ >2018-06-26 11:14:28,750 INFO: ++ iptables -w -D FORWARD -j REJECT --reject-with icmp-host-prohibited >2018-06-26 11:14:28,751 INFO: iptables: No chain/target/match by that name. 
>2018-06-26 11:14:28,751 INFO: ++ true >2018-06-26 11:14:28,751 INFO: + iptables-save >2018-06-26 11:14:28,754 INFO: + /bin/test -f /etc/sysconfig/iptables >2018-06-26 11:14:28,755 INFO: + /bin/grep -q neutron- /etc/sysconfig/iptables >2018-06-26 11:14:28,756 INFO: + /bin/sed -i /neutron-/d /etc/sysconfig/iptables >2018-06-26 11:14:28,758 INFO: + /bin/test -f /etc/sysconfig/ip6tables >2018-06-26 11:14:28,759 INFO: + /bin/grep -q neutron- /etc/sysconfig/ip6tables >2018-06-26 11:14:28,760 INFO: + /bin/test -f /etc/sysconfig/iptables >2018-06-26 11:14:28,761 INFO: + /bin/grep -v '\-m comment \--comment' /etc/sysconfig/iptables >2018-06-26 11:14:28,761 INFO: + /bin/grep -q ironic-inspector >2018-06-26 11:14:28,762 INFO: + /bin/test -f /etc/sysconfig/ip6tables >2018-06-26 11:14:28,763 INFO: + /bin/grep -q ironic-inspector >2018-06-26 11:14:28,763 INFO: + /bin/grep -v '\-m comment \--comment' /etc/sysconfig/ip6tables >2018-06-26 11:14:28,766 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 2018 80-seedstack-masquerade completed >2018-06-26 11:14:28,767 INFO: dib-run-parts Tue Jun 26 11:14:28 IST 2018 Running /usr/libexec/os-refresh-config/post-configure.d/98-undercloud-setup >2018-06-26 11:14:28,769 INFO: + source /root/tripleo-undercloud-passwords >2018-06-26 11:14:28,770 INFO: +++ sudo hiera admin_password >2018-06-26 11:14:28,852 INFO: ++ UNDERCLOUD_ADMIN_PASSWORD=password >2018-06-26 11:14:28,852 INFO: +++ sudo hiera keystone::admin_token >2018-06-26 11:14:28,931 INFO: ++ UNDERCLOUD_ADMIN_TOKEN=793411a45b5d715032738018d72e1b026ef47233 >2018-06-26 11:14:28,932 INFO: +++ sudo hiera ceilometer::metering_secret >2018-06-26 11:14:29,006 INFO: ++ UNDERCLOUD_CEILOMETER_METERING_SECRET=41b3e67e5f6dd821e4388ace4dd9bdf520440d2d >2018-06-26 11:14:29,006 INFO: +++ sudo hiera ceilometer::keystone::authtoken::password >2018-06-26 11:14:29,081 INFO: ++ UNDERCLOUD_CEILOMETER_PASSWORD=7acbbec3c68af1fcfeb044ff7532d21028ada2a8 >2018-06-26 11:14:29,081 INFO: +++ sudo hiera 
snmpd_readonly_user_password >2018-06-26 11:14:29,153 INFO: ++ UNDERCLOUD_CEILOMETER_SNMPD_PASSWORD=nil >2018-06-26 11:14:29,153 INFO: +++ sudo hiera snmpd_readonly_user_name >2018-06-26 11:14:29,222 INFO: ++ UNDERCLOUD_CEILOMETER_SNMPD_USER=nil >2018-06-26 11:14:29,222 INFO: +++ sudo hiera admin_password >2018-06-26 11:14:29,293 INFO: ++ UNDERCLOUD_DB_PASSWORD=password >2018-06-26 11:14:29,293 INFO: +++ sudo hiera glance::api::keystone_password >2018-06-26 11:14:29,364 INFO: ++ UNDERCLOUD_GLANCE_PASSWORD=nil >2018-06-26 11:14:29,365 INFO: +++ sudo hiera tripleo::haproxy::haproxy_stats_password >2018-06-26 11:14:29,433 INFO: ++ UNDERCLOUD_HAPROXY_STATS_PASSWORD=4569a85c881d756464be7c58da648cbf47525c5f >2018-06-26 11:14:29,434 INFO: +++ sudo hiera heat::engine::auth_encryption_key >2018-06-26 11:14:29,505 INFO: ++ UNDERCLOUD_HEAT_ENCRYPTION_KEY=e0c341aba57c764d8fe1f87be3bd740a >2018-06-26 11:14:29,505 INFO: +++ sudo hiera heat::keystone_password >2018-06-26 11:14:29,579 INFO: ++ UNDERCLOUD_HEAT_PASSWORD=nil >2018-06-26 11:14:29,579 INFO: +++ sudo hiera heat_stack_domain_admin_password >2018-06-26 11:14:29,650 INFO: ++ UNDERCLOUD_HEAT_STACK_DOMAIN_ADMIN_PASSWORD=64eb19a5abf28775789f9559dfe55300603ae9d2 >2018-06-26 11:14:29,650 INFO: +++ sudo hiera horizon_secret_key >2018-06-26 11:14:29,720 INFO: ++ UNDERCLOUD_HORIZON_SECRET_KEY=6db597390a5629fe004c362dbd964476dcc43bdb >2018-06-26 11:14:29,721 INFO: +++ sudo hiera ironic::api::authtoken::password >2018-06-26 11:14:29,793 INFO: ++ UNDERCLOUD_IRONIC_PASSWORD=1525d9a67d1b63f0360b92976cc2c4f999f80e98 >2018-06-26 11:14:29,793 INFO: +++ sudo hiera neutron::server::auth_password >2018-06-26 11:14:29,862 INFO: ++ UNDERCLOUD_NEUTRON_PASSWORD=nil >2018-06-26 11:14:29,863 INFO: +++ sudo hiera nova::keystone::authtoken::password >2018-06-26 11:14:29,934 INFO: ++ UNDERCLOUD_NOVA_PASSWORD=0dc6868d8eb5b67438581a33f6bfec9e2983a47d >2018-06-26 11:14:29,935 INFO: +++ sudo hiera rabbit_cookie >2018-06-26 11:14:30,004 INFO: ++ 
UNDERCLOUD_RABBIT_COOKIE=0631ad8d93548cfcad81459e26a0af979537eb83 >2018-06-26 11:14:30,004 INFO: +++ sudo hiera rabbit_password >2018-06-26 11:14:30,074 INFO: ++ UNDERCLOUD_RABBIT_PASSWORD=nil >2018-06-26 11:14:30,074 INFO: +++ sudo hiera rabbit_username >2018-06-26 11:14:30,145 INFO: ++ UNDERCLOUD_RABBIT_USERNAME=nil >2018-06-26 11:14:30,145 INFO: +++ sudo hiera swift::swift_hash_suffix >2018-06-26 11:14:30,211 INFO: ++ UNDERCLOUD_SWIFT_HASH_SUFFIX=nil >2018-06-26 11:14:30,212 INFO: +++ sudo hiera swift::proxy::authtoken::admin_password >2018-06-26 11:14:30,281 INFO: ++ UNDERCLOUD_SWIFT_PASSWORD=nil >2018-06-26 11:14:30,282 INFO: +++ sudo hiera mistral::admin_password >2018-06-26 11:14:30,348 INFO: ++ UNDERCLOUD_MISTRAL_PASSWORD=nil >2018-06-26 11:14:30,348 INFO: +++ sudo hiera zaqar::keystone::authtoken::password >2018-06-26 11:14:30,418 INFO: ++ UNDERCLOUD_ZAQAR_PASSWORD=09fcaec3a1bba72d8515c0678c5a98096f66b972 >2018-06-26 11:14:30,418 INFO: +++ sudo hiera cinder::keystone::authtoken::password >2018-06-26 11:14:30,485 INFO: ++ UNDERCLOUD_CINDER_PASSWORD=d6ce45db7dfeaaea3e3c6f1d238229cf54a3b924 >2018-06-26 11:14:30,485 INFO: + source /root/stackrc >2018-06-26 11:14:30,485 INFO: +++ set >2018-06-26 11:14:30,485 INFO: +++ awk '{FS="="} /^OS_/ {print $1}' >2018-06-26 11:14:30,487 INFO: ++ NOVA_VERSION=1.1 >2018-06-26 11:14:30,487 INFO: ++ export NOVA_VERSION >2018-06-26 11:14:30,487 INFO: ++ OS_PASSWORD=password >2018-06-26 11:14:30,487 INFO: ++ export OS_PASSWORD >2018-06-26 11:14:30,487 INFO: ++ OS_AUTH_TYPE=password >2018-06-26 11:14:30,488 INFO: ++ export OS_AUTH_TYPE >2018-06-26 11:14:30,488 INFO: ++ OS_AUTH_URL=http://192.0.3.1:5000/ >2018-06-26 11:14:30,488 INFO: ++ export OS_AUTH_URL >2018-06-26 11:14:30,488 INFO: ++ OS_USERNAME=admin >2018-06-26 11:14:30,488 INFO: ++ OS_PROJECT_NAME=admin >2018-06-26 11:14:30,488 INFO: ++ COMPUTE_API_VERSION=1.1 >2018-06-26 11:14:30,488 INFO: ++ IRONIC_API_VERSION=1.34 >2018-06-26 11:14:30,488 INFO: ++ 
OS_BAREMETAL_API_VERSION=1.34
>2018-06-26 11:14:30,488 INFO: ++ OS_NO_CACHE=True
>2018-06-26 11:14:30,488 INFO: ++ OS_CLOUDNAME=undercloud
>2018-06-26 11:14:30,489 INFO: ++ export OS_USERNAME
>2018-06-26 11:14:30,489 INFO: ++ export OS_PROJECT_NAME
>2018-06-26 11:14:30,489 INFO: ++ export COMPUTE_API_VERSION
>2018-06-26 11:14:30,489 INFO: ++ export IRONIC_API_VERSION
>2018-06-26 11:14:30,489 INFO: ++ export OS_BAREMETAL_API_VERSION
>2018-06-26 11:14:30,489 INFO: ++ export OS_NO_CACHE
>2018-06-26 11:14:30,489 INFO: ++ export OS_CLOUDNAME
>2018-06-26 11:14:30,489 INFO: ++ OS_IDENTITY_API_VERSION=3
>2018-06-26 11:14:30,489 INFO: ++ export OS_IDENTITY_API_VERSION
>2018-06-26 11:14:30,489 INFO: ++ OS_PROJECT_DOMAIN_NAME=Default
>2018-06-26 11:14:30,490 INFO: ++ export OS_PROJECT_DOMAIN_NAME
>2018-06-26 11:14:30,490 INFO: ++ OS_USER_DOMAIN_NAME=Default
>2018-06-26 11:14:30,490 INFO: ++ export OS_USER_DOMAIN_NAME
>2018-06-26 11:14:30,490 INFO: ++ '[' -z '' ']'
>2018-06-26 11:14:30,490 INFO: ++ export PS1=
>2018-06-26 11:14:30,490 INFO: ++ PS1=
>2018-06-26 11:14:30,490 INFO: ++ export 'PS1=${OS_CLOUDNAME:+($OS_CLOUDNAME)} '
>2018-06-26 11:14:30,490 INFO: ++ PS1='${OS_CLOUDNAME:+($OS_CLOUDNAME)} '
>2018-06-26 11:14:30,490 INFO: ++ export CLOUDPROMPT_ENABLED=1
>2018-06-26 11:14:30,490 INFO: ++ CLOUDPROMPT_ENABLED=1
>2018-06-26 11:14:30,490 INFO: + INSTACK_ROOT=
>2018-06-26 11:14:30,491 INFO: + export INSTACK_ROOT
>2018-06-26 11:14:30,491 INFO: + '[' -n '' ']'
>2018-06-26 11:14:30,491 INFO: + '[' '!' -f /root/.ssh/authorized_keys ']'
>2018-06-26 11:14:30,491 INFO: + '[' '!' -f /root/.ssh/id_rsa ']'
>2018-06-26 11:14:30,491 INFO: + cat /root/.ssh/id_rsa.pub
>2018-06-26 11:14:30,491 INFO: + '[' -e /usr/sbin/getenforce ']'
>2018-06-26 11:14:30,491 INFO: ++ getenforce
>2018-06-26 11:14:30,491 INFO: + '[' Enforcing == Enforcing ']'
>2018-06-26 11:14:30,491 INFO: + set +e
>2018-06-26 11:14:30,491 INFO: ++ find /root/.ssh/ -exec ls -lZ '{}' ';'
>2018-06-26 11:14:30,491 INFO: ++ grep -v ssh_home_t
>2018-06-26 11:14:30,498 INFO: + selinux_wrong_permission=
>2018-06-26 11:14:30,498 INFO: + set -e
>2018-06-26 11:14:30,498 INFO: + '[' -n '' ']'
>2018-06-26 11:14:30,498 INFO: ++ openstack project show admin
>2018-06-26 11:14:30,499 INFO: ++ awk '$2=="id" {print $4}'
>2018-06-26 11:14:32,344 INFO: + openstack quota set --cores -1 --instances -1 --ram -1 13835fbb8e0947a9b3fa174b9a22cdb9
>2018-06-26 11:14:41,632 INFO: + rm -rf /root/.novaclient
>2018-06-26 11:14:41,637 INFO: dib-run-parts Tue Jun 26 11:14:41 IST 2018 98-undercloud-setup completed
>2018-06-26 11:14:41,638 INFO: dib-run-parts Tue Jun 26 11:14:41 IST 2018 Running /usr/libexec/os-refresh-config/post-configure.d/99-refresh-completed
>2018-06-26 11:14:41,641 INFO: ++ os-apply-config --key completion-handle --type raw --key-default ''
>2018-06-26 11:14:41,784 INFO: [2018/06/26 11:14:41 AM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json
>2018-06-26 11:14:41,790 INFO: + HANDLE=
>2018-06-26 11:14:41,790 INFO: ++ os-apply-config --key completion-signal --type raw --key-default ''
>2018-06-26 11:14:41,946 INFO: [2018/06/26 11:14:41 AM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json
>2018-06-26 11:14:41,953 INFO: + SIGNAL=
>2018-06-26 11:14:41,954 INFO: ++ os-apply-config --key instance-id --type raw --key-default ''
>2018-06-26 11:14:42,098 INFO: [2018/06/26 11:14:42 AM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json
>2018-06-26 11:14:42,104 INFO: + ID=
>2018-06-26 11:14:42,104 INFO: + '[' -n '' ']'
>2018-06-26 11:14:42,104 INFO: + exit 0
>2018-06-26 11:14:42,106 INFO: dib-run-parts Tue Jun 26 11:14:42 IST 2018 99-refresh-completed completed
>2018-06-26 11:14:42,107 INFO: dib-run-parts Tue Jun 26 11:14:42 IST 2018 ----------------------- PROFILING -----------------------
>2018-06-26 11:14:42,108 INFO: dib-run-parts Tue Jun 26 11:14:42 IST 2018
>2018-06-26 11:14:42,110 INFO: dib-run-parts Tue Jun 26 11:14:42 IST 2018 Target: post-configure.d
>2018-06-26 11:14:42,111 INFO: dib-run-parts Tue Jun 26 11:14:42 IST 2018
>2018-06-26 11:14:42,112 INFO: dib-run-parts Tue Jun 26 11:14:42 IST 2018 Script Seconds
>2018-06-26 11:14:42,113 INFO: dib-run-parts Tue Jun 26 11:14:42 IST 2018 --------------------------------------- ----------
>2018-06-26 11:14:42,114 INFO: dib-run-parts Tue Jun 26 11:14:42 IST 2018
>2018-06-26 11:14:42,121 INFO: dib-run-parts Tue Jun 26 11:14:42 IST 2018 10-iptables 0.005
>2018-06-26 11:14:42,125 INFO: dib-run-parts Tue Jun 26 11:14:42 IST 2018 80-seedstack-masquerade 0.031
>2018-06-26 11:14:42,130 INFO: dib-run-parts Tue Jun 26 11:14:42 IST 2018 98-undercloud-setup 12.867
>2018-06-26 11:14:42,135 INFO: dib-run-parts Tue Jun 26 11:14:42 IST 2018 99-refresh-completed 0.466
>2018-06-26 11:14:42,137 INFO: dib-run-parts Tue Jun 26 11:14:42 IST 2018
>2018-06-26 11:14:42,138 INFO: dib-run-parts Tue Jun 26 11:14:42 IST 2018 --------------------- END PROFILING ---------------------
>2018-06-26 11:14:42,138 INFO: [2018-06-26 11:14:42,138] (os-refresh-config) [INFO] Completed phase post-configure
>2018-06-26 11:14:42,147 INFO: os-refresh-config completed successfully
>2018-06-26 11:14:42,347 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:5000/ -H "Accept: application/json" -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5"
>2018-06-26 11:14:42,348 DEBUG: Starting new HTTP connection (1): 192.0.3.1
>2018-06-26 11:14:42,352 DEBUG: http://192.0.3.1:5000 "GET / HTTP/1.1" 300 593
>2018-06-26 11:14:42,358 DEBUG:
RESP: [300] Date: Tue, 26 Jun 2018 05:44:42 GMT Server: Apache Vary: X-Auth-Token Content-Length: 593 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.0.3.1:5000/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.0.3.1:5000/v2.0/", "rel": "self"}, {"href": "https://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}} > >2018-06-26 11:14:42,359 DEBUG: Making authentication request to http://192.0.3.1:5000/v3/auth/tokens >2018-06-26 11:14:42,774 DEBUG: http://192.0.3.1:5000 "POST /v3/auth/tokens HTTP/1.1" 201 7993 >2018-06-26 11:14:42,776 DEBUG: {"token": {"is_domain": false, "methods": ["password"], "roles": [{"id": "a19af673dce44d89bec07da60746e8e4", "name": "admin"}], "expires_at": "2018-06-26T09:44:42.000000Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "13835fbb8e0947a9b3fa174b9a22cdb9", "name": "admin"}, "catalog": [{"endpoints": [{"url": "http://192.0.3.1:5050", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "ab5c482d7d7a4a2dbe585fd722a6ca73"}, {"url": "http://192.0.3.1:5050", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "bb4e26d4adcd460eb44821e899be9ebb"}, {"url": "http://192.0.3.1:5050", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "dcf6a9debd8f4934aa384251e7613cb5"}], "type": "baremetal-introspection", "id": "084902dec7484ca0b731c2f39c33ab52", "name": "ironic-inspector"}, {"endpoints": [{"url": "ws://192.0.3.1:9000", "interface": "internal", "region": "regionOne", 
"region_id": "regionOne", "id": "418298d93a3544ddb99bd2015af10e45"}, {"url": "ws://192.0.3.1:9000", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "4413828ebe134d8bbad9babe9f81e7c5"}, {"url": "ws://192.0.3.1:9000", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "81fac1a734154da88c398e772f6e7cb3"}], "type": "messaging-websocket", "id": "0a6a1173fb884a5a82322e44a1fc0eea", "name": "zaqar-websocket"}, {"endpoints": [{"url": "http://192.0.3.1:8004/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "4a1d37b9994a45d4a6b041013673c2e9"}, {"url": "http://192.0.3.1:8004/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "8485f45bf105494a81c4d8ffcdbffc7d"}, {"url": "http://192.0.3.1:8004/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "fe9568bd34c94bba8d04dad0fda5435e"}], "type": "orchestration", "id": "115d8bc598754862b67fc9b7c3dcabc1", "name": "heat"}, {"endpoints": [{"url": "http://192.0.3.1:8080/v1/AUTH_13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "50904c3c2052433ca4e85e1f870a96ee"}, {"url": "http://192.0.3.1:8080/v1/AUTH_13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "826f9ad5da574268a3a9864df3423b8d"}, {"url": "http://192.0.3.1:8080", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "9bcb806ddd8f45c381a39fcb1612ef0a"}], "type": "object-store", "id": "158a9ec0b8e8442a91d539c94f7f3e0d", "name": "swift"}, {"endpoints": [{"url": "http://192.0.3.1:9696", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "8f27927fd8ea4ce29ff057a4f87484c6"}, {"url": "http://192.0.3.1:9696", "interface": "public", "region": "regionOne", "region_id": 
"regionOne", "id": "e2f7d421188c484c8560cfc98ba36498"}, {"url": "http://192.0.3.1:9696", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "ef58d0445d78427c991ddf1935bdecca"}], "type": "network", "id": "4413143a83434a35aacc03625951c5e6", "name": "neutron"}, {"endpoints": [{"url": "http://192.0.3.1:8989/v2", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "60120820741f409a86c4fc04675e87f5"}, {"url": "http://192.0.3.1:8989/v2", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "7f57a70539474749a8732e237cd3d047"}, {"url": "http://192.0.3.1:8989/v2", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "838632e4dad7499683622be1425ae9f9"}], "type": "workflowv2", "id": "4fd514dc06964316ac0a0ce00ec69ac3", "name": "mistral"}, {"endpoints": [{"url": "http://192.0.3.1:8000/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "29f6d67693b2422da3797af84fa584d0"}, {"url": "http://192.0.3.1:8000/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "9d974513a36f4a1cb4c1a909492870f2"}, {"url": "http://192.0.3.1:8000/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "fbb25e17c719472eb5d34cad0238d098"}], "type": "cloudformation", "id": "56cff4af5f114405a3c2f0fc77a22eb3", "name": "heat-cfn"}, {"endpoints": [{"url": "http://192.0.3.1:8888", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "5e779a349b1742aabeebb6722260c17d"}, {"url": "http://192.0.3.1:8888", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "87f59b4dfb0445bca44bf310b77be097"}, {"url": "http://192.0.3.1:8888", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "980bf5c9b80b4111b5ba19dcc5274866"}], "type": "messaging", "id": 
"6051d4397a684f3daf43f2ec39727c26", "name": "zaqar"}, {"endpoints": [{"url": "http://192.0.3.1:8774/v2.1", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "217c1916df124498a130051b0d2929b3"}, {"url": "http://192.0.3.1:8774/v2.1", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "6e0f74f28b824f979fb5f5cc30bd3c3f"}, {"url": "http://192.0.3.1:8774/v2.1", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "ef43d40f16b24c758abce9b806f3ab04"}], "type": "compute", "id": "6670f1f004934179b4e2d17ac8ac4559", "name": "nova"}, {"endpoints": [{"url": "http://192.0.3.1:9292", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "61c209b4b8f644d191bae26716309f26"}, {"url": "http://192.0.3.1:9292", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "9447a8abbe6b4a6b86bb0299666ba978"}, {"url": "http://192.0.3.1:9292", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "dd5cb9ddfe5e496a9ae10f8dc30e3596"}], "type": "image", "id": "8d4ca6bed6b14c2e9ef1634a7f86a1bf", "name": "glance"}, {"endpoints": [{"url": "http://192.0.3.1:6385", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "68862b76576e4797ae9b44e7e920a69d"}, {"url": "http://192.0.3.1:6385", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "9b6360b588564179a2ced0f5fd842e36"}, {"url": "http://192.0.3.1:6385", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "ba8e82ab1d98411f853796bbb04778d4"}], "type": "baremetal", "id": "9f9e76a976564a1e8f0941929009e0ab", "name": "ironic"}, {"endpoints": [{"url": "http://192.0.3.1:8778/placement", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "00bb90f687b4403c8d2d4e5015504ae4"}, {"url": "http://192.0.3.1:8778/placement", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": 
"227bf279774b40a8b6391b570de22a80"}, {"url": "http://192.0.3.1:8778/placement", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "ceaf819496d74a0496c09c9b7c9c0cd4"}], "type": "placement", "id": "ac1c0292ca3a42a1ad0ca09c9a2f2db5", "name": "placement"}, {"endpoints": [{"url": "http://192.0.3.1:5000", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "0716550d71d94a76bb684b55a29bda59"}, {"url": "http://192.0.3.1:35357", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "1d6b1d8c41204fe7a2099501c32b0288"}, {"url": "http://192.0.3.1:5000", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "e375868d7ee04e089d76ac8e49a498e3"}], "type": "identity", "id": "ce6de0f0b70b4955921edafe97432e27", "name": "keystone"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "admin", "id": "6e71dffd643e4c24a0efff2673fdac32"}, "audit_ids": ["KZPrAWDFSkaeW4Ppu7l8Dg"], "issued_at": "2018-06-26T05:44:42.000000Z"}} >2018-06-26 11:14:42,798 DEBUG: found extension EntryPoint.parse('gnocchi-basic = gnocchiclient.auth:GnocchiBasicLoader') >2018-06-26 11:14:42,798 DEBUG: found extension EntryPoint.parse('gnocchi-noauth = gnocchiclient.auth:GnocchiNoAuthLoader') >2018-06-26 11:14:42,798 DEBUG: found extension EntryPoint.parse('v2token = keystoneauth1.loading._plugins.identity.v2:Token') >2018-06-26 11:14:42,798 DEBUG: found extension EntryPoint.parse('none = keystoneauth1.loading._plugins.noauth:NoAuth') >2018-06-26 11:14:42,798 DEBUG: found extension EntryPoint.parse('v3oauth1 = keystoneauth1.extras.oauth1._loading:V3OAuth1') >2018-06-26 11:14:42,798 DEBUG: found extension EntryPoint.parse('admin_token = keystoneauth1.loading._plugins.admin_token:AdminToken') >2018-06-26 11:14:42,798 DEBUG: found extension EntryPoint.parse('v3oidcauthcode = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuthorizationCode') >2018-06-26 11:14:42,798 
DEBUG: found extension EntryPoint.parse('v2password = keystoneauth1.loading._plugins.identity.v2:Password') >2018-06-26 11:14:42,799 DEBUG: found extension EntryPoint.parse('v3samlpassword = keystoneauth1.extras._saml2._loading:Saml2Password') >2018-06-26 11:14:42,799 DEBUG: found extension EntryPoint.parse('v3password = keystoneauth1.loading._plugins.identity.v3:Password') >2018-06-26 11:14:42,799 DEBUG: found extension EntryPoint.parse('v3adfspassword = keystoneauth1.extras._saml2._loading:ADFSPassword') >2018-06-26 11:14:42,799 DEBUG: found extension EntryPoint.parse('v3oidcaccesstoken = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAccessToken') >2018-06-26 11:14:42,799 DEBUG: found extension EntryPoint.parse('v3oidcpassword = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword') >2018-06-26 11:14:42,799 DEBUG: found extension EntryPoint.parse('v3kerberos = keystoneauth1.extras.kerberos._loading:Kerberos') >2018-06-26 11:14:42,799 DEBUG: found extension EntryPoint.parse('token = keystoneauth1.loading._plugins.identity.generic:Token') >2018-06-26 11:14:42,799 DEBUG: found extension EntryPoint.parse('v3oidcclientcredentials = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectClientCredentials') >2018-06-26 11:14:42,799 DEBUG: found extension EntryPoint.parse('v3tokenlessauth = keystoneauth1.loading._plugins.identity.v3:TokenlessAuth') >2018-06-26 11:14:42,799 DEBUG: found extension EntryPoint.parse('v3token = keystoneauth1.loading._plugins.identity.v3:Token') >2018-06-26 11:14:42,799 DEBUG: found extension EntryPoint.parse('v3totp = keystoneauth1.loading._plugins.identity.v3:TOTP') >2018-06-26 11:14:42,799 DEBUG: found extension EntryPoint.parse('v3applicationcredential = keystoneauth1.loading._plugins.identity.v3:ApplicationCredential') >2018-06-26 11:14:42,799 DEBUG: found extension EntryPoint.parse('password = keystoneauth1.loading._plugins.identity.generic:Password') >2018-06-26 11:14:42,800 DEBUG: found extension 
EntryPoint.parse('v3fedkerb = keystoneauth1.extras.kerberos._loading:MappedKerberos') >2018-06-26 11:14:42,800 DEBUG: found extension EntryPoint.parse('token_endpoint = openstackclient.api.auth_plugin:TokenEndpoint') >2018-06-26 11:14:42,800 DEBUG: found extension EntryPoint.parse('aodh-noauth = aodhclient.noauth:AodhNoAuthLoader') >2018-06-26 11:14:42,800 DEBUG: found extension EntryPoint.parse('noauth = cinderclient.contrib.noauth:CinderNoAuthLoader') >2018-06-26 11:14:42,800 DEBUG: found extension EntryPoint.parse('v1password = swiftclient.authv1:PasswordLoader') >2018-06-26 11:14:42,807 DEBUG: Manager envvars:unknown running task network.GET.networks >2018-06-26 11:14:42,807 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:5000/ -H "Accept: application/json" -H "User-Agent: os-client-config/1.29.0 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" >2018-06-26 11:14:42,808 DEBUG: Starting new HTTP connection (1): 192.0.3.1 >2018-06-26 11:14:42,811 DEBUG: http://192.0.3.1:5000 "GET / HTTP/1.1" 300 593 >2018-06-26 11:14:42,811 DEBUG: RESP: [300] Date: Tue, 26 Jun 2018 05:44:42 GMT Server: Apache Vary: X-Auth-Token Content-Length: 593 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.0.3.1:5000/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.0.3.1:5000/v2.0/", "rel": "self"}, {"href": "https://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}} > >2018-06-26 11:14:42,812 DEBUG: Making authentication request to http://192.0.3.1:5000/v3/auth/tokens >2018-06-26 11:14:43,243 DEBUG: 
http://192.0.3.1:5000 "POST /v3/auth/tokens HTTP/1.1" 201 7993 >2018-06-26 11:14:43,244 DEBUG: {"token": {"is_domain": false, "methods": ["password"], "roles": [{"id": "a19af673dce44d89bec07da60746e8e4", "name": "admin"}], "expires_at": "2018-06-26T09:44:43.000000Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "13835fbb8e0947a9b3fa174b9a22cdb9", "name": "admin"}, "catalog": [{"endpoints": [{"url": "http://192.0.3.1:5050", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "ab5c482d7d7a4a2dbe585fd722a6ca73"}, {"url": "http://192.0.3.1:5050", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "bb4e26d4adcd460eb44821e899be9ebb"}, {"url": "http://192.0.3.1:5050", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "dcf6a9debd8f4934aa384251e7613cb5"}], "type": "baremetal-introspection", "id": "084902dec7484ca0b731c2f39c33ab52", "name": "ironic-inspector"}, {"endpoints": [{"url": "ws://192.0.3.1:9000", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "418298d93a3544ddb99bd2015af10e45"}, {"url": "ws://192.0.3.1:9000", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "4413828ebe134d8bbad9babe9f81e7c5"}, {"url": "ws://192.0.3.1:9000", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "81fac1a734154da88c398e772f6e7cb3"}], "type": "messaging-websocket", "id": "0a6a1173fb884a5a82322e44a1fc0eea", "name": "zaqar-websocket"}, {"endpoints": [{"url": "http://192.0.3.1:8004/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "4a1d37b9994a45d4a6b041013673c2e9"}, {"url": "http://192.0.3.1:8004/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "8485f45bf105494a81c4d8ffcdbffc7d"}, {"url": "http://192.0.3.1:8004/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "internal", 
"region": "regionOne", "region_id": "regionOne", "id": "fe9568bd34c94bba8d04dad0fda5435e"}], "type": "orchestration", "id": "115d8bc598754862b67fc9b7c3dcabc1", "name": "heat"}, {"endpoints": [{"url": "http://192.0.3.1:8080/v1/AUTH_13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "50904c3c2052433ca4e85e1f870a96ee"}, {"url": "http://192.0.3.1:8080/v1/AUTH_13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "826f9ad5da574268a3a9864df3423b8d"}, {"url": "http://192.0.3.1:8080", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "9bcb806ddd8f45c381a39fcb1612ef0a"}], "type": "object-store", "id": "158a9ec0b8e8442a91d539c94f7f3e0d", "name": "swift"}, {"endpoints": [{"url": "http://192.0.3.1:9696", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "8f27927fd8ea4ce29ff057a4f87484c6"}, {"url": "http://192.0.3.1:9696", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "e2f7d421188c484c8560cfc98ba36498"}, {"url": "http://192.0.3.1:9696", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "ef58d0445d78427c991ddf1935bdecca"}], "type": "network", "id": "4413143a83434a35aacc03625951c5e6", "name": "neutron"}, {"endpoints": [{"url": "http://192.0.3.1:8989/v2", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "60120820741f409a86c4fc04675e87f5"}, {"url": "http://192.0.3.1:8989/v2", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "7f57a70539474749a8732e237cd3d047"}, {"url": "http://192.0.3.1:8989/v2", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "838632e4dad7499683622be1425ae9f9"}], "type": "workflowv2", "id": "4fd514dc06964316ac0a0ce00ec69ac3", "name": "mistral"}, {"endpoints": [{"url": "http://192.0.3.1:8000/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": 
"public", "region": "regionOne", "region_id": "regionOne", "id": "29f6d67693b2422da3797af84fa584d0"}, {"url": "http://192.0.3.1:8000/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "9d974513a36f4a1cb4c1a909492870f2"}, {"url": "http://192.0.3.1:8000/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "fbb25e17c719472eb5d34cad0238d098"}], "type": "cloudformation", "id": "56cff4af5f114405a3c2f0fc77a22eb3", "name": "heat-cfn"}, {"endpoints": [{"url": "http://192.0.3.1:8888", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "5e779a349b1742aabeebb6722260c17d"}, {"url": "http://192.0.3.1:8888", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "87f59b4dfb0445bca44bf310b77be097"}, {"url": "http://192.0.3.1:8888", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "980bf5c9b80b4111b5ba19dcc5274866"}], "type": "messaging", "id": "6051d4397a684f3daf43f2ec39727c26", "name": "zaqar"}, {"endpoints": [{"url": "http://192.0.3.1:8774/v2.1", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "217c1916df124498a130051b0d2929b3"}, {"url": "http://192.0.3.1:8774/v2.1", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "6e0f74f28b824f979fb5f5cc30bd3c3f"}, {"url": "http://192.0.3.1:8774/v2.1", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "ef43d40f16b24c758abce9b806f3ab04"}], "type": "compute", "id": "6670f1f004934179b4e2d17ac8ac4559", "name": "nova"}, {"endpoints": [{"url": "http://192.0.3.1:9292", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "61c209b4b8f644d191bae26716309f26"}, {"url": "http://192.0.3.1:9292", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "9447a8abbe6b4a6b86bb0299666ba978"}, {"url": 
"http://192.0.3.1:9292", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "dd5cb9ddfe5e496a9ae10f8dc30e3596"}], "type": "image", "id": "8d4ca6bed6b14c2e9ef1634a7f86a1bf", "name": "glance"}, {"endpoints": [{"url": "http://192.0.3.1:6385", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "68862b76576e4797ae9b44e7e920a69d"}, {"url": "http://192.0.3.1:6385", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "9b6360b588564179a2ced0f5fd842e36"}, {"url": "http://192.0.3.1:6385", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "ba8e82ab1d98411f853796bbb04778d4"}], "type": "baremetal", "id": "9f9e76a976564a1e8f0941929009e0ab", "name": "ironic"}, {"endpoints": [{"url": "http://192.0.3.1:8778/placement", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "00bb90f687b4403c8d2d4e5015504ae4"}, {"url": "http://192.0.3.1:8778/placement", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "227bf279774b40a8b6391b570de22a80"}, {"url": "http://192.0.3.1:8778/placement", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "ceaf819496d74a0496c09c9b7c9c0cd4"}], "type": "placement", "id": "ac1c0292ca3a42a1ad0ca09c9a2f2db5", "name": "placement"}, {"endpoints": [{"url": "http://192.0.3.1:5000", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "0716550d71d94a76bb684b55a29bda59"}, {"url": "http://192.0.3.1:35357", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "1d6b1d8c41204fe7a2099501c32b0288"}, {"url": "http://192.0.3.1:5000", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "e375868d7ee04e089d76ac8e49a498e3"}], "type": "identity", "id": "ce6de0f0b70b4955921edafe97432e27", "name": "keystone"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "admin", "id": 
"6e71dffd643e4c24a0efff2673fdac32"}, "audit_ids": ["Q5W4o9eUSiaH7eYWPJmYyQ"], "issued_at": "2018-06-26T05:44:43.000000Z"}} >2018-06-26 11:14:43,245 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:9696 -H "Accept: application/json" -H "User-Agent: os-client-config/1.29.0 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" >2018-06-26 11:14:43,246 DEBUG: Starting new HTTP connection (1): 192.0.3.1 >2018-06-26 11:14:43,250 DEBUG: http://192.0.3.1:9696 "GET / HTTP/1.1" 200 118 >2018-06-26 11:14:43,251 DEBUG: RESP: [200] Content-Length: 118 Content-Type: application/json Date: Tue, 26 Jun 2018 05:44:43 GMT Connection: keep-alive >RESP BODY: {"versions": [{"status": "CURRENT", "id": "v2.0", "links": [{"href": "http://192.0.3.1:9696/v2.0/", "rel": "self"}]}]} > >2018-06-26 11:14:43,251 DEBUG: REQ: curl -g -i -X GET "http://192.0.3.1:9696/v2.0/networks?name=ctlplane" -H "User-Agent: os-client-config/1.29.0 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}c00e891b222678a3d7c6712c0d6b876d865cf550" >2018-06-26 11:14:44,304 DEBUG: http://192.0.3.1:9696 "GET /v2.0/networks?name=ctlplane HTTP/1.1" 200 695 >2018-06-26 11:14:44,304 DEBUG: RESP: [200] Content-Type: application/json Content-Length: 695 X-Openstack-Request-Id: req-baa76651-ce36-46d2-8c06-923f3f0c729b Date: Tue, 26 Jun 2018 05:44:44 GMT Connection: keep-alive >RESP BODY: 
{"networks":[{"provider:physical_network":"ctlplane","ipv6_address_scope":null,"revision_number":7,"port_security_enabled":true,"mtu":1500,"id":"48742777-a2f8-4d43-915d-297b118c7e21","router:external":false,"availability_zone_hints":[],"availability_zones":["nova"],"ipv4_address_scope":null,"shared":false,"project_id":"13835fbb8e0947a9b3fa174b9a22cdb9","l2_adjacency":true,"status":"ACTIVE","subnets":["332dbcc3-3d16-4e17-bcf5-1aed566bcee7"],"description":"","tags":[],"updated_at":"2018-06-26T04:25:55Z","provider:segmentation_id":null,"name":"ctlplane","admin_state_up":true,"tenant_id":"13835fbb8e0947a9b3fa174b9a22cdb9","created_at":"2018-06-26T04:25:54Z","provider:network_type":"flat"}]}
>
>2018-06-26 11:14:44,305 DEBUG: GET call to network for http://192.0.3.1:9696/v2.0/networks?name=ctlplane used request id req-baa76651-ce36-46d2-8c06-923f3f0c729b
>2018-06-26 11:14:44,305 DEBUG: Manager envvars:unknown ran task network.GET.networks in 1.49785709381s
>2018-06-26 11:14:44,306 INFO: Not creating ctlplane network, because it already exists.
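The "Not creating ctlplane network" INFO line above reflects the idempotent pattern in this phase of the install: the undercloud setup first lists Neutron networks filtered by name (the GET /v2.0/networks?name=ctlplane request) and only creates the network when that lookup comes back empty. A minimal sketch of that decision, using a trimmed copy of the response body logged above (the `ensure_ctlplane` helper is illustrative only, not TripleO code, and this does not make real HTTP calls):

```python
import json

# Neutron list response as logged above, trimmed to the fields the check needs.
RESP_BODY = '''{"networks": [{"id": "48742777-a2f8-4d43-915d-297b118c7e21",
                              "name": "ctlplane", "status": "ACTIVE",
                              "subnets": ["332dbcc3-3d16-4e17-bcf5-1aed566bcee7"]}]}'''

def ensure_ctlplane(list_response_json):
    """Return (created, network): reuse an existing 'ctlplane' network,
    or signal that one would have to be created via POST /v2.0/networks."""
    networks = json.loads(list_response_json)["networks"]
    for net in networks:
        if net["name"] == "ctlplane":
            return False, net   # already exists -> log and skip creation
    return True, None           # empty list -> creation path

created, net = ensure_ctlplane(RESP_BODY)
print(created, net["id"])  # -> False 48742777-a2f8-4d43-915d-297b118c7e21
```

With the logged payload the lookup succeeds, so `created` is False and the existing network ID is reused, matching the message in the log; an empty `"networks"` list would flip the helper onto the creation path.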
>2018-06-26 11:14:44,306 DEBUG: Manager envvars:unknown running task network.GET.subnets >2018-06-26 11:14:44,308 DEBUG: REQ: curl -g -i -X GET "http://192.0.3.1:9696/v2.0/subnets?network_id=48742777-a2f8-4d43-915d-297b118c7e21&cidr=192.0.3.0%2F24" -H "User-Agent: os-client-config/1.29.0 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}c00e891b222678a3d7c6712c0d6b876d865cf550" >2018-06-26 11:14:44,352 DEBUG: http://192.0.3.1:9696 "GET /v2.0/subnets?network_id=48742777-a2f8-4d43-915d-297b118c7e21&cidr=192.0.3.0%2F24 HTTP/1.1" 200 692 >2018-06-26 11:14:44,352 DEBUG: RESP: [200] Content-Type: application/json Content-Length: 692 X-Openstack-Request-Id: req-6c83134f-b280-4773-8f61-c5bd371e104d Date: Tue, 26 Jun 2018 05:44:44 GMT Connection: keep-alive >RESP BODY: {"subnets":[{"updated_at":"2018-06-26T04:25:55Z","ipv6_ra_mode":null,"allocation_pools":[{"start":"192.0.3.5","end":"192.0.3.24"}],"host_routes":[{"nexthop":"192.0.3.1","destination":"169.254.169.254/32"}],"revision_number":0,"ipv6_address_mode":null,"id":"332dbcc3-3d16-4e17-bcf5-1aed566bcee7","dns_nameservers":[],"gateway_ip":"192.0.3.1","project_id":"13835fbb8e0947a9b3fa174b9a22cdb9","description":"","tags":[],"cidr":"192.0.3.0/24","subnetpool_id":null,"service_types":[],"name":"ctlplane-subnet","enable_dhcp":true,"segment_id":null,"network_id":"48742777-a2f8-4d43-915d-297b118c7e21","tenant_id":"13835fbb8e0947a9b3fa174b9a22cdb9","created_at":"2018-06-26T04:25:55Z","ip_version":4}]} > >2018-06-26 11:14:44,353 DEBUG: GET call to network for http://192.0.3.1:9696/v2.0/subnets?network_id=48742777-a2f8-4d43-915d-297b118c7e21&cidr=192.0.3.0%2F24 used request id req-6c83134f-b280-4773-8f61-c5bd371e104d >2018-06-26 11:14:44,353 DEBUG: Manager envvars:unknown ran task network.GET.subnets in 0.046159029007s >2018-06-26 11:14:44,353 WARNING: Local subnet ctlplane-subnet already exists and is not associated with a network segment. 
Any additional subnets will be ignored. >2018-06-26 11:14:44,354 DEBUG: Manager envvars:unknown running task network.PUT.subnets >2018-06-26 11:14:44,355 DEBUG: REQ: curl -g -i -X PUT http://192.0.3.1:9696/v2.0/subnets/332dbcc3-3d16-4e17-bcf5-1aed566bcee7 -H "User-Agent: os-client-config/1.29.0 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "Content-Type: application/json" -H "X-Auth-Token: {SHA1}c00e891b222678a3d7c6712c0d6b876d865cf550" -d '{"subnet": {"gateway_ip": "192.0.3.1", "allocation_pools": [{"start": "192.0.3.5", "end": "192.0.3.24"}], "host_routes": [{"nexthop": "192.0.3.1", "destination": "169.254.169.254/32"}], "name": "ctlplane-subnet"}}' >2018-06-26 11:14:44,729 DEBUG: http://192.0.3.1:9696 "PUT /v2.0/subnets/332dbcc3-3d16-4e17-bcf5-1aed566bcee7 HTTP/1.1" 200 689 >2018-06-26 11:14:44,730 DEBUG: RESP: [200] Content-Type: application/json Content-Length: 689 X-Openstack-Request-Id: req-74530f90-b93a-4261-82a3-9c0c9cc6b893 Date: Tue, 26 Jun 2018 05:44:44 GMT Connection: keep-alive >RESP BODY: {"subnet":{"updated_at":"2018-06-26T05:44:44Z","ipv6_ra_mode":null,"allocation_pools":[{"start":"192.0.3.5","end":"192.0.3.24"}],"host_routes":[{"destination":"169.254.169.254/32","nexthop":"192.0.3.1"}],"revision_number":1,"ipv6_address_mode":null,"id":"332dbcc3-3d16-4e17-bcf5-1aed566bcee7","dns_nameservers":[],"gateway_ip":"192.0.3.1","project_id":"13835fbb8e0947a9b3fa174b9a22cdb9","description":"","tags":[],"cidr":"192.0.3.0/24","subnetpool_id":null,"service_types":[],"name":"ctlplane-subnet","enable_dhcp":true,"segment_id":null,"network_id":"48742777-a2f8-4d43-915d-297b118c7e21","tenant_id":"13835fbb8e0947a9b3fa174b9a22cdb9","created_at":"2018-06-26T04:25:55Z","ip_version":4}} > >2018-06-26 11:14:44,730 DEBUG: PUT call to network for http://192.0.3.1:9696/v2.0/subnets/332dbcc3-3d16-4e17-bcf5-1aed566bcee7 used request id req-74530f90-b93a-4261-82a3-9c0c9cc6b893 >2018-06-26 11:14:44,730 DEBUG: Manager envvars:unknown ran task network.PUT.subnets in 
0.375768899918s >2018-06-26 11:14:44,731 INFO: Subnet updated openstack.network.v2.subnet.Subnet(service_types=[], description=, enable_dhcp=True, tags=[], network_id=48742777-a2f8-4d43-915d-297b118c7e21, tenant_id=13835fbb8e0947a9b3fa174b9a22cdb9, created_at=2018-06-26T04:25:55Z, segment_id=None, dns_nameservers=[], updated_at=2018-06-26T05:44:44Z, gateway_ip=192.0.3.1, ipv6_ra_mode=None, allocation_pools=[{u'start': u'192.0.3.5', u'end': u'192.0.3.24'}], host_routes=[{u'nexthop': u'192.0.3.1', u'destination': u'169.254.169.254/32'}], revision_number=1, ip_version=4, ipv6_address_mode=None, cidr=192.0.3.0/24, id=332dbcc3-3d16-4e17-bcf5-1aed566bcee7, subnetpool_id=None, name=ctlplane-subnet) >2018-06-26 11:14:44,733 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8774/v2.1/os-keypairs/default -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:14:44,734 DEBUG: Starting new HTTP connection (1): 192.0.3.1 >2018-06-26 11:14:50,145 DEBUG: http://192.0.3.1:8774 "GET /v2.1/os-keypairs/default HTTP/1.1" 200 539 >2018-06-26 11:14:50,146 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 05:44:44 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-2d88384c-a67c-4d93-8b54-9fe04ecb0c57 x-compute-request-id: req-2d88384c-a67c-4d93-8b54-9fe04ecb0c57 Content-Encoding: gzip Content-Length: 539 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"keypair": {"public_key": "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQDAnRvrF8qTXSZberCM0HevnZssGDRXpXNMBGnB+94RdZaQWMLBWRPbCacBPwKg+gBhN+B4PfWXFI8+wtJj0ED0/nD3coxMtUUvO8aM0it7Wiof3vG09P+J6wkFeah9I/RxWqa2tHVM20aiIyv4J9i+F0xQNtaJcEOG2AaEoZzOul1zFlkOf7QskMf4RcqxJStOorTCX29zEB79NwL2cO8rMLefQkNlCVF9k2lmtgDFPBkIN6eqwVl+BcgjxRYyjZEOrZyI7ZpMmay09x9XGEzUj9JC+Bf1DZltmoPz/8lQp3QvGCSI23PnpQC8tTDCAnvV358mkCZX+l8vftPU/hSH sudheer@facebook.local.com", "user_id": "6e71dffd643e4c24a0efff2673fdac32", "name": "default", "deleted": false, "created_at": "2018-06-26T04:26:07.000000", "updated_at": null, "fingerprint": "c6:1c:5d:f7:80:25:f9:b2:e8:66:6d:da:8e:95:fc:a7", "deleted_at": null, "id": 1}} > >2018-06-26 11:14:50,146 DEBUG: GET call to compute for http://192.0.3.1:8774/v2.1/os-keypairs/default used request id req-2d88384c-a67c-4d93-8b54-9fe04ecb0c57 >2018-06-26 11:14:50,165 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8774/v2.1/flavors/detail -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:14:55,762 DEBUG: http://192.0.3.1:8774 "GET /v2.1/flavors/detail HTTP/1.1" 200 504 >2018-06-26 11:14:55,764 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 05:44:50 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-b5d57cf7-d8fa-409c-90ab-8e0ea2bc37be x-compute-request-id: req-b5d57cf7-d8fa-409c-90ab-8e0ea2bc37be Content-Encoding: gzip Content-Length: 504 Keep-Alive: timeout=15, max=99 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"flavors": [{"name": "ceph-storage", "links": [{"href": "http://192.0.3.1:8774/v2.1/flavors/0cfb511e-5a16-435a-9a69-6982cebe033a", "rel": "self"}, {"href": "http://192.0.3.1:8774/flavors/0cfb511e-5a16-435a-9a69-6982cebe033a", "rel": "bookmark"}], "ram": 4096, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, 
"rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 40, "id": "0cfb511e-5a16-435a-9a69-6982cebe033a"}, {"name": "control", "links": [{"href": "http://192.0.3.1:8774/v2.1/flavors/75e8eb94-aee7-482a-80b3-d97ac8e2fb47", "rel": "self"}, {"href": "http://192.0.3.1:8774/flavors/75e8eb94-aee7-482a-80b3-d97ac8e2fb47", "rel": "bookmark"}], "ram": 4096, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 40, "id": "75e8eb94-aee7-482a-80b3-d97ac8e2fb47"}, {"name": "swift-storage", "links": [{"href": "http://192.0.3.1:8774/v2.1/flavors/9149f7f2-27d8-46ba-b434-3115be9b3078", "rel": "self"}, {"href": "http://192.0.3.1:8774/flavors/9149f7f2-27d8-46ba-b434-3115be9b3078", "rel": "bookmark"}], "ram": 4096, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 40, "id": "9149f7f2-27d8-46ba-b434-3115be9b3078"}, {"name": "block-storage", "links": [{"href": "http://192.0.3.1:8774/v2.1/flavors/bbfe7233-396d-4aa2-b008-5b64ea0e7329", "rel": "self"}, {"href": "http://192.0.3.1:8774/flavors/bbfe7233-396d-4aa2-b008-5b64ea0e7329", "rel": "bookmark"}], "ram": 4096, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 40, "id": "bbfe7233-396d-4aa2-b008-5b64ea0e7329"}, {"name": "baremetal", "links": [{"href": "http://192.0.3.1:8774/v2.1/flavors/eb2b4c19-4dc3-4219-a407-921d5349dee3", "rel": "self"}, {"href": "http://192.0.3.1:8774/flavors/eb2b4c19-4dc3-4219-a407-921d5349dee3", "rel": "bookmark"}], "ram": 4096, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 40, "id": "eb2b4c19-4dc3-4219-a407-921d5349dee3"}, {"name": "compute", "links": [{"href": 
"http://192.0.3.1:8774/v2.1/flavors/eca892fc-d33a-408d-9611-e9fee658ce88", "rel": "self"}, {"href": "http://192.0.3.1:8774/flavors/eca892fc-d33a-408d-9611-e9fee658ce88", "rel": "bookmark"}], "ram": 4096, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 40, "id": "eca892fc-d33a-408d-9611-e9fee658ce88"}]} > >2018-06-26 11:14:55,764 DEBUG: GET call to compute for http://192.0.3.1:8774/v2.1/flavors/detail used request id req-b5d57cf7-d8fa-409c-90ab-8e0ea2bc37be >2018-06-26 11:14:55,765 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:6385/v1/nodes/?fields=uuid,resource_class -H "X-OpenStack-Ironic-API-Version: 1.21" -H "User-Agent: python-ironicclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:14:55,766 DEBUG: Starting new HTTP connection (1): 192.0.3.1 >2018-06-26 11:14:55,883 DEBUG: http://192.0.3.1:6385 "GET /v1/nodes/?fields=uuid,resource_class HTTP/1.1" 200 13 >2018-06-26 11:14:55,884 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 05:44:55 GMT Server: Apache X-OpenStack-Ironic-API-Minimum-Version: 1.1 X-OpenStack-Ironic-API-Maximum-Version: 1.38 X-OpenStack-Ironic-API-Version: 1.21 Openstack-Request-Id: req-6dc7e721-88f8-43ce-836f-b490dcfd18ef Content-Length: 13 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"nodes": []} > >2018-06-26 11:14:55,885 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8774/v2.1/flavors/detail -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:01,454 DEBUG: http://192.0.3.1:8774 "GET /v2.1/flavors/detail HTTP/1.1" 200 504 >2018-06-26 11:15:01,456 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 05:44:55 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: 
OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-75d265b5-e0a8-4047-89f5-64d443d275f6 x-compute-request-id: req-75d265b5-e0a8-4047-89f5-64d443d275f6 Content-Encoding: gzip Content-Length: 504 Keep-Alive: timeout=15, max=98 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"flavors": [{"name": "ceph-storage", "links": [{"href": "http://192.0.3.1:8774/v2.1/flavors/0cfb511e-5a16-435a-9a69-6982cebe033a", "rel": "self"}, {"href": "http://192.0.3.1:8774/flavors/0cfb511e-5a16-435a-9a69-6982cebe033a", "rel": "bookmark"}], "ram": 4096, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 40, "id": "0cfb511e-5a16-435a-9a69-6982cebe033a"}, {"name": "control", "links": [{"href": "http://192.0.3.1:8774/v2.1/flavors/75e8eb94-aee7-482a-80b3-d97ac8e2fb47", "rel": "self"}, {"href": "http://192.0.3.1:8774/flavors/75e8eb94-aee7-482a-80b3-d97ac8e2fb47", "rel": "bookmark"}], "ram": 4096, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 40, "id": "75e8eb94-aee7-482a-80b3-d97ac8e2fb47"}, {"name": "swift-storage", "links": [{"href": "http://192.0.3.1:8774/v2.1/flavors/9149f7f2-27d8-46ba-b434-3115be9b3078", "rel": "self"}, {"href": "http://192.0.3.1:8774/flavors/9149f7f2-27d8-46ba-b434-3115be9b3078", "rel": "bookmark"}], "ram": 4096, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 40, "id": "9149f7f2-27d8-46ba-b434-3115be9b3078"}, {"name": "block-storage", "links": [{"href": "http://192.0.3.1:8774/v2.1/flavors/bbfe7233-396d-4aa2-b008-5b64ea0e7329", "rel": "self"}, {"href": "http://192.0.3.1:8774/flavors/bbfe7233-396d-4aa2-b008-5b64ea0e7329", "rel": "bookmark"}], "ram": 4096, "OS-FLV-DISABLED:disabled": false, "vcpus": 
1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 40, "id": "bbfe7233-396d-4aa2-b008-5b64ea0e7329"}, {"name": "baremetal", "links": [{"href": "http://192.0.3.1:8774/v2.1/flavors/eb2b4c19-4dc3-4219-a407-921d5349dee3", "rel": "self"}, {"href": "http://192.0.3.1:8774/flavors/eb2b4c19-4dc3-4219-a407-921d5349dee3", "rel": "bookmark"}], "ram": 4096, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 40, "id": "eb2b4c19-4dc3-4219-a407-921d5349dee3"}, {"name": "compute", "links": [{"href": "http://192.0.3.1:8774/v2.1/flavors/eca892fc-d33a-408d-9611-e9fee658ce88", "rel": "self"}, {"href": "http://192.0.3.1:8774/flavors/eca892fc-d33a-408d-9611-e9fee658ce88", "rel": "bookmark"}], "ram": 4096, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 40, "id": "eca892fc-d33a-408d-9611-e9fee658ce88"}]} > >2018-06-26 11:15:01,456 DEBUG: GET call to compute for http://192.0.3.1:8774/v2.1/flavors/detail used request id req-75d265b5-e0a8-4047-89f5-64d443d275f6 >2018-06-26 11:15:01,456 INFO: Not creating flavor "baremetal" because it already exists. 
>2018-06-26 11:15:01,458 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8774/v2.1/flavors/eb2b4c19-4dc3-4219-a407-921d5349dee3/os-extra_specs -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:01,634 DEBUG: http://192.0.3.1:8774 "GET /v2.1/flavors/eb2b4c19-4dc3-4219-a407-921d5349dee3/os-extra_specs HTTP/1.1" 200 134 >2018-06-26 11:15:01,635 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 05:45:01 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-05f55ec8-70ce-4a5f-a590-d9fc2e7211b4 x-compute-request-id: req-05f55ec8-70ce-4a5f-a590-d9fc2e7211b4 Content-Encoding: gzip Content-Length: 134 Keep-Alive: timeout=15, max=97 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"extra_specs": {"capabilities:boot_option": "local", "resources:CUSTOM_BAREMETAL": "1", "resources:MEMORY_MB": "0", "resources:VCPU": "0", "resources:DISK_GB": "0"}} > >2018-06-26 11:15:01,635 DEBUG: GET call to compute for http://192.0.3.1:8774/v2.1/flavors/eb2b4c19-4dc3-4219-a407-921d5349dee3/os-extra_specs used request id req-05f55ec8-70ce-4a5f-a590-d9fc2e7211b4 >2018-06-26 11:15:01,636 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8774/v2.1/flavors/eb2b4c19-4dc3-4219-a407-921d5349dee3/os-extra_specs -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '{"extra_specs": {"capabilities:boot_option": "local", "resources:MEMORY_MB": "0", "resources:VCPU": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:DISK_GB": "0"}}' >2018-06-26 11:15:01,778 DEBUG: http://192.0.3.1:8774 "POST /v2.1/flavors/eb2b4c19-4dc3-4219-a407-921d5349dee3/os-extra_specs HTTP/1.1" 200 134 >2018-06-26 11:15:01,779 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 05:45:01 GMT 
Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-92ebe17f-2aae-4b96-b3ed-42c2c1d8435f x-compute-request-id: req-92ebe17f-2aae-4b96-b3ed-42c2c1d8435f Content-Encoding: gzip Content-Length: 134 Keep-Alive: timeout=15, max=96 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"extra_specs": {"capabilities:boot_option": "local", "resources:VCPU": "0", "resources:MEMORY_MB": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:DISK_GB": "0"}} > >2018-06-26 11:15:01,779 DEBUG: POST call to compute for http://192.0.3.1:8774/v2.1/flavors/eb2b4c19-4dc3-4219-a407-921d5349dee3/os-extra_specs used request id req-92ebe17f-2aae-4b96-b3ed-42c2c1d8435f >2018-06-26 11:15:01,779 INFO: Flavor baremetal updated to use custom resource class baremetal >2018-06-26 11:15:01,779 INFO: Not creating flavor "control" because it already exists. >2018-06-26 11:15:01,782 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8774/v2.1/flavors/75e8eb94-aee7-482a-80b3-d97ac8e2fb47/os-extra_specs -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:01,801 DEBUG: http://192.0.3.1:8774 "GET /v2.1/flavors/75e8eb94-aee7-482a-80b3-d97ac8e2fb47/os-extra_specs HTTP/1.1" 200 149 >2018-06-26 11:15:01,802 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 05:45:01 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-2d50ea12-1ccb-4efb-9cb1-f95464599e27 x-compute-request-id: req-2d50ea12-1ccb-4efb-9cb1-f95464599e27 Content-Encoding: gzip Content-Length: 149 Keep-Alive: timeout=15, max=95 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"extra_specs": {"resources:CUSTOM_BAREMETAL": "1", "resources:MEMORY_MB": "0", "resources:DISK_GB": 
"0", "capabilities:boot_option": "local", "resources:VCPU": "0", "capabilities:profile": "control"}} > >2018-06-26 11:15:01,803 DEBUG: GET call to compute for http://192.0.3.1:8774/v2.1/flavors/75e8eb94-aee7-482a-80b3-d97ac8e2fb47/os-extra_specs used request id req-2d50ea12-1ccb-4efb-9cb1-f95464599e27 >2018-06-26 11:15:01,805 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8774/v2.1/flavors/75e8eb94-aee7-482a-80b3-d97ac8e2fb47/os-extra_specs -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '{"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "control", "resources:MEMORY_MB": "0", "resources:VCPU": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:DISK_GB": "0"}}' >2018-06-26 11:15:01,993 DEBUG: http://192.0.3.1:8774 "POST /v2.1/flavors/75e8eb94-aee7-482a-80b3-d97ac8e2fb47/os-extra_specs HTTP/1.1" 200 149 >2018-06-26 11:15:01,994 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 05:45:01 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-0ecf3679-e98f-4fb8-8bf8-1f1a1d07ac14 x-compute-request-id: req-0ecf3679-e98f-4fb8-8bf8-1f1a1d07ac14 Content-Encoding: gzip Content-Length: 149 Keep-Alive: timeout=15, max=94 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "control", "resources:MEMORY_MB": "0", "resources:DISK_GB": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:VCPU": "0"}} > >2018-06-26 11:15:01,994 DEBUG: POST call to compute for http://192.0.3.1:8774/v2.1/flavors/75e8eb94-aee7-482a-80b3-d97ac8e2fb47/os-extra_specs used request id req-0ecf3679-e98f-4fb8-8bf8-1f1a1d07ac14 >2018-06-26 11:15:01,994 INFO: Flavor control updated to use custom resource class baremetal >2018-06-26 11:15:01,995 INFO: 
Not creating flavor "compute" because it already exists. >2018-06-26 11:15:01,996 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8774/v2.1/flavors/eca892fc-d33a-408d-9611-e9fee658ce88/os-extra_specs -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:02,014 DEBUG: http://192.0.3.1:8774 "GET /v2.1/flavors/eca892fc-d33a-408d-9611-e9fee658ce88/os-extra_specs HTTP/1.1" 200 149 >2018-06-26 11:15:02,015 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 05:45:01 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-f9c125f4-56ae-47a1-9990-0a51d03d1b97 x-compute-request-id: req-f9c125f4-56ae-47a1-9990-0a51d03d1b97 Content-Encoding: gzip Content-Length: 149 Keep-Alive: timeout=15, max=93 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"extra_specs": {"resources:CUSTOM_BAREMETAL": "1", "resources:MEMORY_MB": "0", "resources:DISK_GB": "0", "capabilities:boot_option": "local", "resources:VCPU": "0", "capabilities:profile": "compute"}} > >2018-06-26 11:15:02,015 DEBUG: GET call to compute for http://192.0.3.1:8774/v2.1/flavors/eca892fc-d33a-408d-9611-e9fee658ce88/os-extra_specs used request id req-f9c125f4-56ae-47a1-9990-0a51d03d1b97 >2018-06-26 11:15:02,016 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8774/v2.1/flavors/eca892fc-d33a-408d-9611-e9fee658ce88/os-extra_specs -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '{"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "compute", "resources:MEMORY_MB": "0", "resources:VCPU": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:DISK_GB": "0"}}' >2018-06-26 11:15:02,216 DEBUG: http://192.0.3.1:8774 "POST 
/v2.1/flavors/eca892fc-d33a-408d-9611-e9fee658ce88/os-extra_specs HTTP/1.1" 200 150 >2018-06-26 11:15:02,217 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 05:45:02 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-10a93525-fffa-4332-afaa-b2868fa2354e x-compute-request-id: req-10a93525-fffa-4332-afaa-b2868fa2354e Content-Encoding: gzip Content-Length: 150 Keep-Alive: timeout=15, max=92 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "compute", "resources:MEMORY_MB": "0", "resources:DISK_GB": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:VCPU": "0"}} > >2018-06-26 11:15:02,217 DEBUG: POST call to compute for http://192.0.3.1:8774/v2.1/flavors/eca892fc-d33a-408d-9611-e9fee658ce88/os-extra_specs used request id req-10a93525-fffa-4332-afaa-b2868fa2354e >2018-06-26 11:15:02,217 INFO: Flavor compute updated to use custom resource class baremetal >2018-06-26 11:15:02,217 INFO: Not creating flavor "ceph-storage" because it already exists. 
>2018-06-26 11:15:02,219 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8774/v2.1/flavors/0cfb511e-5a16-435a-9a69-6982cebe033a/os-extra_specs -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:02,335 DEBUG: http://192.0.3.1:8774 "GET /v2.1/flavors/0cfb511e-5a16-435a-9a69-6982cebe033a/os-extra_specs HTTP/1.1" 200 153 >2018-06-26 11:15:02,336 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 05:45:02 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-f8c77f5c-3601-42ce-a695-32eed29b16f6 x-compute-request-id: req-f8c77f5c-3601-42ce-a695-32eed29b16f6 Content-Encoding: gzip Content-Length: 153 Keep-Alive: timeout=15, max=91 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"extra_specs": {"resources:CUSTOM_BAREMETAL": "1", "resources:MEMORY_MB": "0", "resources:DISK_GB": "0", "capabilities:boot_option": "local", "resources:VCPU": "0", "capabilities:profile": "ceph-storage"}} > >2018-06-26 11:15:02,336 DEBUG: GET call to compute for http://192.0.3.1:8774/v2.1/flavors/0cfb511e-5a16-435a-9a69-6982cebe033a/os-extra_specs used request id req-f8c77f5c-3601-42ce-a695-32eed29b16f6 >2018-06-26 11:15:02,338 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8774/v2.1/flavors/0cfb511e-5a16-435a-9a69-6982cebe033a/os-extra_specs -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '{"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "ceph-storage", "resources:MEMORY_MB": "0", "resources:VCPU": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:DISK_GB": "0"}}' >2018-06-26 11:15:02,532 DEBUG: http://192.0.3.1:8774 "POST /v2.1/flavors/0cfb511e-5a16-435a-9a69-6982cebe033a/os-extra_specs HTTP/1.1" 200 153 
>2018-06-26 11:15:02,533 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 05:45:02 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-c5c3d2ee-3020-41b2-8a9d-b2eb4fdf80ce x-compute-request-id: req-c5c3d2ee-3020-41b2-8a9d-b2eb4fdf80ce Content-Encoding: gzip Content-Length: 153 Keep-Alive: timeout=15, max=90 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "ceph-storage", "resources:MEMORY_MB": "0", "resources:DISK_GB": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:VCPU": "0"}} > >2018-06-26 11:15:02,533 DEBUG: POST call to compute for http://192.0.3.1:8774/v2.1/flavors/0cfb511e-5a16-435a-9a69-6982cebe033a/os-extra_specs used request id req-c5c3d2ee-3020-41b2-8a9d-b2eb4fdf80ce >2018-06-26 11:15:02,533 INFO: Flavor ceph-storage updated to use custom resource class baremetal >2018-06-26 11:15:02,533 INFO: Not creating flavor "block-storage" because it already exists. 
>2018-06-26 11:15:02,534 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8774/v2.1/flavors/bbfe7233-396d-4aa2-b008-5b64ea0e7329/os-extra_specs -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:08,154 DEBUG: http://192.0.3.1:8774 "GET /v2.1/flavors/bbfe7233-396d-4aa2-b008-5b64ea0e7329/os-extra_specs HTTP/1.1" 200 153 >2018-06-26 11:15:08,155 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 05:45:02 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-289d4c6d-29fd-41f1-9cc3-b646f55c8161 x-compute-request-id: req-289d4c6d-29fd-41f1-9cc3-b646f55c8161 Content-Encoding: gzip Content-Length: 153 Keep-Alive: timeout=15, max=89 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"extra_specs": {"resources:CUSTOM_BAREMETAL": "1", "resources:MEMORY_MB": "0", "resources:DISK_GB": "0", "capabilities:boot_option": "local", "resources:VCPU": "0", "capabilities:profile": "block-storage"}} > >2018-06-26 11:15:08,155 DEBUG: GET call to compute for http://192.0.3.1:8774/v2.1/flavors/bbfe7233-396d-4aa2-b008-5b64ea0e7329/os-extra_specs used request id req-289d4c6d-29fd-41f1-9cc3-b646f55c8161 >2018-06-26 11:15:08,157 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8774/v2.1/flavors/bbfe7233-396d-4aa2-b008-5b64ea0e7329/os-extra_specs -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '{"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "block-storage", "resources:MEMORY_MB": "0", "resources:VCPU": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:DISK_GB": "0"}}' >2018-06-26 11:15:08,197 DEBUG: http://192.0.3.1:8774 "POST /v2.1/flavors/bbfe7233-396d-4aa2-b008-5b64ea0e7329/os-extra_specs HTTP/1.1" 200 154 
>2018-06-26 11:15:08,198 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 05:45:08 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-e57d5780-64ff-4ba6-a33b-327f003b6944 x-compute-request-id: req-e57d5780-64ff-4ba6-a33b-327f003b6944 Content-Encoding: gzip Content-Length: 154 Keep-Alive: timeout=15, max=88 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "block-storage", "resources:MEMORY_MB": "0", "resources:DISK_GB": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:VCPU": "0"}} > >2018-06-26 11:15:08,198 DEBUG: POST call to compute for http://192.0.3.1:8774/v2.1/flavors/bbfe7233-396d-4aa2-b008-5b64ea0e7329/os-extra_specs used request id req-e57d5780-64ff-4ba6-a33b-327f003b6944 >2018-06-26 11:15:08,199 INFO: Flavor block-storage updated to use custom resource class baremetal >2018-06-26 11:15:08,199 INFO: Not creating flavor "swift-storage" because it already exists. 
>2018-06-26 11:15:08,200 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8774/v2.1/flavors/9149f7f2-27d8-46ba-b434-3115be9b3078/os-extra_specs -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:08,223 DEBUG: http://192.0.3.1:8774 "GET /v2.1/flavors/9149f7f2-27d8-46ba-b434-3115be9b3078/os-extra_specs HTTP/1.1" 200 154 >2018-06-26 11:15:08,223 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 05:45:08 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-87fd0730-d8a4-47a3-b32d-51fe0be6340d x-compute-request-id: req-87fd0730-d8a4-47a3-b32d-51fe0be6340d Content-Encoding: gzip Content-Length: 154 Keep-Alive: timeout=15, max=87 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"extra_specs": {"resources:CUSTOM_BAREMETAL": "1", "resources:MEMORY_MB": "0", "resources:DISK_GB": "0", "capabilities:boot_option": "local", "resources:VCPU": "0", "capabilities:profile": "swift-storage"}} > >2018-06-26 11:15:08,223 DEBUG: GET call to compute for http://192.0.3.1:8774/v2.1/flavors/9149f7f2-27d8-46ba-b434-3115be9b3078/os-extra_specs used request id req-87fd0730-d8a4-47a3-b32d-51fe0be6340d >2018-06-26 11:15:08,225 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8774/v2.1/flavors/9149f7f2-27d8-46ba-b434-3115be9b3078/os-extra_specs -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '{"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "swift-storage", "resources:MEMORY_MB": "0", "resources:VCPU": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:DISK_GB": "0"}}' >2018-06-26 11:15:08,259 DEBUG: http://192.0.3.1:8774 "POST /v2.1/flavors/9149f7f2-27d8-46ba-b434-3115be9b3078/os-extra_specs HTTP/1.1" 200 154 
>2018-06-26 11:15:08,259 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 05:45:08 GMT Server: Apache OpenStack-API-Version: compute 2.1 X-OpenStack-Nova-API-Version: 2.1 Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding x-openstack-request-id: req-4df5de89-5e1f-45e5-ad92-628624d0948a x-compute-request-id: req-4df5de89-5e1f-45e5-ad92-628624d0948a Content-Encoding: gzip Content-Length: 154 Keep-Alive: timeout=15, max=86 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"extra_specs": {"capabilities:boot_option": "local", "capabilities:profile": "swift-storage", "resources:MEMORY_MB": "0", "resources:DISK_GB": "0", "resources:CUSTOM_BAREMETAL": "1", "resources:VCPU": "0"}} > >2018-06-26 11:15:08,259 DEBUG: POST call to compute for http://192.0.3.1:8774/v2.1/flavors/9149f7f2-27d8-46ba-b434-3115be9b3078/os-extra_specs used request id req-4df5de89-5e1f-45e5-ad92-628624d0948a >2018-06-26 11:15:08,260 INFO: Flavor swift-storage updated to use custom resource class baremetal >2018-06-26 11:15:08,260 INFO: Configuring Mistral workbooks >2018-06-26 11:15:08,260 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:08,261 DEBUG: Starting new HTTP connection (1): 192.0.3.1 >2018-06-26 11:15:08,936 DEBUG: http://192.0.3.1:8989 "GET /v2/workbooks HTTP/1.1" 200 250101 >2018-06-26 11:15:08,945 DEBUG: RESP: [200] Content-Length: 250101 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:08 GMT Connection: keep-alive >RESP BODY: {"workbooks": [{"definition": "---\nversion: '2.0'\nname: tripleo.access.v1\ndescription: TripleO administration access workflows\n\nworkflows:\n\n enable_ssh_admin:\n description: >-\n This workflow creates an admin user on the overcloud nodes,\n which can then be used for connecting for automated\n administrative or deployment tasks, e.g. 
via Ansible. The\n workflow can be used both for Nova-managed and split-stack\n deployments, assuming the correct input values are passed\n in. The workflow defaults to Nova-managed approach, for which no\n additional parameters need to be supplied. In case of\n split-stack, temporary ssh connection details (user, key, list\n of servers) need to be provided -- these are only used\n temporarily to create the actual ssh admin user for use by\n Mistral.\n tags:\n - tripleo-common-managed\n input:\n - ssh_private_key: null\n - ssh_user: null\n - ssh_servers: []\n - overcloud_admin: tripleo-admin\n - queue_name: tripleo\n tasks:\n get_pubkey:\n action: tripleo.validations.get_pubkey\n on-success: generate_playbook\n publish:\n pubkey: <% task().result %>\n\n generate_playbook:\n on-success:\n - create_admin_via_nova: <% $.ssh_private_key = null %>\n - create_admin_via_ssh: <% $.ssh_private_key != null %>\n publish:\n create_admin_tasks:\n - name: create user <% $.overcloud_admin %>\n user:\n name: '<% $.overcloud_admin %>'\n - name: grant admin rights to user <% $.overcloud_admin %>\n copy:\n dest: /etc/sudoers.d/<% $.overcloud_admin %>\n content: |\n <% $.overcloud_admin %> ALL=(ALL) NOPASSWD:ALL\n mode: 0440\n - name: ensure .ssh dir exists for user <% $.overcloud_admin %>\n file:\n path: /home/<% $.overcloud_admin %>/.ssh\n state: directory\n owner: <% $.overcloud_admin %>\n group: <% $.overcloud_admin %>\n mode: 0700\n - name: ensure authorized_keys file exists for user <% $.overcloud_admin %>\n file:\n path: /home/<% $.overcloud_admin %>/.ssh/authorized_keys\n state: touch\n owner: <% $.overcloud_admin %>\n group: <% $.overcloud_admin %>\n mode: 0700\n - name: authorize TripleO Mistral key for user <% $.overcloud_admin %>\n lineinfile:\n path: /home/<% $.overcloud_admin %>/.ssh/authorized_keys\n line: <% $.pubkey %>\n regexp: \"Generated by TripleO\"\n\n # Nova variant\n create_admin_via_nova:\n workflow: tripleo.access.v1.create_admin_via_nova\n input:\n 
queue_name: <% $.queue_name %>\n ssh_servers: <% $.ssh_servers %>\n tasks: <% $.create_admin_tasks %>\n overcloud_admin: <% $.overcloud_admin %>\n\n # SSH variant\n create_admin_via_ssh:\n workflow: tripleo.access.v1.create_admin_via_ssh\n input:\n ssh_private_key: <% $.ssh_private_key %>\n ssh_user: <% $.ssh_user %>\n ssh_servers: <% $.ssh_servers %>\n tasks: <% $.create_admin_tasks %>\n\n create_admin_via_nova:\n input:\n - tasks\n - queue_name: tripleo\n - ssh_servers: []\n - overcloud_admin: tripleo-admin\n - ansible_extra_env_variables:\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n tags:\n - tripleo-common-managed\n tasks:\n get_servers:\n action: nova.servers_list\n on-success: create_admin\n publish:\n servers: <% let(root => $) -> task().result._info.where($.addresses.ctlplane.addr.any($ in $root.ssh_servers)) %>\n\n create_admin:\n workflow: tripleo.deployment.v1.deploy_on_server\n on-success: get_privkey\n with-items: server in <% $.servers %>\n input:\n server_name: <% $.server.name %>\n server_uuid: <% $.server.id %>\n queue_name: <% $.queue_name %>\n config_name: create_admin\n group: ansible\n config: |\n - hosts: localhost\n connection: local\n tasks: <% json_pp($.tasks) %>\n\n get_privkey:\n action: tripleo.validations.get_privkey\n on-success: wait_for_occ\n publish:\n privkey: <% task().result %>\n\n wait_for_occ:\n action: tripleo.ansible-playbook\n input:\n inventory:\n overcloud:\n hosts: <% $.ssh_servers.toDict($, {}) %>\n remote_user: <% $.overcloud_admin %>\n ssh_private_key: <% $.privkey %>\n extra_env_variables: <% $.ansible_extra_env_variables %>\n playbook:\n - hosts: overcloud\n gather_facts: no\n tasks:\n - name: wait for connection\n wait_for_connection:\n sleep: 5\n timeout: 300\n\n create_admin_via_ssh:\n input:\n - tasks\n - ssh_private_key\n - ssh_user\n - ssh_servers\n - ansible_extra_env_variables:\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n\n tags:\n - tripleo-common-managed\n tasks:\n write_tmp_playbook:\n action: 
tripleo.ansible-playbook\n input:\n inventory:\n overcloud:\n hosts: <% $.ssh_servers.toDict($, {}) %>\n remote_user: <% $.ssh_user %>\n ssh_private_key: <% $.ssh_private_key %>\n extra_env_variables: <% $.ansible_extra_env_variables %>\n become: true\n become_user: root\n playbook:\n - hosts: overcloud\n tasks: <% $.tasks %>\n", "name": "tripleo.access.v1", "tags": [], "created_at": "2018-06-26 04:26:33", "updated_at": null, "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "291b24e1-7f5e-4738-92c0-8ef3a5201974"}, {"definition": "---\nversion: '2.0'\nname: tripleo.stack.v1\ndescription: TripleO Stack Workflows\n\nworkflows:\n\n wait_for_stack_complete_or_failed:\n input:\n - stack\n - timeout: 14400 # 4 hours. Default timeout of stack deployment\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n wait_for_stack_status:\n action: heat.stacks_get stack_id=<% $.stack %>\n timeout: <% $.timeout %>\n retry:\n delay: 15\n count: <% $.timeout / 15 %>\n continue-on: <% task().result.stack_status in ['CREATE_IN_PROGRESS', 'UPDATE_IN_PROGRESS', 'DELETE_IN_PROGRESS'] %>\n\n wait_for_stack_in_progress:\n input:\n - stack\n - timeout: 600 # 10 minutes. 
Should not take much longer for a stack to transition to IN_PROGRESS\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n wait_for_stack_status:\n action: heat.stacks_get stack_id=<% $.stack %>\n timeout: <% $.timeout %>\n retry:\n delay: 15\n count: <% $.timeout / 15 %>\n continue-on: <% task().result.stack_status in ['CREATE_COMPLETE', 'CREATE_FAILED', 'UPDATE_COMPLETE', 'UPDATE_FAILED', 'DELETE_FAILED'] %>\n\n wait_for_stack_does_not_exist:\n input:\n - stack\n - timeout: 3600\n\n tags:\n - tripleo-common-managed\n\n tasks:\n wait_for_stack_does_not_exist:\n action: heat.stacks_list\n timeout: <% $.timeout %>\n retry:\n delay: 15\n count: <% $.timeout / 15 %>\n continue-on: <% $.stack in task(wait_for_stack_does_not_exist).result.select([$.stack_name, $.id]).flatten() %>\n\n delete_stack:\n input:\n - stack\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n delete_the_stack:\n action: heat.stacks_delete stack_id=<% $.stack %>\n on-success: wait_for_stack_does_not_exist\n on-error: delete_the_stack_failed\n\n delete_the_stack_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(delete_the_stack).result %>\n\n wait_for_stack_does_not_exist:\n workflow: tripleo.stack.v1.wait_for_stack_does_not_exist stack=<% $.stack %>\n on-success: send_message\n on-error: wait_for_stack_does_not_exist_failed\n\n wait_for_stack_does_not_exist_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(wait_for_stack_does_not_exist).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.scale.v1.delete_stack\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.stack.v1", "tags": [], "created_at": "2018-06-26 04:26:34", "updated_at": null, "scope": "private", 
"project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "6bbb5f09-3cbe-4b9c-bd26-9093251781c9"}, {"definition": "---\nversion: '2.0'\nname: tripleo.validations.v1\ndescription: TripleO Validations Workflows v1\n\nworkflows:\n\n run_validation:\n input:\n - validation_name\n - plan: overcloud\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n notify_running:\n on-complete: run_validation\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validation\n payload:\n validation_name: <% $.validation_name %>\n plan: <% $.plan %>\n status: RUNNING\n execution: <% execution() %>\n\n run_validation:\n on-success: send_message\n on-error: set_status_failed\n action: tripleo.validations.run_validation validation=<% $.validation_name %> plan=<% $.plan %>\n publish:\n status: SUCCESS\n stdout: <% task().result.stdout %>\n stderr: <% task().result.stderr %>\n\n set_status_failed:\n on-complete: send_message\n publish:\n status: FAILED\n stdout: <% task(run_validation).result.stdout %>\n stderr: <% task(run_validation).result.stderr %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validation\n payload:\n validation_name: <% $.validation_name %>\n plan: <% $.plan %>\n status: <% $.get('status', 'SUCCESS') %>\n stdout: <% $.stdout %>\n stderr: <% $.stderr %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n run_validations:\n input:\n - validation_names: []\n - plan: overcloud\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n notify_running:\n on-complete: run_validations\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validations\n payload:\n validation_names: <% 
$.validation_names %>\n plan: <% $.plan %>\n status: RUNNING\n execution: <% execution() %>\n\n run_validations:\n on-success: send_message\n on-error: set_status_failed\n workflow: tripleo.validations.v1.run_validation validation_name=<% $.validation %> plan=<% $.plan %> queue_name=<% $.queue_name %>\n with-items: validation in <% $.validation_names %>\n publish:\n status: SUCCESS\n\n set_status_failed:\n on-complete: send_message\n publish:\n status: FAILED\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validations\n payload:\n validation_names: <% $.validation_names %>\n plan: <% $.plan %>\n status: <% $.get('status', 'SUCCESS') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n run_groups:\n input:\n - group_names: []\n - plan: overcloud\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n find_validations:\n on-success: notify_running\n action: tripleo.validations.list_validations groups=<% $.group_names %>\n publish:\n validations: <% task().result %>\n\n notify_running:\n on-complete: run_validation_group\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validations\n payload:\n group_names: <% $.group_names %>\n validation_names: <% $.validations.id %>\n plan: <% $.plan %>\n status: RUNNING\n execution: <% execution() %>\n\n run_validation_group:\n on-success: send_message\n on-error: set_status_failed\n workflow: tripleo.validations.v1.run_validation validation_name=<% $.validation %> plan=<% $.plan %> queue_name=<% $.queue_name %>\n with-items: validation in <% $.validations.id %>\n publish:\n status: SUCCESS\n\n set_status_failed:\n on-complete: send_message\n publish:\n status: FAILED\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n 
queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_groups\n payload:\n group_names: <% $.group_names %>\n validation_names: <% $.validations.id %>\n plan: <% $.plan %>\n status: <% $.get('status', 'SUCCESS') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n list:\n input:\n - group_names: []\n tags:\n - tripleo-common-managed\n tasks:\n find_validations:\n action: tripleo.validations.list_validations groups=<% $.group_names %>\n\n list_groups:\n tags:\n - tripleo-common-managed\n tasks:\n find_groups:\n action: tripleo.validations.list_groups\n\n add_validation_ssh_key_parameter:\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n test_validations_enabled:\n action: tripleo.validations.enabled\n on-success: get_pubkey\n on-error: unset_validation_key_parameter\n\n get_pubkey:\n action: tripleo.validations.get_pubkey\n on-success: set_validation_key_parameter\n publish:\n pubkey: <% task().result %>\n\n set_validation_key_parameter:\n action: tripleo.parameters.update\n input:\n parameters:\n node_admin_extra_ssh_keys: <% $.pubkey %>\n container: <% $.container %>\n\n # NOTE(shadower): We need to clear keys from a previous deployment\n unset_validation_key_parameter:\n action: tripleo.parameters.update\n input:\n parameters:\n node_admin_extra_ssh_keys: \"\"\n container: <% $.container %>\n\n copy_ssh_key:\n input:\n # FIXME: we should stop using heat-admin as e.g. 
split-stack\n # environments (where Nova didn't create overcloud nodes) don't\n # have it present\n - overcloud_admin: heat-admin\n - queue_name: tripleo\n tags:\n - tripleo-common-managed\n tasks:\n get_servers:\n action: nova.servers_list\n on-success: get_pubkey\n publish:\n servers: <% task().result._info %>\n\n get_pubkey:\n action: tripleo.validations.get_pubkey\n on-success: deploy_ssh_key\n publish:\n pubkey: <% task().result %>\n\n deploy_ssh_key:\n workflow: tripleo.deployment.v1.deploy_on_server\n with-items: server in <% $.servers %>\n input:\n server_name: <% $.server.name %>\n server_uuid: <% $.server.id %>\n config: |\n #!/bin/bash\n if ! grep \"<% $.pubkey %>\" /home/<% $.overcloud_admin %>/.ssh/authorized_keys; then\n echo \"<% $.pubkey %>\" >> /home/<% $.overcloud_admin %>/.ssh/authorized_keys\n fi\n config_name: copy_ssh_key\n group: script\n queue_name: <% $.queue_name %>\n\n check_boot_images:\n input:\n - deploy_kernel_name: 'bm-deploy-kernel'\n - deploy_ramdisk_name: 'bm-deploy-ramdisk'\n - run_validations: true\n - queue_name: tripleo\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n tags:\n - tripleo-common-managed\n tasks:\n check_run_validations:\n on-complete:\n - get_images: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n get_images:\n action: glance.images_list\n on-success: check_images\n publish:\n images: <% task().result %>\n\n check_images:\n action: tripleo.validations.check_boot_images\n input:\n images: <% $.images %>\n deploy_kernel_name: <% $.deploy_kernel_name %>\n deploy_ramdisk_name: <% $.deploy_ramdisk_name %>\n on-success: send_message\n publish:\n kernel_id: <% task().result.kernel_id %>\n ramdisk_id: <% task().result.ramdisk_id %>\n warnings: <% task().result.warnings %>\n errors: <% task().result.errors %>\n on-error: send_message\n publish-on-error:\n kernel_id: <% task().result.kernel_id %>\n ramdisk_id: <% 
task().result.ramdisk_id %>\n warnings: <% task().result.warnings %>\n errors: <% task().result.errors %>\n status: FAILED\n message: <% task().result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.check_boot_images\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n collect_flavors:\n input:\n - roles_info: {}\n - run_validations: true\n - queue_name: tripleo\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n flavors: <% $.flavors %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_run_validations:\n on-complete:\n - check_flavors: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n check_flavors:\n action: tripleo.validations.check_flavors\n input:\n roles_info: <% $.roles_info %>\n on-success: send_message\n publish:\n flavors: <% task().result.flavors %>\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n on-error: send_message\n publish-on-error:\n flavors: {}\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n status: FAILED\n message: <% task().result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.collect_flavors\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n flavors: <% $.flavors %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n check_ironic_boot_configuration:\n input:\n - kernel_id: null\n - ramdisk_id: null\n - run_validations: 
true\n - queue_name: tripleo\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_run_validations:\n on-complete:\n - get_ironic_nodes: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n get_ironic_nodes:\n action: ironic.node_list\n input:\n provision_state: available\n maintenance: false\n detail: true\n on-success: check_node_boot_configuration\n publish:\n nodes: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n check_node_boot_configuration:\n action: tripleo.validations.check_node_boot_configuration\n input:\n node: <% $.node %>\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n with-items: node in <% $.nodes %>\n on-success: send_message\n publish:\n errors: <% task().result.errors.flatten() %>\n warnings: <% task().result.warnings.flatten() %>\n on-error: send_message\n publish-on-error:\n errors: <% task().result.errors.flatten() %>\n warnings: <% task().result.warnings.flatten() %>\n status: FAILED\n message: <% task().result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.check_ironic_boot_configuration\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n verify_profiles:\n input:\n - flavors: []\n - run_validations: true\n - queue_name: tripleo\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_run_validations:\n on-complete:\n - get_ironic_nodes: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n get_ironic_nodes:\n action: ironic.node_list\n input:\n maintenance: false\n detail: true\n 
on-success: verify_profiles\n publish:\n nodes: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n verify_profiles:\n action: tripleo.validations.verify_profiles\n input:\n nodes: <% $.nodes %>\n flavors: <% $.flavors %>\n on-success: send_message\n publish:\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n on-error: send_message\n publish-on-error:\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n status: FAILED\n message: <% task().result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.verify_profiles\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n check_default_nodes_count:\n input:\n - stack_id: overcloud\n - parameters: {}\n - default_role_counts: {}\n - run_validations: true\n - queue_name: tripleo\n output:\n statistics: <% $.statistics %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_run_validations:\n on-complete:\n - get_hypervisor_statistics: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n get_hypervisor_statistics:\n action: nova.hypervisors_statistics\n on-success: get_stack\n publish:\n statistics: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n errors: []\n warnings: []\n statistics: null\n\n get_stack:\n action: heat.stacks_get\n input:\n stack_id: <% $.stack_id %>\n on-success: get_associated_nodes\n publish:\n stack: <% task().result %>\n on-error: get_associated_nodes\n publish-on-error:\n stack: null\n\n get_associated_nodes:\n action: ironic.node_list\n input:\n 
associated: true\n on-success: get_available_nodes\n publish:\n associated_nodes: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n errors: []\n warnings: []\n\n get_available_nodes:\n action: ironic.node_list\n input:\n provision_state: available\n associated: false\n maintenance: false\n on-success: check_nodes_count\n publish:\n available_nodes: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n errors: []\n warnings: []\n\n check_nodes_count:\n action: tripleo.validations.check_nodes_count\n input:\n statistics: <% $.statistics %>\n stack: <% $.stack %>\n associated_nodes: <% $.associated_nodes %>\n available_nodes: <% $.available_nodes %>\n parameters: <% $.parameters %>\n default_role_counts: <% $.default_role_counts %>\n on-success: send_message\n publish:\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n statistics: null\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.check_hypervisor_stats\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n statistics: <% $.statistics %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n check_pre_deployment_validations:\n input:\n - deploy_kernel_name: 'bm-deploy-kernel'\n - deploy_ramdisk_name: 'bm-deploy-ramdisk'\n - roles_info: {}\n - stack_id: overcloud\n - parameters: {}\n - default_role_counts: {}\n - run_validations: true\n - queue_name: tripleo\n\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n kernel_id: <% $.kernel_id %>\n 
ramdisk_id: <% $.ramdisk_id %>\n flavors: <% $.flavors %>\n statistics: <% $.statistics %>\n tags:\n - tripleo-common-managed\n tasks:\n init_messages:\n on-success: check_boot_images\n publish:\n errors: []\n warnings: []\n\n check_boot_images:\n workflow: check_boot_images\n input:\n deploy_kernel_name: <% $.deploy_kernel_name %>\n deploy_ramdisk_name: <% $.deploy_ramdisk_name %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n kernel_id: <% task().result.get('kernel_id') %>\n ramdisk_id: <% task().result.get('ramdisk_id') %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n kernel_id: <% task().result.get('kernel_id') %>\n ramdisk_id: <% task().result.get('ramdisk_id') %>\n status: FAILED\n on-success: collect_flavors\n on-error: collect_flavors\n\n collect_flavors:\n workflow: collect_flavors\n input:\n roles_info: <% $.roles_info %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n flavors: <% task().result.get('flavors') %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n flavors: <% task().result.get('flavors') %>\n status: FAILED\n on-success: check_ironic_boot_configuration\n on-error: check_ironic_boot_configuration\n\n check_ironic_boot_configuration:\n workflow: check_ironic_boot_configuration\n input:\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + 
task().result.get('warnings', []) %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n status: FAILED\n on-success: check_default_nodes_count\n on-error: check_default_nodes_count\n\n check_default_nodes_count:\n workflow: check_default_nodes_count\n # ironic-nova sync happens once in two minutes\n retry: count=12 delay=10\n input:\n stack_id: <% $.stack_id %>\n parameters: <% $.parameters %>\n default_role_counts: <% $.default_role_counts %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n statistics: <% task().result.get('statistics') %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n statistics: <% task().result.get('statistics') %>\n status: FAILED\n on-success: verify_profiles\n # Do not confuse user with info about profiles if the nodes\n # count is off in the first place. Skip directly to\n # send_message. 
(bug 1703942)\n on-error: send_message\n\n verify_profiles:\n workflow: verify_profiles\n input:\n flavors: <% $.flavors %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n status: FAILED\n on-success: send_message\n on-error: send_message\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.check_hypervisor_stats\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n flavors: <% $.flavors %>\n statistics: <% $.statistics %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.validations.v1", "tags": [], "created_at": "2018-06-26 04:26:35", "updated_at": null, "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "642e8c82-6af7-4f7b-bb31-061b92e88e25"}, {"definition": "---\nversion: '2.0'\nname: tripleo.derive_params_formulas.v1\ndescription: TripleO Workflows to derive deployment parameters from the introspected data\n\nworkflows:\n\n\n dpdk_derive_params:\n description: >\n Workflow to derive parameters for DPDK service.\n input:\n - plan\n - role_name\n - hw_data # introspection data\n - user_inputs\n - derived_parameters: {}\n\n output:\n derived_parameters: <% $.derived_parameters.mergeWith($.get('dpdk_parameters', {})) %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_network_config:\n action: tripleo.parameters.get_network_config\n input:\n container: <% $.plan %>\n role_name: <% $.role_name %>\n publish:\n 
network_configs: <% task().result.get('network_config', []) %>\n on-success: get_dpdk_nics_numa_info\n on-error: set_status_failed_get_network_config\n\n get_dpdk_nics_numa_info:\n action: tripleo.derive_params.get_dpdk_nics_numa_info\n input:\n network_configs: <% $.network_configs %>\n inspect_data: <% $.hw_data %>\n publish:\n dpdk_nics_numa_info: <% task().result %>\n on-success:\n # TODO: Need to remove condtions here\n # adding condition and throw error in action for empty check\n - get_dpdk_nics_numa_nodes: <% $.dpdk_nics_numa_info %>\n - set_status_failed_get_dpdk_nics_numa_info: <% not $.dpdk_nics_numa_info %>\n on-error: set_status_failed_on_error_get_dpdk_nics_numa_info\n\n get_dpdk_nics_numa_nodes:\n publish:\n dpdk_nics_numa_nodes: <% $.dpdk_nics_numa_info.groupBy($.numa_node).select($[0]).orderBy($) %>\n on-success:\n - get_numa_nodes: <% $.dpdk_nics_numa_nodes %>\n - set_status_failed_get_dpdk_nics_numa_nodes: <% not $.dpdk_nics_numa_nodes %>\n\n get_numa_nodes:\n publish:\n numa_nodes: <% $.hw_data.numa_topology.ram.select($.numa_node).orderBy($) %>\n on-success:\n - get_num_phy_cores_per_numa_for_pmd: <% $.numa_nodes %>\n - set_status_failed_get_numa_nodes: <% not $.numa_nodes %>\n\n get_num_phy_cores_per_numa_for_pmd:\n publish:\n num_phy_cores_per_numa_node_for_pmd: <% $.user_inputs.get('num_phy_cores_per_numa_node_for_pmd', 0) %>\n on-success:\n - get_num_cores_per_numa_nodes: <% isInteger($.num_phy_cores_per_numa_node_for_pmd) and $.num_phy_cores_per_numa_node_for_pmd > 0 %>\n - set_status_failed_get_num_phy_cores_per_numa_for_pmd_invalid: <% not isInteger($.num_phy_cores_per_numa_node_for_pmd) %>\n - set_status_failed_get_num_phy_cores_per_numa_for_pmd_not_provided: <% $.num_phy_cores_per_numa_node_for_pmd = 0 %>\n\n # For NUMA node with DPDK nic, number of cores should be used from user input\n # For NUMA node without DPDK nic, number of cores should be 1\n get_num_cores_per_numa_nodes:\n publish:\n num_cores_per_numa_nodes: <% 
let(dpdk_nics_nodes => $.dpdk_nics_numa_nodes, cores => $.num_phy_cores_per_numa_node_for_pmd) -> $.numa_nodes.select(switch($ in $dpdk_nics_nodes => $cores, not $ in $dpdk_nics_nodes => 1)) %>\n on-success: get_pmd_cpus\n\n get_pmd_cpus:\n action: tripleo.derive_params.get_dpdk_core_list\n input:\n inspect_data: <% $.hw_data %>\n numa_nodes_cores_count: <% $.num_cores_per_numa_nodes %>\n publish:\n pmd_cpus: <% task().result %>\n on-success:\n - get_pmd_cpus_range_list: <% $.pmd_cpus %>\n - set_status_failed_get_pmd_cpus: <% not $.pmd_cpus %>\n on-error: set_status_failed_on_error_get_pmd_cpus\n\n get_pmd_cpus_range_list:\n action: tripleo.derive_params.convert_number_to_range_list\n input:\n num_list: <% $.pmd_cpus %>\n publish:\n pmd_cpus: <% task().result %>\n on-success: get_host_cpus\n on-error: set_status_failed_get_pmd_cpus_range_list\n\n get_host_cpus:\n workflow: tripleo.derive_params_formulas.v1.get_host_cpus\n input:\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n publish:\n host_cpus: <% task().result.get('host_cpus', '') %>\n on-success: get_sock_mem\n on-error: set_status_failed_get_host_cpus\n\n get_sock_mem:\n action: tripleo.derive_params.get_dpdk_socket_memory\n input:\n dpdk_nics_numa_info: <% $.dpdk_nics_numa_info %>\n numa_nodes: <% $.numa_nodes %>\n overhead: <% $.user_inputs.get('overhead', 800) %>\n packet_size_in_buffer: <% 4096*64 %>\n publish:\n sock_mem: <% task().result %>\n on-success:\n - get_dpdk_parameters: <% $.sock_mem %>\n - set_status_failed_get_sock_mem: <% not $.sock_mem %>\n on-error: set_status_failed_on_error_get_sock_mem\n\n get_dpdk_parameters:\n publish:\n dpdk_parameters: <% dict(concat($.role_name, 'Parameters') => dict('OvsPmdCoreList' => $.get('pmd_cpus', ''), 'OvsDpdkCoreList' => $.get('host_cpus', ''), 'OvsDpdkSocketMemory' => $.get('sock_mem', ''))) %>\n\n set_status_failed_get_network_config:\n publish:\n status: FAILED\n message: <% task(get_network_config).result %>\n on-success: fail\n\n 
set_status_failed_get_dpdk_nics_numa_info:\n publish:\n status: FAILED\n message: \"Unable to determine DPDK NIC's NUMA information\"\n on-success: fail\n\n set_status_failed_on_error_get_dpdk_nics_numa_info:\n publish:\n status: FAILED\n message: <% task(get_dpdk_nics_numa_info).result %>\n on-success: fail\n\n set_status_failed_get_dpdk_nics_numa_nodes:\n publish:\n status: FAILED\n message: \"Unable to determine DPDK NIC's numa nodes\"\n on-success: fail\n\n set_status_failed_get_numa_nodes:\n publish:\n status: FAILED\n message: 'Unable to determine available NUMA nodes'\n on-success: fail\n\n set_status_failed_get_num_phy_cores_per_numa_for_pmd_invalid:\n publish:\n status: FAILED\n message: <% \"num_phy_cores_per_numa_node_for_pmd user input '{0}' is invalid\".format($.num_phy_cores_per_numa_node_for_pmd) %>\n on-success: fail\n\n set_status_failed_get_num_phy_cores_per_numa_for_pmd_not_provided:\n publish:\n status: FAILED\n message: 'num_phy_cores_per_numa_node_for_pmd user input is not provided'\n on-success: fail\n\n set_status_failed_get_pmd_cpus:\n publish:\n status: FAILED\n message: 'Unable to determine OvsPmdCoreList parameter'\n on-success: fail\n\n set_status_failed_on_error_get_pmd_cpus:\n publish:\n status: FAILED\n message: <% task(get_pmd_cpus).result %>\n on-success: fail\n\n set_status_failed_get_pmd_cpus_range_list:\n publish:\n status: FAILED\n message: <% task(get_pmd_cpus_range_list).result %>\n on-success: fail\n\n set_status_failed_get_host_cpus:\n publish:\n status: FAILED\n message: <% task(get_host_cpus).result.get('message', '') %>\n on-success: fail\n\n set_status_failed_get_sock_mem:\n publish:\n status: FAILED\n message: 'Unable to determine OvsDpdkSocketMemory parameter'\n on-success: fail\n\n set_status_failed_on_error_get_sock_mem:\n publish:\n status: FAILED\n message: <% task(get_sock_mem).result %>\n on-success: fail\n\n\n sriov_derive_params:\n description: >\n This workflow derives parameters for the SRIOV feature.\n\n 
input:\n - role_name\n - hw_data # introspection data\n - derived_parameters: {}\n\n output:\n derived_parameters: <% $.derived_parameters.mergeWith($.get('sriov_parameters', {})) %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_host_cpus:\n workflow: tripleo.derive_params_formulas.v1.get_host_cpus\n input:\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n publish:\n host_cpus: <% task().result.get('host_cpus', '') %>\n on-success: get_sriov_parameters\n on-error: set_status_failed_get_host_cpus\n\n get_sriov_parameters:\n publish:\n # SriovHostCpusList parameter is added temporarily and it's removed later from derived parameters result.\n sriov_parameters: <% dict(concat($.role_name, 'Parameters') => dict('SriovHostCpusList' => $.get('host_cpus', ''))) %>\n\n set_status_failed_get_host_cpus:\n publish:\n status: FAILED\n message: <% task(get_host_cpus).result.get('message', '') %>\n on-success: fail\n\n\n get_host_cpus:\n description: >\n Fetching the host CPU list from the introspection data, and then converting the raw list into a range list.\n\n input:\n - hw_data # introspection data\n\n output:\n host_cpus: <% $.get('host_cpus', '') %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_host_cpus:\n action: tripleo.derive_params.get_host_cpus_list inspect_data=<% $.hw_data %>\n publish:\n host_cpus: <% task().result %>\n on-success:\n - get_host_cpus_range_list: <% $.host_cpus %>\n - set_status_failed_get_host_cpus: <% not $.host_cpus %>\n on-error: set_status_failed_on_error_get_host_cpus\n\n get_host_cpus_range_list:\n action: tripleo.derive_params.convert_number_to_range_list\n input:\n num_list: <% $.host_cpus %>\n publish:\n host_cpus: <% task().result %>\n on-error: set_status_failed_get_host_cpus_range_list\n\n set_status_failed_get_host_cpus:\n publish:\n status: FAILED\n message: 'Unable to determine host cpus'\n on-success: fail\n\n set_status_failed_on_error_get_host_cpus:\n publish:\n status: FAILED\n message: <% 
task(get_host_cpus).result %>\n on-success: fail\n\n set_status_failed_get_host_cpus_range_list:\n publish:\n status: FAILED\n message: <% task(get_host_cpus_range_list).result %>\n on-success: fail\n\n\n host_derive_params:\n description: >\n This workflow derives parameters for the Host process, and is mainly associated with CPU pinning and huge memory pages.\n This workflow can be dependent on any feature or also can be invoked individually as well.\n\n input:\n - role_name\n - hw_data # introspection data\n - user_inputs\n - derived_parameters: {}\n\n output:\n derived_parameters: <% $.derived_parameters.mergeWith($.get('host_parameters', {})) %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_cpus:\n publish:\n cpus: <% $.hw_data.numa_topology.cpus %>\n on-success:\n - get_role_derive_params: <% $.cpus %>\n - set_status_failed_get_cpus: <% not $.cpus %>\n\n get_role_derive_params:\n publish:\n role_derive_params: <% $.derived_parameters.get(concat($.role_name, 'Parameters'), {}) %>\n # removing the role parameters (eg. 
ComputeParameters) in derived_parameters dictionary since already copied in role_derive_params.\n derived_parameters: <% $.derived_parameters.delete(concat($.role_name, 'Parameters')) %>\n on-success: get_host_cpus\n\n get_host_cpus:\n publish:\n host_cpus: <% $.role_derive_params.get('OvsDpdkCoreList', '') or $.role_derive_params.get('SriovHostCpusList', '') %>\n # SriovHostCpusList parameter is added temporarily for host_cpus and not needed in derived_parameters result.\n # SriovHostCpusList parameter is deleted in derived_parameters list and adding the updated role parameters\n # back in the derived_parameters.\n derived_parameters: <% $.derived_parameters + dict(concat($.role_name, 'Parameters') => $.role_derive_params.delete('SriovHostCpusList')) %>\n on-success: get_host_dpdk_combined_cpus\n\n get_host_dpdk_combined_cpus:\n publish:\n host_dpdk_combined_cpus: <% let(pmd_cpus => $.role_derive_params.get('OvsPmdCoreList', '')) -> switch($pmd_cpus => concat($pmd_cpus, ',', $.host_cpus), not $pmd_cpus => $.host_cpus) %>\n reserved_cpus: []\n on-success:\n - get_host_dpdk_combined_cpus_num_list: <% $.host_dpdk_combined_cpus %>\n - set_status_failed_get_host_dpdk_combined_cpus: <% not $.host_dpdk_combined_cpus %>\n\n get_host_dpdk_combined_cpus_num_list:\n action: tripleo.derive_params.convert_range_to_number_list\n input:\n range_list: <% $.host_dpdk_combined_cpus %>\n publish:\n host_dpdk_combined_cpus: <% task().result %>\n reserved_cpus: <% task().result.split(',') %>\n on-success: get_nova_cpus\n on-error: set_status_failed_get_host_dpdk_combined_cpus_num_list\n\n get_nova_cpus:\n publish:\n nova_cpus: <% let(reserved_cpus => $.reserved_cpus) -> $.cpus.select($.thread_siblings).flatten().where(not (str($) in $reserved_cpus)).join(',') %>\n on-success:\n - get_isol_cpus: <% $.nova_cpus %>\n - set_status_failed_get_nova_cpus: <% not $.nova_cpus %>\n\n # concatinates OvsPmdCoreList range format and NovaVcpuPinSet in range format. 
it may not be in perfect range format.\n # example: concatinates '12-15,19' and 16-18' ranges '12-15,19,16-18'\n get_isol_cpus:\n publish:\n isol_cpus: <% let(pmd_cpus => $.role_derive_params.get('OvsPmdCoreList','')) -> switch($pmd_cpus => concat($pmd_cpus, ',', $.nova_cpus), not $pmd_cpus => $.nova_cpus) %>\n on-success: get_isol_cpus_num_list\n\n # Gets the isol_cpus in the number list\n # example: '12-15,19,16-18' into '12,13,14,15,16,17,18,19'\n get_isol_cpus_num_list:\n action: tripleo.derive_params.convert_range_to_number_list\n input:\n range_list: <% $.isol_cpus %>\n publish:\n isol_cpus: <% task().result %>\n on-success: get_nova_cpus_range_list\n on-error: set_status_failed_get_isol_cpus_num_list\n\n get_nova_cpus_range_list:\n action: tripleo.derive_params.convert_number_to_range_list\n input:\n num_list: <% $.nova_cpus %>\n publish:\n nova_cpus: <% task().result %>\n on-success: get_isol_cpus_range_list\n on-error: set_status_failed_get_nova_cpus_range_list\n\n # converts number format isol_cpus into range format\n # example: '12,13,14,15,16,17,18,19' into '12-19'\n get_isol_cpus_range_list:\n action: tripleo.derive_params.convert_number_to_range_list\n input:\n num_list: <% $.isol_cpus %>\n publish:\n isol_cpus: <% task().result %>\n on-success: get_host_mem\n on-error: set_status_failed_get_isol_cpus_range_list\n\n get_host_mem:\n publish:\n host_mem: <% $.user_inputs.get('host_mem_default', 4096) %>\n on-success: check_default_hugepage_supported\n\n check_default_hugepage_supported:\n publish:\n default_hugepage_supported: <% $.hw_data.get('inventory', {}).get('cpu', {}).get('flags', []).contains('pdpe1gb') %>\n on-success:\n - get_total_memory: <% $.default_hugepage_supported %>\n - set_status_failed_check_default_hugepage_supported: <% not $.default_hugepage_supported %>\n\n get_total_memory:\n publish:\n total_memory: <% $.hw_data.get('inventory', {}).get('memory', {}).get('physical_mb', 0) %>\n on-success:\n - get_hugepage_allocation_percentage: 
<% $.total_memory %>\n - set_status_failed_get_total_memory: <% not $.total_memory %>\n\n get_hugepage_allocation_percentage:\n publish:\n huge_page_allocation_percentage: <% $.user_inputs.get('huge_page_allocation_percentage', 0) %>\n on-success:\n - get_hugepages: <% isInteger($.huge_page_allocation_percentage) and $.huge_page_allocation_percentage > 0 %>\n - set_status_failed_get_hugepage_allocation_percentage_invalid: <% not isInteger($.huge_page_allocation_percentage) %>\n - set_status_failed_get_hugepage_allocation_percentage_not_provided: <% $.huge_page_allocation_percentage = 0 %>\n\n get_hugepages:\n publish:\n hugepages: <% let(huge_page_perc => float($.huge_page_allocation_percentage)/100)-> int((($.total_memory/1024)-4) * $huge_page_perc) %>\n on-success:\n - get_cpu_model: <% $.hugepages %>\n - set_status_failed_get_hugepages: <% not $.hugepages %>\n\n get_cpu_model:\n publish:\n intel_cpu_model: <% $.hw_data.get('inventory', {}).get('cpu', {}).get('model_name', '').startsWith('Intel') %>\n on-success: get_iommu_info\n\n get_iommu_info:\n publish:\n iommu_info: <% switch($.intel_cpu_model => 'intel_iommu=on iommu=pt', not $.intel_cpu_model => '') %>\n on-success: get_kernel_args\n\n get_kernel_args:\n publish:\n kernel_args: <% concat('default_hugepagesz=1GB hugepagesz=1G ', 'hugepages=', str($.hugepages), ' ', $.iommu_info, ' isolcpus=', $.isol_cpus) %>\n on-success: get_host_parameters\n\n get_host_parameters:\n publish:\n host_parameters: <% dict(concat($.role_name, 'Parameters') => dict('NovaVcpuPinSet' => $.get('nova_cpus', ''), 'NovaReservedHostMemory' => $.get('host_mem', ''), 'KernelArgs' => $.get('kernel_args', ''), 'IsolCpusList' => $.get('isol_cpus', ''))) %>\n\n set_status_failed_get_cpus:\n publish:\n status: FAILED\n message: \"Unable to determine CPU's on NUMA nodes\"\n on-success: fail\n\n set_status_failed_get_host_dpdk_combined_cpus:\n publish:\n status: FAILED\n message: 'Unable to combine host and dpdk cpus list'\n on-success: 
fail\n\n set_status_failed_get_host_dpdk_combined_cpus_num_list:\n publish:\n status: FAILED\n message: <% task(get_host_dpdk_combined_cpus_num_list).result %>\n on-success: fail\n\n set_status_failed_get_nova_cpus:\n publish:\n status: FAILED\n message: 'Unable to determine nova vcpu pin set'\n on-success: fail\n\n set_status_failed_get_nova_cpus_range_list:\n publish:\n status: FAILED\n message: <% task(get_nova_cpus_range_list).result %>\n on-success: fail\n\n set_status_failed_get_isol_cpus_num_list:\n publish:\n status: FAILED\n message: <% task(get_isol_cpus_num_list).result %>\n on-success: fail\n\n set_status_failed_get_isol_cpus_range_list:\n publish:\n status: FAILED\n message: <% task(get_isol_cpus_range_list).result %>\n on-success: fail\n\n set_status_failed_check_default_hugepage_supported:\n publish:\n status: FAILED\n message: 'default huge page size 1GB is not supported'\n on-success: fail\n\n set_status_failed_get_total_memory:\n publish:\n status: FAILED\n message: 'Unable to determine total memory'\n on-success: fail\n\n set_status_failed_get_hugepage_allocation_percentage_invalid:\n publish:\n status: FAILED\n message: <% \"huge_page_allocation_percentage user input '{0}' is invalid\".format($.huge_page_allocation_percentage) %>\n on-success: fail\n\n set_status_failed_get_hugepage_allocation_percentage_not_provided:\n publish:\n status: FAILED\n message: 'huge_page_allocation_percentage user input is not provided'\n on-success: fail\n\n set_status_failed_get_hugepages:\n publish:\n status: FAILED\n message: 'Unable to determine huge pages'\n on-success: fail\n\n\n hci_derive_params:\n description: Derive the deployment parameters for HCI\n input:\n - role_name\n - environment_parameters\n - heat_resource_tree\n - introspection_data\n - user_inputs\n - derived_parameters: {}\n\n output:\n derived_parameters: <% $.derived_parameters.mergeWith($.get('hci_parameters', {})) %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_hci_inputs:\n 
publish:\n hci_profile: <% $.user_inputs.get('hci_profile', '') %>\n hci_profile_config: <% $.user_inputs.get('hci_profile_config', {}) %>\n MB_PER_GB: 1024\n on-success:\n - get_average_guest_memory_size_in_mb: <% $.hci_profile and $.hci_profile_config.get($.hci_profile, {}) %>\n - set_failed_invalid_hci_profile: <% $.hci_profile and not $.hci_profile_config.get($.hci_profile, {}) %>\n # When no hci_profile is specified, the workflow terminates without deriving any HCI parameters.\n\n get_average_guest_memory_size_in_mb:\n publish:\n average_guest_memory_size_in_mb: <% $.hci_profile_config.get($.hci_profile, {}).get('average_guest_memory_size_in_mb', 0) %>\n on-success:\n - get_average_guest_cpu_utilization_percentage: <% isInteger($.average_guest_memory_size_in_mb) %>\n - set_failed_invalid_average_guest_memory_size_in_mb: <% not isInteger($.average_guest_memory_size_in_mb) %>\n\n get_average_guest_cpu_utilization_percentage:\n publish:\n average_guest_cpu_utilization_percentage: <% $.hci_profile_config.get($.hci_profile, {}).get('average_guest_cpu_utilization_percentage', 0) %>\n on-success:\n - get_gb_overhead_per_guest: <% isInteger($.average_guest_cpu_utilization_percentage) %>\n - set_failed_invalid_average_guest_cpu_utilization_percentage: <% not isInteger($.average_guest_cpu_utilization_percentage) %>\n\n get_gb_overhead_per_guest:\n publish:\n gb_overhead_per_guest: <% $.user_inputs.get('gb_overhead_per_guest', 0.5) %>\n on-success:\n - get_gb_per_osd: <% isNumber($.gb_overhead_per_guest) %>\n - set_failed_invalid_gb_overhead_per_guest: <% not isNumber($.gb_overhead_per_guest) %>\n\n get_gb_per_osd:\n publish:\n gb_per_osd: <% $.user_inputs.get('gb_per_osd', 3) %>\n on-success:\n - get_cores_per_osd: <% isNumber($.gb_per_osd) %>\n - set_failed_invalid_gb_per_osd: <% not isNumber($.gb_per_osd) %>\n\n get_cores_per_osd:\n publish:\n cores_per_osd: <% $.user_inputs.get('cores_per_osd', 1.0) %>\n on-success:\n - get_extra_configs: <% isNumber($.cores_per_osd) 
%>\n - set_failed_invalid_cores_per_osd: <% not isNumber($.cores_per_osd) %>\n\n get_extra_configs:\n publish:\n extra_config: <% $.environment_parameters.get('ExtraConfig', {}) %>\n role_extra_config: <% $.environment_parameters.get(concat($.role_name, 'ExtraConfig'), {}) %>\n role_env_params: <% $.environment_parameters.get(concat($.role_name, 'Parameters'), {}) %>\n role_derive_params: <% $.derived_parameters.get(concat($.role_name, 'Parameters'), {}) %>\n on-success: get_num_osds\n\n get_num_osds:\n publish:\n num_osds: <% $.heat_resource_tree.parameters.get('CephAnsibleDisksConfig', {}).get('default', {}).get('devices', []).count() %>\n on-success:\n - get_memory_mb: <% $.num_osds %>\n # If there's no CephAnsibleDisksConfig then look for OSD configuration in hiera data\n - get_num_osds_from_hiera: <% not $.num_osds %>\n\n get_num_osds_from_hiera:\n publish:\n num_osds: <% $.role_extra_config.get('ceph::profile::params::osds', $.extra_config.get('ceph::profile::params::osds', {})).keys().count() %>\n on-success:\n - get_memory_mb: <% $.num_osds %>\n - set_failed_no_osds: <% not $.num_osds %>\n\n get_memory_mb:\n publish:\n memory_mb: <% $.introspection_data.get('memory_mb', 0) %>\n on-success:\n - get_nova_vcpu_pin_set: <% $.memory_mb %>\n - set_failed_get_memory_mb: <% not $.memory_mb %>\n\n # Determine the number of CPU cores available to Nova and Ceph. 
If\n # NovaVcpuPinSet is defined then use the number of vCPUs in the set,\n # otherwise use all of the cores identified in the introspection data.\n\n get_nova_vcpu_pin_set:\n publish:\n # NovaVcpuPinSet can be defined in multiple locations, and it's\n # important to select the value in order of precedence:\n #\n # 1) User specified value for this role\n # 2) User specified default value for all roles\n # 3) Value derived by another derived parameters workflow\n nova_vcpu_pin_set: <% $.role_env_params.get('NovaVcpuPinSet', $.environment_parameters.get('NovaVcpuPinSet', $.role_derive_params.get('NovaVcpuPinSet', ''))) %>\n on-success:\n - get_nova_vcpu_count: <% $.nova_vcpu_pin_set %>\n - get_num_cores: <% not $.nova_vcpu_pin_set %>\n\n get_nova_vcpu_count:\n action: tripleo.derive_params.convert_range_to_number_list\n input:\n range_list: <% $.nova_vcpu_pin_set %>\n publish:\n num_cores: <% task().result.split(',').count() %>\n on-success: calculate_nova_parameters\n on-error: set_failed_get_nova_vcpu_count\n\n get_num_cores:\n publish:\n num_cores: <% $.introspection_data.get('cpus', 0) %>\n on-success:\n - calculate_nova_parameters: <% $.num_cores %>\n - set_failed_get_num_cores: <% not $.num_cores %>\n\n # HCI calculations are broken into multiple steps. This is necessary\n # because variables published by a Mistral task are not available\n # for use by that same task. 
Variables computed and published in a task\n # are only available in subsequent tasks.\n #\n # The HCI calculations compute two Nova parameters:\n # - reserved_host_memory\n # - cpu_allocation_ratio\n #\n # The reserved_host_memory calculation computes the amount of memory\n # that needs to be reserved for Ceph and the total amount of \"guest\n # overhead\" memory that is based on the anticipated number of guests.\n # Pseudo-code for the calculation (disregarding MB and GB units) is\n # as follows:\n #\n # ceph_memory = mem_per_osd * num_osds\n # nova_memory = total_memory - ceph_memory\n # num_guests = nova_memory /\n # (average_guest_memory_size + overhead_per_guest)\n # reserved_memory = ceph_memory + (num_guests * overhead_per_guest)\n #\n # The cpu_allocation_ratio calculation is similar in that it takes into\n # account the number of cores that must be reserved for Ceph.\n #\n # ceph_cores = cores_per_osd * num_osds\n # guest_cores = num_cores - ceph_cores\n # guest_vcpus = guest_cores / average_guest_utilization\n # cpu_allocation_ratio = guest_vcpus / num_cores\n\n calculate_nova_parameters:\n publish:\n avg_guest_util: <% $.average_guest_cpu_utilization_percentage / 100.0 %>\n avg_guest_size_gb: <% $.average_guest_memory_size_in_mb / float($.MB_PER_GB) %>\n memory_gb: <% $.memory_mb / float($.MB_PER_GB) %>\n ceph_mem_gb: <% $.gb_per_osd * $.num_osds %>\n nonceph_cores: <% $.num_cores - int($.cores_per_osd * $.num_osds) %>\n on-success: calc_step_2\n\n calc_step_2:\n publish:\n num_guests: <% int(($.memory_gb - $.ceph_mem_gb) / ($.avg_guest_size_gb + $.gb_overhead_per_guest)) %>\n guest_vcpus: <% $.nonceph_cores / $.avg_guest_util %>\n on-success: calc_step_3\n\n calc_step_3:\n publish:\n reserved_host_memory: <% $.MB_PER_GB * int($.ceph_mem_gb + ($.num_guests * $.gb_overhead_per_guest)) %>\n cpu_allocation_ratio: <% $.guest_vcpus / $.num_cores %>\n on-success: validate_results\n\n validate_results:\n publish:\n # Verify whether HCI is viable:\n # - No 
more than 80% of the memory may be reserved for Ceph and guest overhead\n # - At least half of the CPU cores must be available to Nova\n mem_ok: <% $.reserved_host_memory <= ($.memory_mb * 0.8) %>\n cpu_ok: <% $.cpu_allocation_ratio >= 0.5 %>\n on-success:\n - set_failed_insufficient_mem: <% not $.mem_ok %>\n - set_failed_insufficient_cpu: <% not $.cpu_ok %>\n - publish_hci_parameters: <% $.mem_ok and $.cpu_ok %>\n\n publish_hci_parameters:\n publish:\n # TODO(abishop): Update this when the cpu_allocation_ratio can be set\n # via a THT parameter (no such parameter currently exists). Until a\n # THT parameter exists, use hiera data to set the cpu_allocation_ratio.\n hci_parameters: <% dict(concat($.role_name, 'Parameters') => dict('NovaReservedHostMemory' => $.reserved_host_memory)) + dict(concat($.role_name, 'ExtraConfig') => dict('nova::cpu_allocation_ratio' => $.cpu_allocation_ratio)) %>\n\n set_failed_invalid_hci_profile:\n publish:\n message: \"'<% $.hci_profile %>' is not a valid HCI profile.\"\n on-success: fail\n\n set_failed_invalid_average_guest_memory_size_in_mb:\n publish:\n message: \"'<% $.average_guest_memory_size_in_mb %>' is not a valid average_guest_memory_size_in_mb value.\"\n on-success: fail\n\n set_failed_invalid_gb_overhead_per_guest:\n publish:\n message: \"'<% $.gb_overhead_per_guest %>' is not a valid gb_overhead_per_guest value.\"\n on-success: fail\n\n set_failed_invalid_gb_per_osd:\n publish:\n message: \"'<% $.gb_per_osd %>' is not a valid gb_per_osd value.\"\n on-success: fail\n\n set_failed_invalid_cores_per_osd:\n publish:\n message: \"'<% $.cores_per_osd %>' is not a valid cores_per_osd value.\"\n on-success: fail\n\n set_failed_invalid_average_guest_cpu_utilization_percentage:\n publish:\n message: \"'<% $.average_guest_cpu_utilization_percentage %>' is not a valid average_guest_cpu_utilization_percentage value.\"\n on-success: fail\n\n set_failed_no_osds:\n publish:\n message: \"No Ceph OSDs found in the overcloud definition 
('ceph::profile::params::osds').\"\n on-success: fail\n\n set_failed_get_memory_mb:\n publish:\n message: \"Unable to determine the amount of physical memory (no 'memory_mb' found in introspection_data).\"\n on-success: fail\n\n set_failed_get_nova_vcpu_count:\n publish:\n message: <% task(get_nova_vcpu_count).result %>\n on-success: fail\n\n set_failed_get_num_cores:\n publish:\n message: \"Unable to determine the number of CPU cores (no 'cpus' found in introspection_data).\"\n on-success: fail\n\n set_failed_insufficient_mem:\n publish:\n message: \"<% $.memory_mb %> MB is not enough memory to run hyperconverged.\"\n on-success: fail\n\n set_failed_insufficient_cpu:\n publish:\n message: \"<% $.num_cores %> CPU cores are not enough to run hyperconverged.\"\n on-success: fail\n", "name": "tripleo.derive_params_formulas.v1", "tags": [], "created_at": "2018-06-26 04:26:37", "updated_at": null, "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "f2b50db0-7a86-4b76-a01f-84318350cb38"}, {"definition": "---\nversion: '2.0'\nname: tripleo.plan_management.v1\ndescription: TripleO Overcloud Deployment Workflows v1\n\nworkflows:\n\n create_default_deployment_plan:\n description: >\n This workflow exists to maintain backwards compatibility in pike. 
This\n workflow will likely be removed in queens in favor of create_deployment_plan.\n input:\n - container\n - queue_name: tripleo\n - generate_passwords: true\n tags:\n - tripleo-common-managed\n tasks:\n call_create_deployment_plan:\n workflow: tripleo.plan_management.v1.create_deployment_plan\n on-success: set_status_success\n on-error: call_create_deployment_plan_set_status_failed\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n generate_passwords: <% $.generate_passwords %>\n use_default_templates: true\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(call_create_deployment_plan).result %>\n\n call_create_deployment_plan_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(call_create_deployment_plan).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.create_default_deployment_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n create_deployment_plan:\n description: >\n This workflow provides the capability to create a deployment plan using\n the default heat templates provided in a standard TripleO undercloud\n deployment, heat templates contained in an external git repository, or a\n swift container that already contains templates.\n input:\n - container\n - source_url: null\n - queue_name: tripleo\n - generate_passwords: true\n - use_default_templates: false\n\n tags:\n - tripleo-common-managed\n\n tasks:\n container_required_check:\n description: >\n If using the default templates or importing templates from a git\n repository, a new container needs to be created. 
If using an existing\n container containing templates, skip straight to create_plan.\n on-success:\n - verify_container_doesnt_exist: <% $.use_default_templates or $.source_url %>\n - create_plan: <% $.use_default_templates = false and $.source_url = null %>\n\n verify_container_doesnt_exist:\n action: swift.head_container container=<% $.container %>\n on-success: notify_zaqar\n on-error: create_container\n publish:\n status: FAILED\n message: \"Unable to create plan. The Swift container already exists\"\n\n create_container:\n action: tripleo.plan.create_container container=<% $.container %>\n on-success: templates_source_check\n on-error: create_container_set_status_failed\n\n cleanup_temporary_files:\n action: tripleo.git.clean container=<% $.container %>\n\n templates_source_check:\n on-success:\n - upload_default_templates: <% $.use_default_templates = true %>\n - clone_git_repo: <% $.source_url != null %>\n\n clone_git_repo:\n action: tripleo.git.clone container=<% $.container %> url=<% $.source_url %>\n on-success: upload_templates_directory\n on-error: clone_git_repo_set_status_failed\n\n upload_templates_directory:\n action: tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %>\n on-success: create_plan\n on-complete: cleanup_temporary_files\n on-error: upload_templates_directory_set_status_failed\n\n upload_default_templates:\n action: tripleo.templates.upload container=<% $.container %>\n on-success: create_plan\n on-error: upload_to_container_set_status_failed\n\n create_plan:\n on-success:\n - ensure_passwords_exist: <% $.generate_passwords = true %>\n - add_root_stack_name: <% $.generate_passwords != true %>\n\n ensure_passwords_exist:\n action: tripleo.parameters.generate_passwords container=<% $.container %>\n on-success: add_root_stack_name\n on-error: ensure_passwords_exist_set_status_failed\n\n add_root_stack_name:\n action: tripleo.parameters.update\n input:\n container: <% $.container %>\n 
parameters:\n RootStackName: <% $.container %>\n on-success: container_images_prepare\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n container_images_prepare:\n description: >\n Populate all container image parameters with default values.\n action: tripleo.container_images.prepare container=<% $.container %>\n on-success: process_templates\n on-error: container_images_prepare_set_status_failed\n\n process_templates:\n action: tripleo.templates.process container=<% $.container %>\n on-success: set_status_success\n on-error: process_templates_set_status_failed\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: 'Plan created.'\n\n create_container_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_container).result %>\n\n clone_git_repo_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(clone_git_repo).result %>\n\n upload_templates_directory_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(upload_templates_directory).result %>\n\n upload_to_container_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(upload_default_templates).result %>\n\n ensure_passwords_exist_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(ensure_passwords_exist).result %>\n\n process_templates_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(process_templates).result %>\n\n container_images_prepare_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(container_images_prepare).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.create_deployment_plan\n payload:\n status: <% $.status 
%>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n update_deployment_plan:\n input:\n - container\n - source_url: null\n - queue_name: tripleo\n - generate_passwords: true\n - plan_environment: null\n tags:\n - tripleo-common-managed\n tasks:\n templates_source_check:\n on-success:\n - update_plan: <% $.source_url = null %>\n - clone_git_repo: <% $.source_url != null %>\n\n clone_git_repo:\n action: tripleo.git.clone container=<% $.container %> url=<% $.source_url %>\n on-success: upload_templates_directory\n on-error: clone_git_repo_set_status_failed\n\n upload_templates_directory:\n action: tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %>\n on-success: create_swift_rings_backup_plan\n on-complete: cleanup_temporary_files\n on-error: upload_templates_directory_set_status_failed\n\n cleanup_temporary_files:\n action: tripleo.git.clean container=<% $.container %>\n\n create_swift_rings_backup_plan:\n workflow: tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan\n on-success: update_plan\n on-error: create_swift_rings_backup_plan_set_status_failed\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n use_default_templates: true\n\n update_plan:\n on-success:\n - ensure_passwords_exist: <% $.generate_passwords = true %>\n - container_images_prepare: <% $.generate_passwords != true %>\n\n ensure_passwords_exist:\n action: tripleo.parameters.generate_passwords container=<% $.container %>\n on-success: container_images_prepare\n on-error: ensure_passwords_exist_set_status_failed\n\n container_images_prepare:\n description: >\n Populate all container image parameters with default values.\n action: tripleo.container_images.prepare container=<% $.container %>\n on-success: process_templates\n on-error: container_images_prepare_set_status_failed\n\n process_templates:\n action: 
tripleo.templates.process container=<% $.container %>\n on-success:\n - set_status_success: <% $.plan_environment = null %>\n - upload_plan_environment: <% $.plan_environment != null %>\n on-error: process_templates_set_status_failed\n\n upload_plan_environment:\n action: tripleo.templates.upload_plan_environment container=<% $.container %> plan_environment=<% $.plan_environment %>\n on-success: set_status_success\n on-error: process_templates_set_status_failed\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: 'Plan updated.'\n\n create_swift_rings_backup_plan_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_swift_rings_backup_plan).result %>\n\n clone_git_repo_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(clone_git_repo).result %>\n\n upload_templates_directory_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(upload_templates_directory).result %>\n\n process_templates_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(process_templates).result %>\n\n ensure_passwords_exist_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(ensure_passwords_exist).result %>\n\n container_images_prepare_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(container_images_prepare).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.update_deployment_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n delete_deployment_plan:\n description: >\n Deletes a plan by deleting the container matching plan_name. 
It will\n not delete the plan if a stack exists with the same name.\n\n tags:\n - tripleo-common-managed\n\n input:\n - container: overcloud\n - queue_name: tripleo\n\n tasks:\n delete_plan:\n action: tripleo.plan.delete container=<% $.container %>\n on-complete: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n publish:\n status: SUCCESS\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.delete_deployment_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n\n get_passwords:\n description: Retrieves passwords for a given plan\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n verify_container_exists:\n action: swift.head_container container=<% $.container %>\n on-success: get_environment_passwords\n on-error: verify_container_set_status_failed\n\n get_environment_passwords:\n action: tripleo.parameters.get_passwords container=<% $.container %>\n on-success: get_passwords_set_status_success\n on-error: get_passwords_set_status_failed\n\n get_passwords_set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(get_environment_passwords).result %>\n\n get_passwords_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(get_environment_passwords).result %>\n\n verify_container_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(verify_container_exists).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.get_passwords\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - 
fail: <% $.get('status') = \"FAILED\" %>\n\n export_deployment_plan:\n description: Creates an export tarball for a given plan\n input:\n - plan\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n export_plan:\n action: tripleo.plan.export\n input:\n plan: <% $.plan %>\n delete_after: 3600\n exports_container: \"plan-exports\"\n on-success: create_tempurl\n on-error: export_plan_set_status_failed\n\n create_tempurl:\n action: tripleo.swift.tempurl\n on-success: set_status_success\n on-error: create_tempurl_set_status_failed\n input:\n container: \"plan-exports\"\n obj: \"<% $.plan %>.tar.gz\"\n valid: 3600\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(create_tempurl).result %>\n tempurl: <% task(create_tempurl).result %>\n\n export_plan_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(export_plan).result %>\n\n create_tempurl_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_tempurl).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.export_deployment_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n tempurl: <% $.get('tempurl', '') %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n get_deprecated_parameters:\n description: Gets the list of deprecated parameters in the whole of the plan including nested stack\n input:\n - container: overcloud\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_flatten_data:\n action: tripleo.parameters.get_flatten container=<% $.container %>\n on-success: get_deprecated_params\n on-error: set_status_failed_get_flatten_data\n publish:\n user_params: <% task().result.environment_parameters %>\n plan_params: <% task().result.heat_resource_tree.parameters.keys() %>\n 
parameter_groups: <% task().result.heat_resource_tree.resources.values().where( $.get('parameter_groups') ).select($.parameter_groups).flatten() %>\n\n get_deprecated_params:\n on-success: check_if_user_param_has_deprecated\n publish:\n deprecated_params: <% $.parameter_groups.where($.get('label') = 'deprecated').select($.parameters).flatten().distinct() %>\n\n check_if_user_param_has_deprecated:\n on-success: get_unused_params\n publish:\n deprecated_result: <% let(up => $.user_params) -> $.deprecated_params.select( dict('parameter' => $, 'deprecated' => true, 'user_defined' => $up.keys().contains($)) ) %>\n\n # Get the list of parameters which are defined by the user via the environment files' parameter_defaults but are not part of the plan definition\n # It may be possible that the parameter will be used by a service, but the service is not part of the plan.\n # In such cases, the parameter will be reported as unused; care should be taken to understand whether it is really unused or not.\n get_unused_params:\n on-success: send_message\n publish:\n unused_params: <% let(plan_params => $.plan_params) -> $.user_params.keys().where( not $plan_params.contains($) ) %>\n\n set_status_failed_get_flatten_data:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_flatten_data).result %>\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.get_deprecated_parameters\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n deprecated: <% $.get('deprecated_result', []) %>\n unused: <% $.get('unused_params', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n publish_ui_logs_to_swift:\n description: >\n This workflow drains a zaqar queue and publishes its messages into a log\n file in swift. 
This workflow is called by cron trigger.\n\n input:\n - logging_queue_name: tripleo-ui-logging\n - logging_container: tripleo-ui-logs\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n # We're using a NoOp action to start the workflow. The recursive nature\n # of the workflow means that Mistral will refuse to execute it because it\n # doesn't know where to begin.\n start:\n on-success: get_messages\n\n get_messages:\n action: zaqar.claim_messages\n on-success:\n - format_messages: <% task().result.len() > 0 %>\n input:\n queue_name: <% $.logging_queue_name %>\n ttl: 60\n grace: 60\n publish:\n status: SUCCESS\n messages: <% task().result %>\n message_ids: <% task().result.select($._id) %>\n\n format_messages:\n action: tripleo.logging_to_swift.format_messages\n on-success: upload_to_swift\n input:\n messages: <% $.messages %>\n publish:\n status: SUCCESS\n formatted_messages: <% task().result %>\n\n upload_to_swift:\n action: tripleo.logging_to_swift.publish_ui_log_to_swift\n on-success: delete_messages\n input:\n logging_data: <% $.formatted_messages %>\n logging_container: <% $.logging_container %>\n publish:\n status: SUCCESS\n\n delete_messages:\n action: zaqar.delete_messages\n on-success: get_messages\n input:\n queue_name: <% $.logging_queue_name %>\n messages: <% $.message_ids %>\n publish:\n status: SUCCESS\n\n download_logs:\n description: Creates a tarball with logging data\n input:\n - queue_name: tripleo\n - logging_container: \"tripleo-ui-logs\"\n - downloads_container: \"tripleo-ui-logs-downloads\"\n - delete_after: 3600\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n publish_logs:\n workflow: tripleo.plan_management.v1.publish_ui_logs_to_swift\n on-success: prepare_log_download\n on-error: publish_logs_set_status_failed\n\n prepare_log_download:\n action: tripleo.logging_to_swift.prepare_log_download\n input:\n logging_container: <% $.logging_container %>\n downloads_container: <% $.downloads_container %>\n delete_after: <% $.delete_after %>\n 
on-success: create_tempurl\n on-error: download_logs_set_status_failed\n publish:\n filename: <% task().result %>\n\n create_tempurl:\n action: tripleo.swift.tempurl\n on-success: set_status_success\n on-error: create_tempurl_set_status_failed\n input:\n container: <% $.downloads_container %>\n obj: <% $.filename %>\n valid: 3600\n publish:\n tempurl: <% task().result %>\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(create_tempurl).result %>\n tempurl: <% task(create_tempurl).result %>\n\n publish_logs_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(publish_logs).result %>\n\n download_logs_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(prepare_log_download).result %>\n\n create_tempurl_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_tempurl).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.download_logs\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n tempurl: <% $.get('tempurl', '') %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n list_roles:\n description: Retrieve the roles_data.yaml and return a usable object\n\n input:\n - container: overcloud\n - roles_data_file: 'roles_data.yaml'\n - queue_name: tripleo\n\n output:\n roles_data: <% $.roles_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_roles_data:\n action: swift.get_object\n input:\n container: <% $.container %>\n obj: <% $.roles_data_file %>\n publish:\n roles_data: <% yaml_parse(task().result.last()) %>\n status: SUCCESS\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name 
%>\n messages:\n body:\n type: tripleo.plan_management.v1.list_roles\n payload:\n status: <% $.status %>\n roles_data: <% $.get('roles_data', {}) %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n list_available_networks:\n input:\n - container\n - queue_name: tripleo\n\n output:\n available_networks: <% $.available_networks %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_network_file_names:\n action: swift.get_container\n input:\n container: <% $.container %>\n publish:\n network_names: <% task().result[1].where($.name.startsWith('networks/')).where($.name.endsWith('.yaml')).name %>\n on-success: get_network_files\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_network_files:\n with-items: network_name in <% $.network_names %>\n action: swift.get_object\n on-success: transform_output\n on-error: notify_zaqar\n input:\n container: <% $.container %>\n obj: <% $.network_name %>\n publish:\n status: SUCCESS\n available_yaml_networks: <% task().result.select($[1]) %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n transform_output:\n publish:\n status: SUCCESS\n available_networks: <% yaml_parse($.available_yaml_networks.join(\"\\n\")) %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-complete: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.list_available_networks\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n available_networks: <% $.get('available_networks', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n list_networks:\n input:\n - container: 'overcloud'\n - network_data_file: 'network_data.yaml'\n - queue_name: tripleo\n\n output:\n network_data: <% $.network_data %>\n\n tags:\n - 
tripleo-common-managed\n\n tasks:\n get_networks:\n action: swift.get_object\n input:\n container: <% $.container %>\n obj: <% $.network_data_file %>\n on-success: notify_zaqar\n publish:\n network_data: <% yaml_parse(task().result.last()) %>\n status: SUCCESS\n message: <% task().result %>\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.list_networks\n payload:\n status: <% $.status %>\n network_data: <% $.get('network_data', {}) %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n validate_network_files:\n description: Validate network files exist\n input:\n - container: overcloud\n - network_data\n - queue_name: tripleo\n\n output:\n network_data: <% $.network_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_network_names:\n publish:\n network_names_lower: <% $.network_data.where($.containsKey('name_lower')).name_lower %>\n network_names: <% $.network_data.where(not $.containsKey('name_lower')).name %>\n on-success: validate_networks\n\n validate_networks:\n with-items: network in <% $.network_names_lower.concat($.network_names) %>\n action: swift.head_object\n input:\n container: <% $.container %>\n obj: network/<% $.network.toLower() %>.yaml\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.validate_network_files\n payload:\n status: <% $.status %>\n message: <% $.message %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n validate_networks:\n 
description: Validate network files were generated properly and exist\n input:\n - container: 'overcloud'\n - network_data_file: 'network_data.yaml'\n - queue_name: tripleo\n\n output:\n network_data: <% $.network_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_network_data:\n workflow: list_networks\n input:\n container: <% $.container %>\n network_data_file: <% $.network_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n network_data: <% task().result.network_data %>\n on-success: validate_networks\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error:\n notify_zaqar\n\n validate_networks:\n workflow: validate_network_files\n input:\n container: <% $.container %>\n network_data: <% $.network_data %>\n queue_name: <% $.queue_name %>\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.validate_networks\n payload:\n status: <% $.status %>\n network_data: <% $.get('network_data', {}) %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n validate_roles:\n description: Validate roles data exists and is parsable\n\n input:\n - container: overcloud\n - roles_data_file: 'roles_data.yaml'\n - queue_name: tripleo\n\n output:\n roles_data: <% $.roles_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_roles_data:\n workflow: list_roles\n input:\n container: <% $.container %>\n roles_data_file: <% $.roles_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n roles_data: <% task().result.roles_data %>\n status: SUCCESS\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error:\n notify_zaqar\n\n notify_zaqar:\n action: 
zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.validate_roles\n payload:\n status: <% $.status %>\n roles_data: <% $.get('roles_data', '') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n _validate_networks_from_roles:\n description: Internal workflow for validating that networks referenced by roles exist\n\n input:\n - container: overcloud\n - defined_networks\n - networks_in_roles\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n validate_network_in_network_data:\n publish:\n networks_found: <% $.networks_in_roles.toSet().intersect($.defined_networks.toSet()) %>\n networks_not_found: <% $.networks_in_roles.toSet().difference($.defined_networks.toSet()) %>\n on-success:\n - network_not_found: <% $.networks_not_found %>\n - notify_zaqar: <% not $.networks_not_found %>\n\n network_not_found:\n publish:\n message: <% \"Some networks in roles are not defined, {0}\".format($.networks_not_found.join(', ')) %>\n status: FAILED\n on-success: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1._validate_networks_from_roles\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n validate_roles_and_networks:\n description: Validate that roles and network data are valid\n\n input:\n - container: overcloud\n - roles_data_file: 'roles_data.yaml'\n - network_data_file: 'network_data.yaml'\n - queue_name: tripleo\n\n output:\n roles_data: <% $.roles_data %>\n network_data: <% $.network_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n validate_network_data:\n workflow: validate_networks\n input:\n container: <% $.container %>\n network_data_file: <% $.network_data_file %>\n 
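The internal _validate_networks_from_roles workflow above reduces to set arithmetic: every network a role references must appear among the defined networks. The same check mirrored in plain Python (a sketch of the YAQL intersect/difference logic, not the Mistral implementation itself):

```python
def check_networks_from_roles(defined_networks, networks_in_roles):
    # Mirrors validate_network_in_network_data: compute the set difference
    # between networks used by roles and networks defined in network_data.
    defined = set(defined_networks)
    used = set(networks_in_roles)
    missing = used - defined              # networks_not_found in the workflow
    if missing:
        return ("FAILED",
                "Some networks in roles are not defined, {0}".format(
                    ", ".join(sorted(missing))))
    return ("SUCCESS", "")
```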
queue_name: <% $.queue_name %>\n publish:\n network_data: <% task().result.network_data %>\n on-success: validate_roles_data\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n validate_roles_data:\n workflow: validate_roles\n input:\n container: <% $.container %>\n roles_data_file: <% $.roles_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n roles_data: <% task().result.roles_data %>\n role_networks_data: <% task().result.roles_data.networks %>\n networks_in_roles: <% task().result.roles_data.networks.flatten().distinct() %>\n on-success: validate_roles_and_networks\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n validate_roles_and_networks:\n workflow: _validate_networks_from_roles\n input:\n container: <% $.container %>\n defined_networks: <% $.network_data.name %>\n networks_in_roles: <% $.networks_in_roles %>\n queue_name: <% $.queue_name %>\n publish:\n status: SUCCESS\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result.message %>\n on-error: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.validate_roles_and_networks\n payload:\n status: <% $.status %>\n roles_data: <% $.get('roles_data', {}) %>\n network_data: <% $.get('network_data', {}) %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n list_available_roles:\n input:\n - container: overcloud\n - queue_name: tripleo\n\n output:\n available_roles: <% $.available_roles %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_role_file_names:\n action: swift.get_container\n input:\n container: <% $.container %>\n publish:\n role_names: <% task().result[1].where($.name.startsWith('roles/')).where($.name.endsWith('.yaml')).name %>\n on-success: get_role_files\n on-error: 
notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_role_files:\n with-items: role_name in <% $.role_names %>\n action: swift.get_object\n on-success: transform_output\n on-error: notify_zaqar\n input:\n container: <% $.container %>\n obj: <% $.role_name %>\n publish:\n status: SUCCESS\n available_yaml_roles: <% task().result.select($[1]) %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n transform_output:\n publish:\n status: SUCCESS\n available_roles: <% yaml_parse($.available_yaml_roles.join(\"\\n\")) %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-complete: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.list_available_roles\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n available_roles: <% $.get('available_roles', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n update_roles:\n description: >\n Takes data in JSON format, validates its contents, and persists it in\n roles_data.yaml. After a successful update, templates are regenerated.\n input:\n - container\n - roles\n - roles_data_file: 'roles_data.yaml'\n - replace_all: false\n - queue_name: tripleo\n tags:\n - tripleo-common-managed\n tasks:\n get_available_roles:\n workflow: list_available_roles\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n publish:\n available_roles: <% task().result.available_roles %>\n on-success: validate_input\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n validate_input:\n description: >\n validate the format of input (verify that each role in input has the\n required attributes set. 
check README in roles directory in t-h-t),\n validate that roles in input exist in roles directory in t-h-t\n action: tripleo.plan.validate_roles\n input:\n container: <% $.container %>\n roles: <% $.roles %>\n available_roles: <% $.available_roles %>\n on-success: get_network_data\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_network_data:\n workflow: list_networks\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n publish:\n network_data: <% task().result.network_data %>\n on-success: validate_network_names\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n validate_network_names:\n description: >\n validate that Network names assigned to Role exist in\n network-data.yaml object in Swift container\n workflow: _validate_networks_from_roles\n input:\n container: <% $.container %>\n defined_networks: <% $.network_data.name %>\n networks_in_roles: <% $.roles.networks.flatten().distinct() %>\n queue_name: <% $.queue_name %>\n on-success: get_current_roles\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result.message %>\n\n get_current_roles:\n workflow: list_roles\n input:\n container: <% $.container %>\n roles_data_file: <% $.roles_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n current_roles: <% task().result.roles_data %>\n on-success: update_roles_data\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n update_roles_data:\n description: >\n update roles_data.yaml object in Swift with roles from workflow input\n action: tripleo.plan.update_roles\n input:\n container: <% $.container %>\n roles: <% $.roles %>\n current_roles: <% $.current_roles %>\n replace_all: <% $.replace_all %>\n publish:\n updated_roles_data: <% task().result.roles %>\n on-success: update_roles_data_in_swift\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% 
task().result %>\n\n update_roles_data_in_swift:\n description: >\n update roles_data.yaml object in Swift with data from workflow input\n action: swift.put_object\n input:\n container: <% $.container %>\n obj: <% $.roles_data_file %>\n contents: <% yaml_dump($.updated_roles_data) %>\n on-success: regenerate_templates\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n regenerate_templates:\n action: tripleo.templates.process container=<% $.container %>\n on-success: get_updated_roles\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_updated_roles:\n workflow: list_roles\n input:\n container: <% $.container %>\n roles_data_file: <% $.roles_data_file %>\n publish:\n updated_roles: <% task().result.roles_data %>\n status: SUCCESS\n on-complete: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.roles.v1.update_roles\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n updated_roles: <% $.get('updated_roles', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n select_roles:\n description: >\n takes a list of role names as input and populates roles_data.yaml in\n container in Swift with respective roles from 'roles directory'\n input:\n - container\n - role_names\n - roles_data_file: 'roles_data.yaml'\n - replace_all: true\n - queue_name: tripleo\n tags:\n - tripleo-common-managed\n tasks:\n\n get_available_roles:\n workflow: list_available_roles\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n publish:\n available_roles: <% task().result.available_roles %>\n on-success: get_current_roles\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_current_roles:\n workflow: list_roles\n 
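The update_roles chain above merges incoming roles into the current roles_data before writing it back to Swift and regenerating templates. A plain-Python sketch of the assumed replace_all semantics of the tripleo.plan.update_roles action (wholesale replacement when true, otherwise override-by-role-name); this is an illustration, not the action's actual implementation:

```python
def merge_roles(current_roles, new_roles, replace_all=False):
    # Assumed semantics: with replace_all the incoming list wins wholesale;
    # otherwise incoming roles override (or append to) the current list,
    # keyed by the role's "name" attribute.
    if replace_all:
        return list(new_roles)
    merged = {r["name"]: r for r in current_roles}
    for r in new_roles:
        merged[r["name"]] = r
    return list(merged.values())
```

Keying by name is what lets select_roles preserve the configuration of roles already present in roles_data.yaml instead of overwriting them with defaults from the roles directory.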
input:\n container: <% $.container %>\n roles_data_file: <% $.roles_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n current_roles: <% task().result.roles_data %>\n on-success: gather_roles\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n gather_roles:\n description: >\n for each role name from the input, check if it exists in\n roles_data.yaml, if yes, use that role definition, if not, get the\n role definition from roles directory. Use the gathered roles\n definitions as input to updateRolesWorkflow - this ensures\n configuration of the roles which are already in roles_data.yaml\n will not get overridden by data from roles directory\n action: tripleo.plan.gather_roles\n input:\n role_names: <% $.role_names %>\n current_roles: <% $.current_roles %>\n available_roles: <% $.available_roles %>\n publish:\n gathered_roles: <% task().result.gathered_roles %>\n on-success: call_update_roles_workflow\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n call_update_roles_workflow:\n workflow: update_roles\n input:\n container: <% $.container %>\n roles: <% $.gathered_roles %>\n roles_data_file: <% $.roles_data_file %>\n replace_all: <% $.replace_all %>\n queue_name: <% $.queue_name %>\n on-complete: notify_zaqar\n publish:\n selected_roles: <% task().result.updated_roles %>\n status: SUCCESS\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.select_roles\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n selected_roles: <% $.get('selected_roles', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.plan_management.v1", "tags": [], "created_at": "2018-06-26 04:26:39", "updated_at": null, "scope": "private", "project_id": 
"13835fbb8e0947a9b3fa174b9a22cdb9", "id": "b46d9615-d893-44e9-ba9b-c30925008d15"}, {"definition": "---\nversion: '2.0'\nname: tripleo.support.v1\ndescription: TripleO support workflows\n\nworkflows:\n\n collect_logs:\n description: >\n This workflow runs sosreport on the servers whose names match the\n provided server_name input. The logs are stored in the provided sos_dir.\n input:\n - server_name\n - sos_dir: /var/tmp/tripleo-sos\n - sos_options: boot,cluster,hardware,kernel,memory,nfs,openstack,packagemanager,performance,services,storage,system,webserver,virt\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n collect_logs_on_servers:\n workflow: tripleo.deployment.v1.deploy_on_servers\n on-success: send_message\n on-error: set_collect_logs_on_servers_failed\n input:\n server_name: <% $.server_name %>\n config_name: 'run_sosreport'\n config: |\n #!/bin/bash\n mkdir -p <% $.sos_dir %>\n sosreport --batch \\\n -p <% $.sos_options %> \\\n --tmp-dir <% $.sos_dir %>\n\n set_collect_logs_on_servers_failed:\n on-complete:\n - send_message\n publish:\n type: tripleo.deployment.v1.fetch_logs\n status: FAILED\n message: <% task().result %>\n\n # status messaging\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.collect_logs') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n\n upload_logs:\n description: >\n This workflow uploads the sosreport files stored in the provided sos_dir\n on the provided host (server_uuid) to a swift container on the undercloud\n input:\n - server_uuid\n - server_name\n - container\n - sos_dir: /var/tmp/tripleo-sos\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n # actions\n get_swift_information:\n action: 
tripleo.swift.swift_information\n on-success: do_log_upload\n on-error: set_get_swift_information_failed\n input:\n container: <% $.container %>\n publish:\n container_url: <% task().result.container_url %>\n auth_key: <% task().result.auth_key %>\n\n set_get_swift_information_failed:\n on-complete:\n - send_message\n publish:\n status: FAILED\n message: <% task(get_swift_information).result %>\n\n do_log_upload:\n action: tripleo.deployment.config\n on-success: send_message\n on-error: set_do_log_upload_failed\n input:\n server_id: <% $.server_uuid %>\n name: \"upload_logs\"\n config: |\n #!/bin/bash\n CONTAINER_URL=\"<% $.container_url %>\"\n TOKEN=\"<% $.auth_key %>\"\n SOS_DIR=\"<% $.sos_dir %>\"\n for FILE in $(find $SOS_DIR -type f); do\n FILENAME=$(basename $FILE)\n curl -X PUT -i -H \"X-Auth-Token: $TOKEN\" -T $FILE $CONTAINER_URL/$FILENAME\n if [ $? -eq 0 ]; then\n rm -f $FILE\n fi\n done\n group: \"script\"\n publish:\n message: \"Uploaded logs from <% $.server_name %>\"\n\n set_do_log_upload_failed:\n on-complete:\n - send_message\n publish:\n status: FAILED\n message: <% task(do_log_upload).result %>\n\n # status messaging\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.upload_logs') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n\n create_container:\n description: >\n This workflow checks whether the container exists and creates it\n if it does not.\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_container:\n action: swift.head_container container=<% $.container %>\n on-success: send_message\n on-error: create_container\n\n create_container:\n action: swift.put_container\n input:\n container: <% $.container %>\n headers:\n 
x-container-meta-usage-tripleo: support\n on-success: send_message\n on-error: set_create_container_failed\n\n set_create_container_failed:\n on-complete:\n - send_message\n publish:\n type: tripleo.support.v1.create_container.create_container\n status: FAILED\n message: <% task(create_container).result %>\n\n # status messaging\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.create_container') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n\n delete_container:\n description: >\n This workflow deletes all the objects in a provided swift container and\n then removes the container itself from the undercloud.\n input:\n - container\n - concurrency: 5\n - timeout: 900\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n # actions\n check_container:\n action: swift.head_container container=<% $.container %>\n on-success: list_objects\n on-error: set_check_container_failure\n\n set_check_container_failure:\n on-complete: send_message\n publish:\n status: FAILED\n type: tripleo.support.v1.delete_container.check_container\n message: <% task(check_container).result %>\n\n list_objects:\n action: swift.get_container container=<% $.container %>\n on-success: delete_objects\n on-error: set_list_objects_failure\n publish:\n log_objects: <% task().result[1] %>\n\n set_list_objects_failure:\n on-complete: send_message\n publish:\n status: FAILED\n type: tripleo.support.v1.delete_container.list_objects\n message: <% task(list_objects).result %>\n\n delete_objects:\n action: swift.delete_object\n concurrency: <% $.concurrency %>\n timeout: <% $.timeout %>\n with-items: object in <% $.log_objects %>\n input:\n container: <% $.container %>\n obj: <% $.object.name %>\n on-success: remove_container\n 
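The delete_container workflow above lists the container's objects, deletes them with a bounded with-items concurrency, then removes the now-empty container. The same shape in plain Python; the three callables are hypothetical stand-ins for the swift.get_container, swift.delete_object, and swift.delete_container actions:

```python
from concurrent.futures import ThreadPoolExecutor

def purge_container(list_objects, delete_object, delete_container, concurrency=5):
    # list_objects() returns dicts with a "name" key, like a Swift container
    # listing; deletions run with at most `concurrency` in flight, mirroring
    # the workflow's `concurrency: <% $.concurrency %>` on delete_objects.
    names = [obj["name"] for obj in list_objects()]
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(delete_object, names))  # per-object deletes
    delete_container()                        # container must be empty first
    return len(names)
```

The ordering matters: Swift refuses to delete a non-empty container, which is why the container removal only runs after every object delete succeeds.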
on-error: set_delete_objects_failure\n\n set_delete_objects_failure:\n on-complete: send_message\n publish:\n status: FAILED\n type: tripleo.support.v1.delete_container.delete_objects\n message: <% task(delete_objects).result %>\n\n remove_container:\n action: swift.delete_container container=<% $.container %>\n on-success: send_message\n on-error: set_remove_container_failure\n\n set_remove_container_failure:\n on-complete: send_message\n publish:\n status: FAILED\n type: tripleo.support.v1.delete_container.remove_container\n message: <% task(remove_container).result %>\n\n # status messaging\n send_message:\n action: zaqar.queue_post\n wait-before: 5\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.delete_container') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n\n fetch_logs:\n description: >\n This workflow creates a container on the undercloud, executes the log\n collection on the servers whose names match the provided server_name, and\n executes the log upload process on all the servers to the container on\n the undercloud.\n input:\n - server_name\n - container\n - concurrency: 5\n - timeout: 1800\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n # actions\n create_container:\n workflow: tripleo.support.v1.create_container\n on-success: get_servers_matching\n on-error: set_create_container_failed\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n\n set_create_container_failed:\n on-complete: send_message\n publish:\n type: tripleo.support.v1.fetch_logs.create_container\n status: FAILED\n message: <% task(create_container).result %>\n\n get_servers_matching:\n action: nova.servers_list\n on-success: collect_logs_on_servers\n publish:\n servers_with_name: <% 
task().result._info.where($.name.indexOf(execution().input.server_name) > -1) %>\n\n collect_logs_on_servers:\n workflow: tripleo.support.v1.collect_logs\n timeout: <% $.timeout %>\n on-success: upload_logs_on_servers\n on-error: set_collect_logs_on_servers_failed\n input:\n server_name: <% $.server_name %>\n queue_name: <% $.queue_name %>\n\n set_collect_logs_on_servers_failed:\n on-complete: send_message\n publish:\n type: tripleo.support.v1.fetch_logs.collect_logs_on_servers\n status: FAILED\n message: <% task(collect_logs_on_servers).result %>\n\n upload_logs_on_servers:\n on-success: send_message\n on-error: set_upload_logs_on_servers_failed\n with-items: server in <% $.servers_with_name %>\n concurrency: <% $.concurrency %>\n workflow: tripleo.support.v1.upload_logs\n input:\n server_name: <% $.server.name %>\n server_uuid: <% $.server.id %>\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n\n set_upload_logs_on_servers_failed:\n on-complete: send_message\n publish:\n type: tripleo.support.v1.fetch_logs.upload_logs\n status: FAILED\n message: <% task(upload_logs_on_servers).result %>\n\n # status messaging\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.fetch_logs') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n", "name": "tripleo.support.v1", "tags": [], "created_at": "2018-06-26 04:26:40", "updated_at": null, "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "915974cf-0e50-4424-b010-ad2cb987e99b"}, {"definition": "---\nversion: '2.0'\nname: tripleo.deployment.v1\ndescription: TripleO deployment workflows\n\nworkflows:\n\n deploy_on_server:\n\n input:\n - server_uuid\n - server_name\n - config\n - config_name\n - group\n - queue_name: tripleo\n\n tags:\n - 
tripleo-common-managed\n\n tasks:\n\n deploy_config:\n action: tripleo.deployment.config\n on-complete: send_message\n input:\n server_id: <% $.server_uuid %>\n name: <% $.config_name %>\n config: <% $.config %>\n group: <% $.group %>\n publish:\n stdout: <% task().result.deploy_stdout %>\n stderr: <% task().result.deploy_stderr %>\n status_code: <% task().result.deploy_status_code %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.deploy_on_server\n payload:\n status: <% $.get(\"status\", \"SUCCESS\") %>\n message: <% $.get(\"message\", \"\") %>\n server_uuid: <% $.server_uuid %>\n server_name: <% $.server_name %>\n config_name: <% $.config_name %>\n status_code: <% $.get(\"status_code\", \"\") %>\n stdout: <% $.get(\"stdout\", \"\") %>\n stderr: <% $.get(\"stderr\", \"\") %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n deploy_on_servers:\n\n input:\n - server_name\n - config_name\n - config\n - group: script\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n check_if_all_servers:\n on-success:\n - get_servers_matching: <% $.server_name != \"all\" %>\n - get_all_servers: <% $.server_name = \"all\" %>\n\n get_servers_matching:\n action: nova.servers_list\n on-success: deploy_on_servers\n publish:\n servers_with_name: <% task().result._info.where($.name.indexOf(execution().input.server_name) > -1) %>\n\n get_all_servers:\n action: nova.servers_list\n on-success: deploy_on_servers\n publish:\n servers_with_name: <% task().result._info %>\n\n deploy_on_servers:\n on-success: send_success_message\n on-error: send_failed_message\n with-items: server in <% $.servers_with_name %>\n workflow: tripleo.deployment.v1.deploy_on_server\n input:\n server_name: <% $.server.name %>\n server_uuid: <% $.server.id %>\n config: <% 
$.config %>\n config_name: <% $.config_name %>\n group: <% $.group %>\n queue_name: <% $.queue_name %>\n\n send_success_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.deploy_on_servers\n payload:\n status: SUCCESS\n execution: <% execution() %>\n\n send_failed_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.deploy_on_servers\n payload:\n status: FAILED\n message: <% task(deploy_on_servers).result %>\n execution: <% execution() %>\n on-success: fail\n\n deploy_plan:\n\n description: >\n Deploy the overcloud for a plan.\n\n input:\n - container\n - run_validations: False\n - timeout: 240\n - skip_deploy_identifier: False\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n add_validation_ssh_key:\n workflow: tripleo.validations.v1.add_validation_ssh_key_parameter\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n on-complete:\n - run_validations: <% $.run_validations %>\n - create_swift_rings_backup_plan: <% not $.run_validations %>\n\n run_validations:\n workflow: tripleo.validations.v1.run_groups\n input:\n group_names:\n - 'pre-deployment'\n plan: <% $.container %>\n queue_name: <% $.queue_name %>\n on-success: create_swift_rings_backup_plan\n on-error: set_validations_failed\n\n set_validations_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(run_validations).result %>\n\n create_swift_rings_backup_plan:\n workflow: tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan\n on-success: cell_v2_discover_hosts\n on-error: create_swift_rings_backup_plan_set_status_failed\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n use_default_templates: true\n\n cell_v2_discover_hosts:\n on-success: deploy\n on-error: cell_v2_discover_hosts_failed\n 
action: tripleo.baremetal.cell_v2_discover_hosts\n\n cell_v2_discover_hosts_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(cell_v2_discover_hosts).result %>\n\n deploy:\n action: tripleo.deployment.deploy\n input:\n timeout: <% $.timeout %>\n container: <% $.container %>\n skip_deploy_identifier: <% $.skip_deploy_identifier %>\n on-success: send_message\n on-error: set_deployment_failed\n\n create_swift_rings_backup_plan_set_status_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(create_swift_rings_backup_plan).result %>\n\n set_deployment_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(deploy).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.deploy_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n get_horizon_url:\n\n description: >\n Retrieve the Horizon URL from the Overcloud stack.\n\n input:\n - stack: overcloud\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n output:\n horizon_url: <% $.horizon_url %>\n\n tasks:\n get_horizon_url:\n action: heat.stacks_get\n input:\n stack_id: <% $.stack %>\n publish:\n horizon_url: <% task().result.outputs.where($.output_key = \"EndpointMap\").output_value.HorizonPublic.uri.single() %>\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.get_horizon_url\n payload:\n horizon_url: <% $.get('horizon_url', '') %>\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% 
$.get('status') = \"FAILED\" %>\n\n config_download_deploy:\n\n description: >\n Configure the overcloud with config-download.\n\n input:\n - timeout: 240\n - queue_name: tripleo\n - plan_name: overcloud\n - work_dir: /var/lib/mistral\n - verbosity: 1\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n get_config:\n action: tripleo.config.get_overcloud_config\n input:\n container: <% $.get('plan_name') %>\n on-success: download_config\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n download_config:\n action: tripleo.config.download_config\n input:\n work_dir: <% $.get('work_dir') %>/<% execution().id %>\n on-success: send_msg_config_download\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n send_msg_config_download:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.config_download\n payload:\n status: <% $.get('status', 'RUNNING') %>\n message: Config downloaded at <% $.get('work_dir') %>/<% execution().id %>\n execution: <% execution() %>\n on-success: get_private_key\n\n get_private_key:\n action: tripleo.validations.get_privkey\n publish:\n private_key: <% task().result %>\n on-success: generate_inventory\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n generate_inventory:\n action: tripleo.ansible-generate-inventory\n input:\n ansible_ssh_user: tripleo-admin\n work_dir: <% $.get('work_dir') %>/<% execution().id %>\n plan_name: <% $.get('plan_name') %>\n publish:\n inventory: <% task().result %>\n on-success: send_msg_generate_inventory\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n send_msg_generate_inventory:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.config_download\n payload:\n status: <% $.get('status', 'RUNNING') %>\n 
message: Inventory generated at <% $.get('inventory') %>\n execution: <% execution() %>\n on-success: send_msg_run_ansible\n\n send_msg_run_ansible:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.config_download\n payload:\n status: <% $.get('status', 'RUNNING') %>\n message: >\n Running ansible playbook at <% $.get('work_dir') %>/<% execution().id %>/deploy_steps_playbook.yaml.\n See log file at <% $.get('work_dir') %>/<% execution().id %>/ansible.log for progress.\n ...\n execution: <% execution() %>\n on-success: run_ansible\n\n run_ansible:\n action: tripleo.ansible-playbook\n input:\n inventory: <% $.inventory %>\n playbook: <% $.get('work_dir') %>/<% execution().id %>/deploy_steps_playbook.yaml\n remote_user: tripleo-admin\n ssh_extra_args: '-o StrictHostKeyChecking=no'\n ssh_private_key: <% $.private_key %>\n use_openstack_credentials: true\n verbosity: <% $.get('verbosity') %>\n become: true\n timeout: <% $.timeout %>\n work_dir: <% $.get('work_dir') %>/<% execution().id %>\n queue_name: <% $.queue_name %>\n reproduce_command: true\n trash_output: true\n publish:\n log_path: <% task(run_ansible).result.get('log_path') %>\n on-success:\n - ansible_passed: <% task().result.returncode = 0 %>\n - ansible_failed: <% task().result.returncode != 0 %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: Ansible failed, check log at <% $.get('work_dir') %>/<% execution().id %>/ansible.log.\n\n ansible_passed:\n on-success: send_message\n publish:\n status: SUCCESS\n message: Ansible passed.\n\n ansible_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: Ansible failed, check log at <% $.get('work_dir') %>/<% execution().id %>/ansible.log.\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.config_download\n payload:\n status: <% $.get('status', 
'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.deployment.v1", "tags": [], "created_at": "2018-06-26 04:26:41", "updated_at": null, "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "152f70ca-8eab-4b38-a836-846b63b70e23"}, {"definition": "---\nversion: '2.0'\nname: tripleo.octavia_post.v1\ndescription: TripleO Octavia post deployment Workflows\n\nworkflows:\n\n octavia_post_deploy:\n description: Octavia post deployment\n input:\n - amp_image_name\n - amp_image_filename\n - amp_image_tag\n - amp_ssh_key_name\n - amp_ssh_key_path\n - amp_ssh_key_data\n - auth_username\n - auth_password\n - auth_project_name\n - lb_mgmt_net_name\n - lb_mgmt_subnet_name\n - lb_sec_group_name\n - lb_mgmt_subnet_cidr\n - lb_mgmt_subnet_gateway\n - lb_mgmt_subnet_pool_start\n - lb_mgmt_subnet_pool_end\n - generate_certs\n - octavia_ansible_playbook\n - overcloud_admin\n - ca_cert_path\n - ca_private_key_path\n - ca_passphrase\n - client_cert_path\n - mgmt_port_dev\n - overcloud_password\n - overcloud_project\n - overcloud_pub_auth_uri\n - ansible_extra_env_variables:\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n ANSIBLE_SSH_RETRIES: '3'\n tags:\n - tripleo-common-managed\n tasks:\n get_overcloud_stack_details:\n publish:\n # TODO(beagles), we are making an assumption about the octavia health manager and\n # controller worker needing\n #\n octavia_controller_ips: <% env().get('service_ips', {}).get('octavia_worker_ctlplane_node_ips', []) %>\n on-success: enable_ssh_admin\n\n enable_ssh_admin:\n workflow: tripleo.access.v1.enable_ssh_admin\n input:\n ssh_servers: <% $.octavia_controller_ips %>\n on-success: get_private_key\n\n get_private_key:\n action: tripleo.validations.get_privkey\n publish:\n private_key: <% task().result %>\n on-success: make_local_temp_directory\n\n make_local_temp_directory:\n action: tripleo.files.make_temp_dir\n publish:\n 
undercloud_local_dir: <% task().result.path %>\n on-success: make_remote_temp_directory\n\n make_remote_temp_directory:\n action: tripleo.files.make_temp_dir\n publish:\n undercloud_remote_dir: <% task().result.path %>\n on-success: build_local_connection_environment_vars\n\n build_local_connection_environment_vars:\n publish:\n ansible_local_connection_variables: <% dict('ANSIBLE_REMOTE_TEMP' => $.undercloud_remote_dir, 'ANSIBLE_LOCAL_TEMP' => $.undercloud_local_dir) + $.ansible_extra_env_variables %>\n on-success: upload_amphora\n\n upload_amphora:\n action: tripleo.ansible-playbook\n input:\n inventory:\n undercloud:\n hosts:\n localhost:\n ansible_connection: local\n\n playbook: <% $.octavia_ansible_playbook %>\n remote_user: stack\n extra_env_variables: <% $.ansible_local_connection_variables %>\n extra_vars:\n os_password: <% $.overcloud_password %>\n os_username: <% $.overcloud_admin %>\n os_project_name: <% $.overcloud_project %>\n os_auth_url: <% $.overcloud_pub_auth_uri %>\n os_auth_type: \"password\"\n os_identity_api_version: \"3\"\n amp_image_name: <% $.amp_image_name %>\n amp_image_filename: <% $.amp_image_filename %>\n amp_image_tag: <% $.amp_image_tag %>\n amp_ssh_key_name: <% $.amp_ssh_key_name %>\n amp_ssh_key_path: <% $.amp_ssh_key_path %>\n amp_ssh_key_data: <% $.amp_ssh_key_data %>\n auth_username: <% $.auth_username %>\n auth_password: <% $.auth_password %>\n auth_project_name: <% $.auth_project_name %>\n on-success: config_octavia\n\n config_octavia:\n action: tripleo.ansible-playbook\n input:\n inventory:\n octavia_nodes:\n hosts: <% $.octavia_controller_ips.toDict($, {}) %>\n verbosity: 0\n playbook: <% $.octavia_ansible_playbook %>\n remote_user: tripleo-admin\n become: true\n become_user: root\n ssh_private_key: <% $.private_key %>\n ssh_common_args: '-o StrictHostKeyChecking=no'\n ssh_extra_args: '-o UserKnownHostsFile=/dev/null'\n extra_env_variables: <% $.ansible_extra_env_variables %>\n extra_vars:\n os_password: <% 
$.overcloud_password %>\n os_username: <% $.overcloud_admin %>\n os_project_name: <% $.overcloud_project %>\n os_auth_url: <% $.overcloud_pub_auth_uri %>\n os_auth_type: \"password\"\n os_identity_api_version: \"3\"\n amp_image_tag: <% $.amp_image_tag %>\n lb_mgmt_net_name: <% $.lb_mgmt_net_name %>\n lb_mgmt_subnet_name: <% $.lb_mgmt_subnet_name %>\n lb_sec_group_name: <% $.lb_sec_group_name %>\n lb_mgmt_subnet_cidr: <% $.lb_mgmt_subnet_cidr %>\n lb_mgmt_subnet_gateway: <% $.lb_mgmt_subnet_gateway %>\n lb_mgmt_subnet_pool_start: <% $.lb_mgmt_subnet_pool_start %>\n lb_mgmt_subnet_pool_end: <% $.lb_mgmt_subnet_pool_end %>\n ca_cert_path: <% $.ca_cert_path %>\n ca_private_key_path: <% $.ca_private_key_path %>\n ca_passphrase: <% $.ca_passphrase %>\n client_cert_path: <% $.client_cert_path %>\n generate_certs: <% $.generate_certs %>\n mgmt_port_dev: <% $.mgmt_port_dev %>\n auth_project_name: <% $.auth_project_name %>\n on-complete: purge_local_temp_dir\n purge_local_temp_dir:\n action: tripleo.files.remove_temp_dir path=<% $.undercloud_local_dir %>\n on-complete: purge_remote_temp_dir\n purge_remote_temp_dir:\n action: tripleo.files.remove_temp_dir path=<% $.undercloud_remote_dir %>\n\n", "name": "tripleo.octavia_post.v1", "tags": [], "created_at": "2018-06-26 04:26:44", "updated_at": null, "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "8a791b15-4dc3-43fd-a3b2-1eb3e12d4e50"}, {"definition": "---\nversion: '2.0'\nname: tripleo.baremetal.v1\ndescription: TripleO Baremetal Workflows\n\nworkflows:\n\n set_node_state:\n input:\n - node_uuid\n - state_action\n - target_state\n - error_states:\n # The default includes all failure states, even unused by TripleO.\n - 'error'\n - 'adopt failed'\n - 'clean failed'\n - 'deploy failed'\n - 'inspect failed'\n - 'rescue failed'\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_provision_state:\n on-success: wait_for_provision_state\n on-error: set_provision_state_failed\n action: 
ironic.node_set_provision_state node_uuid=<% $.node_uuid %> state=<% $.state_action %>\n\n set_provision_state_failed:\n publish:\n message: <% task(set_provision_state).result %>\n on-complete: fail\n\n wait_for_provision_state:\n action: ironic.node_get\n input:\n node_id: <% $.node_uuid %>\n fields: ['provision_state', 'last_error']\n timeout: 1200 #20 minutes\n retry:\n delay: 3\n count: 400\n continue-on: <% not task().result.provision_state in [$.target_state] + $.error_states %>\n on-complete:\n - state_not_reached: <% task().result.provision_state != $.target_state %>\n\n state_not_reached:\n publish:\n message: >-\n Node <% $.node_uuid %> did not reach state \"<% $.target_state %>\",\n the state is \"<% task(wait_for_provision_state).result.provision_state %>\",\n error: <% task(wait_for_provision_state).result.last_error %>\n on-complete: fail\n\n output-on-error:\n result: <% $.message %>\n\n set_power_state:\n input:\n - node_uuid\n - state_action\n - target_state\n - error_state: 'error'\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_power_state:\n on-success: wait_for_power_state\n on-error: set_power_state_failed\n action: ironic.node_set_power_state node_id=<% $.node_uuid %> state=<% $.state_action %>\n\n set_power_state_failed:\n publish:\n message: <% task(set_power_state).result %>\n on-complete: fail\n\n wait_for_power_state:\n action: ironic.node_get\n input:\n node_id: <% $.node_uuid %>\n fields: ['power_state', 'last_error']\n timeout: 120 #2 minutes\n retry:\n delay: 6\n count: 20\n continue-on: <% not task().result.power_state in [$.target_state, $.error_state] %>\n on-complete:\n - state_not_reached: <% task().result.power_state != $.target_state %>\n\n state_not_reached:\n publish:\n message: >-\n Node <% $.node_uuid %> did not reach power state \"<% $.target_state %>\",\n the state is \"<% task(wait_for_power_state).result.power_state %>\",\n error: <% task(wait_for_power_state).result.last_error %>\n on-complete: fail\n\n 
output-on-error:\n result: <% $.message %>\n\n manual_cleaning:\n input:\n - node_uuid\n - clean_steps\n - timeout: 7200 # 2 hours (cleaning can take really long)\n - retry_delay: 10\n - retry_count: 720\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_provision_state:\n on-success: wait_for_provision_state\n on-error: set_provision_state_failed\n action: ironic.node_set_provision_state node_uuid=<% $.node_uuid %> state='clean' cleansteps=<% $.clean_steps %>\n\n set_provision_state_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_provision_state).result %>\n\n wait_for_provision_state:\n on-success: send_message\n action: ironic.node_get node_id=<% $.node_uuid %>\n timeout: <% $.timeout %>\n retry:\n delay: <% $.retry_delay %>\n count: <% $.retry_count %>\n continue-on: <% task().result.provision_state != 'manageable' %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.manual_cleaning\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n validate_nodes:\n description: Validate nodes JSON\n\n input:\n - nodes_json\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n validate_nodes:\n action: tripleo.baremetal.validate_nodes\n on-success: send_message\n on-error: validation_failed\n input:\n nodes_json: <% $.nodes_json %>\n\n validation_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(validate_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.validate_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n 
on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n register_or_update:\n description: Take nodes JSON and create nodes in a \"manageable\" state\n\n input:\n - nodes_json\n - remove: False\n - queue_name: tripleo\n - kernel_name: null\n - ramdisk_name: null\n - instance_boot_option: local\n - initial_state: manageable\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n validate_input:\n workflow: tripleo.baremetal.v1.validate_nodes\n on-success: register_or_update_nodes\n on-error: validation_failed\n input:\n nodes_json: <% $.nodes_json %>\n queue_name: <% $.queue_name %>\n\n validation_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(validate_input).result %>\n registered_nodes: []\n\n register_or_update_nodes:\n action: tripleo.baremetal.register_or_update_nodes\n on-success:\n - set_nodes_managed: <% $.initial_state != \"enroll\" %>\n - send_message: <% $.initial_state = \"enroll\" %>\n on-error: set_status_failed_register_or_update_nodes\n input:\n nodes_json: <% $.nodes_json %>\n remove: <% $.remove %>\n kernel_name: <% $.kernel_name %>\n ramdisk_name: <% $.ramdisk_name %>\n instance_boot_option: <% $.instance_boot_option %>\n publish:\n registered_nodes: <% task().result %>\n new_nodes: <% task().result.where($.provision_state = 'enroll') %>\n\n set_status_failed_register_or_update_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(register_or_update_nodes).result %>\n registered_nodes: []\n\n set_nodes_managed:\n on-success:\n - set_nodes_available: <% $.initial_state = \"available\" %>\n - send_message: <% $.initial_state != \"available\" %>\n on-error: set_status_failed_nodes_managed\n workflow: tripleo.baremetal.v1.manage\n input:\n node_uuids: <% $.new_nodes.uuid %>\n queue_name: <% $.queue_name %>\n publish:\n status: SUCCESS\n message: <% $.new_nodes.len() %> node(s) successfully moved to the \"manageable\" state.\n\n set_status_failed_nodes_managed:\n on-success: send_message\n 
publish:\n status: FAILED\n message: <% task(set_nodes_managed).result %>\n\n set_nodes_available:\n on-success: send_message\n on-error: set_status_failed_nodes_available\n workflow: tripleo.baremetal.v1.provide node_uuids=<% $.new_nodes.uuid %> queue_name=<% $.queue_name %>\n publish:\n status: SUCCESS\n message: <% $.new_nodes.len() %> node(s) successfully moved to the \"available\" state.\n\n set_status_failed_nodes_available:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_nodes_available).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.register_or_update\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n registered_nodes: <% $.registered_nodes or [] %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n provide:\n description: Take a list of nodes and move them to \"available\"\n\n input:\n - node_uuids\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_nodes_available:\n on-success: cell_v2_discover_hosts\n on-error: set_status_failed_nodes_available\n with-items: uuid in <% $.node_uuids %>\n workflow: tripleo.baremetal.v1.set_node_state\n input:\n node_uuid: <% $.uuid %>\n queue_name: <% $.queue_name %>\n state_action: 'provide'\n target_state: 'available'\n\n set_status_failed_nodes_available:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_nodes_available).result %>\n\n cell_v2_discover_hosts:\n on-success: try_power_off\n on-error: cell_v2_discover_hosts_failed\n workflow: tripleo.baremetal.v1.cellv2_discovery\n input:\n node_uuids: <% $.node_uuids %>\n queue_name: <% $.queue_name %>\n timeout: 900 #15 minutes\n retry:\n delay: 30\n count: 30\n\n cell_v2_discover_hosts_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% 
task(cell_v2_discover_hosts).result %>\n\n try_power_off:\n on-success: send_message\n on-error: power_off_failed\n with-items: uuid in <% $.node_uuids %>\n workflow: tripleo.baremetal.v1.set_power_state\n input:\n node_uuid: <% $.uuid %>\n queue_name: <% $.queue_name %>\n state_action: 'off'\n target_state: 'power off'\n publish:\n status: SUCCESS\n message: <% $.node_uuids.len() %> node(s) successfully moved to the \"available\" state.\n\n power_off_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(try_power_off).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.provide\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n provide_manageable_nodes:\n description: Provide all nodes in a 'manageable' state.\n\n input:\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n get_manageable_nodes:\n action: ironic.node_list maintenance=False associated=False\n on-success: provide_manageable\n on-error: set_status_failed_get_manageable_nodes\n publish:\n managed_nodes: <% task().result.where($.provision_state = 'manageable').uuid %>\n\n set_status_failed_get_manageable_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_manageable_nodes).result %>\n\n provide_manageable:\n on-success: send_message\n workflow: tripleo.baremetal.v1.provide\n input:\n node_uuids: <% $.managed_nodes %>\n queue_name: <% $.queue_name %>\n publish:\n status: SUCCESS\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.provide_manageable_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% 
execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n manage:\n description: Set a list of nodes to 'manageable' state\n\n input:\n - node_uuids\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_nodes_manageable:\n on-success: send_message\n on-error: set_status_failed_nodes_manageable\n with-items: uuid in <% $.node_uuids %>\n workflow: tripleo.baremetal.v1.set_node_state\n input:\n node_uuid: <% $.uuid %>\n state_action: 'manage'\n target_state: 'manageable'\n error_states:\n # node going back to enroll designates power credentials failure\n - 'enroll'\n - 'error'\n\n set_status_failed_nodes_manageable:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_nodes_manageable).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.manage\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n _introspect:\n description: >\n An internal workflow. 
The tripleo.baremetal.v1.introspect workflow\n should be used for introspection.\n\n input:\n - node_uuid\n - timeout\n - queue_name\n\n output:\n result: <% task(start_introspection).result %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n start_introspection:\n action: baremetal_introspection.introspect uuid=<% $.node_uuid %>\n on-success: wait_for_introspection_to_finish\n on-error: set_status_failed_start_introspection\n\n set_status_failed_start_introspection:\n publish:\n status: FAILED\n message: <% task(start_introspection).result %>\n introspected_nodes: []\n on-success: send_message\n\n wait_for_introspection_to_finish:\n action: baremetal_introspection.wait_for_finish\n input:\n uuids: <% [$.node_uuid] %>\n # The interval is 10 seconds, so divide to make the overall timeout\n # in seconds correct.\n max_retries: <% $.timeout / 10 %>\n retry_interval: 10\n publish:\n introspected_node: <% task().result.values().first() %>\n status: <% bool(task().result.values().first().error) and \"FAILED\" or \"SUCCESS\" %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-success: wait_for_introspection_to_finish_success\n on-error: wait_for_introspection_to_finish_error\n\n wait_for_introspection_to_finish_success:\n publish:\n message: <% \"Introspection of node {0} completed. Status:{1}. 
Errors:{2}\".format($.introspected_node.uuid, $.status, $.introspected_node.error) %>\n on-success: send_message\n\n wait_for_introspection_to_finish_error:\n publish:\n message: <% \"Introspection of node {0} timed out.\".format($.node_uuid) %>\n on-success: send_message\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1._introspect\n payload:\n status: <% $.status %>\n message: <% $.message %>\n introspected_node: <% $.get('introspected_node') %>\n node_uuid: <% $.node_uuid %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n introspect:\n description: >\n Take a list of nodes and move them through introspection.\n\n By default each node will attempt introspection up to 3 times (two\n retries plus the initial attemp) if it fails. This behaviour can be\n modified by changing the max_retry_attempts input.\n\n The workflow will assume the node has timed out after 20 minutes (1200\n seconds). 
This can be changed by passing the node_timeout input in\n seconds.\n\n input:\n - node_uuids\n - run_validations: False\n - queue_name: tripleo\n - concurrency: 20\n - max_retry_attempts: 2\n - node_timeout: 1200\n\n tags:\n - tripleo-common-managed\n\n task-defaults:\n on-error: unhandled_error\n\n tasks:\n initialize:\n publish:\n introspection_attempt: 1\n on-complete:\n - run_validations: <% $.run_validations %>\n - introspect_nodes: <% not $.run_validations %>\n\n run_validations:\n workflow: tripleo.validations.v1.run_groups\n input:\n group_names:\n - 'pre-introspection'\n queue_name: <% $.queue_name %>\n on-success: introspect_nodes\n on-error: set_validations_failed\n\n set_validations_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(run_validations).result %>\n\n introspect_nodes:\n with-items: uuid in <% $.node_uuids %>\n concurrency: <% $.concurrency %>\n workflow: _introspect\n input:\n node_uuid: <% $.uuid %>\n queue_name: <% $.queue_name %>\n timeout: <% $.node_timeout %>\n # on-error is triggered if one or more nodes failed introspection. We\n # still go to get_introspection_status as it will collect the result\n # for each node. Unless we hit the retry limit.\n on-error:\n - get_introspection_status: <% $.introspection_attempt <= $.max_retry_attempts %>\n - max_retry_attempts_reached: <% $.introspection_attempt > $.max_retry_attempts %>\n on-success: get_introspection_status\n\n get_introspection_status:\n with-items: uuid in <% $.node_uuids %>\n action: baremetal_introspection.get_status\n input:\n uuid: <% $.uuid %>\n publish:\n introspected_nodes: <% task().result.toDict($.uuid, $) %>\n # Currently there is no way for us to ignore user introspection\n # aborts. 
This means we will retry aborted nodes until the Ironic API\n # gives us more details (error code or a boolean to show aborts etc.)\n # If a node hasn't finished, we consider it to be failed.\n # TODO(d0ugal): When possible, don't retry introspection of nodes\n # that a user manually aborted.\n failed_introspection: <% task().result.where($.finished = true and $.error != null).select($.uuid) + task().result.where($.finished = false).select($.uuid) %>\n publish-on-error:\n # If a node fails to start introspection, getting the status can fail.\n # When that happens, the result is a string and the nodes need to be\n # filtered out.\n introspected_nodes: <% task().result.where(isDict($)).toDict($.uuid, $) %>\n # If there was an error, the exception string we get doesn't give us\n # the UUID. So we use a set difference to find the UUIDs missing in\n # the results. These are then added to the failed nodes.\n failed_introspection: <% ($.node_uuids.toSet() - task().result.where(isDict($)).select($.uuid).toSet()) + task().result.where(isDict($)).where($.finished = true and $.error != null).toSet() + task().result.where(isDict($)).where($.finished = false).toSet() %>\n on-error: increase_attempt_counter\n on-success:\n - successful_introspection: <% $.failed_introspection.len() = 0 %>\n - increase_attempt_counter: <% $.failed_introspection.len() > 0 %>\n\n increase_attempt_counter:\n publish:\n introspection_attempt: <% $.introspection_attempt + 1 %>\n on-complete:\n retry_failed_nodes\n\n retry_failed_nodes:\n publish:\n status: RUNNING\n message: <% 'Retrying {0} nodes that failed introspection. 
Attempt {1} of {2} '.format($.failed_introspection.len(), $.introspection_attempt, $.max_retry_attempts + 1) %>\n # We are about to retry, update the tracking stats.\n node_uuids: <% $.failed_introspection %>\n on-success:\n - send_message\n - introspect_nodes\n\n max_retry_attempts_reached:\n publish:\n status: FAILED\n message: <% 'Retry limit reached with {0} nodes still failing introspection'.format($.failed_introspection.len()) %>\n on-complete: send_message\n\n successful_introspection:\n publish:\n status: SUCCESS\n message: Successfully introspected <% $.introspected_nodes.len() %> node(s).\n on-complete: send_message\n\n unhandled_error:\n publish:\n status: FAILED\n message: \"Unhandled workflow error\"\n on-complete: send_message\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.introspect\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n introspected_nodes: <% $.get('introspected_nodes', []) %>\n failed_introspection: <% $.get('failed_introspection', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n introspect_manageable_nodes:\n description: Introspect all nodes in a 'manageable' state.\n\n input:\n - run_validations: False\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n get_manageable_nodes:\n action: ironic.node_list maintenance=False associated=False\n on-success: validate_nodes\n on-error: set_status_failed_get_manageable_nodes\n publish:\n managed_nodes: <% task().result.where($.provision_state = 'manageable').uuid %>\n\n set_status_failed_get_manageable_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_manageable_nodes).result %>\n\n validate_nodes:\n on-success:\n - introspect_manageable: <% $.managed_nodes.len() > 0 %>\n - set_status_failed_no_nodes: <% $.managed_nodes.len() = 0 
%>\n\n set_status_failed_no_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: No manageable nodes to introspect. Check node states and maintenance.\n\n introspect_manageable:\n on-success: send_message\n on-error: set_status_introspect_manageable\n workflow: tripleo.baremetal.v1.introspect\n input:\n node_uuids: <% $.managed_nodes %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n introspected_nodes: <% task().result.introspected_nodes %>\n\n set_status_introspect_manageable:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(introspect_manageable).result %>\n introspected_nodes: []\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.introspect_manageable_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n introspected_nodes: <% $.get('introspected_nodes', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n configure:\n description: Take a list of manageable nodes and update their boot configuration.\n\n input:\n - node_uuids\n - queue_name: tripleo\n - kernel_name: bm-deploy-kernel\n - ramdisk_name: bm-deploy-ramdisk\n - instance_boot_option: null\n - root_device: null\n - root_device_minimum_size: 4\n - overwrite_root_device_hints: False\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n configure_boot:\n on-success: configure_root_device\n on-error: set_status_failed_configure_boot\n with-items: node_uuid in <% $.node_uuids %>\n action: tripleo.baremetal.configure_boot node_uuid=<% $.node_uuid %> kernel_name=<% $.kernel_name %> ramdisk_name=<% $.ramdisk_name %> instance_boot_option=<% $.instance_boot_option %>\n\n configure_root_device:\n on-success: send_message\n on-error: set_status_failed_configure_root_device\n with-items: node_uuid in <% $.node_uuids %>\n action: 
tripleo.baremetal.configure_root_device node_uuid=<% $.node_uuid %> root_device=<% $.root_device %> minimum_size=<% $.root_device_minimum_size %> overwrite=<% $.overwrite_root_device_hints %>\n publish:\n status: SUCCESS\n message: 'Successfully configured the nodes.'\n\n set_status_failed_configure_boot:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(configure_boot).result %>\n\n set_status_failed_configure_root_device:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(configure_root_device).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.configure\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n configure_manageable_nodes:\n description: Update the boot configuration of all nodes in 'manageable' state.\n\n input:\n - queue_name: tripleo\n - kernel_name: 'bm-deploy-kernel'\n - ramdisk_name: 'bm-deploy-ramdisk'\n - instance_boot_option: null\n - root_device: null\n - root_device_minimum_size: 4\n - overwrite_root_device_hints: False\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n get_manageable_nodes:\n action: ironic.node_list maintenance=False associated=False\n on-success: configure_manageable\n on-error: set_status_failed_get_manageable_nodes\n publish:\n managed_nodes: <% task().result.where($.provision_state = 'manageable').uuid %>\n\n configure_manageable:\n on-success: send_message\n on-error: set_status_failed_configure_manageable\n workflow: tripleo.baremetal.v1.configure\n input:\n node_uuids: <% $.managed_nodes %>\n queue_name: <% $.queue_name %>\n kernel_name: <% $.kernel_name %>\n ramdisk_name: <% $.ramdisk_name %>\n instance_boot_option: <% $.instance_boot_option %>\n root_device: <% $.root_device %>\n root_device_minimum_size: <% 
$.root_device_minimum_size %>\n overwrite_root_device_hints: <% $.overwrite_root_device_hints %>\n publish:\n message: 'Manageable nodes configured successfully.'\n\n set_status_failed_configure_manageable:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(configure_manageable).result %>\n\n set_status_failed_get_manageable_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_manageable_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.configure_manageable_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n tag_node:\n description: Tag a node with a role\n input:\n - node_uuid\n - role: null\n - queue_name: tripleo\n\n task-defaults:\n on-error: send_message\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n update_node:\n on-success: send_message\n action: tripleo.baremetal.update_node_capability node_uuid=<% $.node_uuid %> capability='profile' value=<% $.role %>\n publish:\n message: <% task().result %>\n status: SUCCESS\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.tag_node\n payload:\n status: <% $.get('status', 'FAILED') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n tag_nodes:\n description: Runs the tag_node workflow in a loop\n input:\n - tag_node_uuids\n - untag_node_uuids\n - role\n - plan: overcloud\n - queue_name: tripleo\n\n task-defaults:\n on-error: send_message\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n tag_nodes:\n with-items: node_uuid in <% $.tag_node_uuids %>\n workflow: tripleo.baremetal.v1.tag_node\n input:\n 
node_uuid: <% $.node_uuid %>\n queue_name: <% $.queue_name %>\n role: <% $.role %>\n concurrency: 1\n on-success: untag_nodes\n\n untag_nodes:\n with-items: node_uuid in <% $.untag_node_uuids %>\n workflow: tripleo.baremetal.v1.tag_node\n input:\n node_uuid: <% $.node_uuid %>\n queue_name: <% $.queue_name %>\n concurrency: 1\n on-success: update_role_parameters\n\n update_role_parameters:\n on-success: send_message\n action: tripleo.parameters.update_role role=<% $.role %> container=<% $.plan %>\n publish:\n message: <% task().result %>\n status: SUCCESS\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.tag_nodes\n payload:\n status: <% $.get('status', 'FAILED') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n nodes_with_profile:\n description: Find nodes with a specific profile\n input:\n - profile\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_active_nodes:\n action: ironic.node_list maintenance=false provision_state='active' detail=true\n on-success: get_available_nodes\n on-error: set_status_failed_get_active_nodes\n\n get_available_nodes:\n action: ironic.node_list maintenance=false provision_state='available' detail=true\n on-success: get_matching_nodes\n on-error: set_status_failed_get_available_nodes\n\n get_matching_nodes:\n with-items: node in <% task(get_available_nodes).result + task(get_active_nodes).result %>\n action: tripleo.baremetal.get_profile node=<% $.node %>\n on-success: send_message\n on-error: set_status_failed_get_matching_nodes\n publish:\n matching_nodes: <% let(input_profile_name => $.profile) -> task().result.where($.profile = $input_profile_name).uuid %>\n\n set_status_failed_get_active_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_active_nodes).result %>\n\n 
set_status_failed_get_available_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_available_nodes).result %>\n\n set_status_failed_get_matching_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_matching_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.nodes_with_profile\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n matching_nodes: <% $.matching_nodes or [] %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n create_raid_configuration:\n description: Create and apply RAID configuration for given nodes\n input:\n - node_uuids\n - configuration\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_configuration:\n with-items: node_uuid in <% $.node_uuids %>\n action: ironic.node_set_target_raid_config node_ident=<% $.node_uuid %> target_raid_config=<% $.configuration %>\n on-success: apply_configuration\n on-error: set_configuration_failed\n\n set_configuration_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_configuration).result %>\n\n apply_configuration:\n with-items: node_uuid in <% $.node_uuids %>\n workflow: tripleo.baremetal.v1.manual_cleaning\n input:\n node_uuid: <% $.node_uuid %>\n clean_steps:\n - interface: raid\n step: delete_configuration\n - interface: raid\n step: create_configuration\n timeout: 1800 # building RAID should be faster than general cleaning\n retry_count: 180\n retry_delay: 10\n on-success: send_message\n on-error: apply_configuration_failed\n publish:\n message: <% task().result %>\n status: SUCCESS\n\n apply_configuration_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(apply_configuration).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: 
count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.create_raid_configuration\n payload:\n status: <% $.get('status', 'FAILED') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n\n cellv2_discovery:\n description: Run cell_v2 host discovery\n\n input:\n - node_uuids\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n cell_v2_discover_hosts:\n on-success: wait_for_nova_resources\n on-error: cell_v2_discover_hosts_failed\n action: tripleo.baremetal.cell_v2_discover_hosts\n\n cell_v2_discover_hosts_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(cell_v2_discover_hosts).result %>\n\n wait_for_nova_resources:\n on-success: send_message\n on-error: wait_for_nova_resources_failed\n with-items: node_uuid in <% $.node_uuids %>\n action: nova.hypervisors_find hypervisor_hostname=<% $.node_uuid %>\n\n wait_for_nova_resources_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(wait_for_nova_resources).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.cellv2_discovery\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n\n discover_nodes:\n description: Run nodes discovery over the given IP range\n\n input:\n - ip_addresses\n - credentials\n - ports: [623]\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n get_all_nodes:\n action: ironic.node_list\n input:\n fields: [\"uuid\", \"driver\", \"driver_info\"]\n limit: 0\n on-success: get_candidate_nodes\n on-error: get_all_nodes_failed\n publish:\n existing_nodes: <% task().result %>\n\n get_all_nodes_failed:\n on-success: 
send_message\n publish:\n status: FAILED\n message: <% task(get_all_nodes).result %>\n\n get_candidate_nodes:\n action: tripleo.baremetal.get_candidate_nodes\n input:\n ip_addresses: <% $.ip_addresses %>\n credentials: <% $.credentials %>\n ports: <% $.ports %>\n existing_nodes: <% $.existing_nodes %>\n on-success: probe_nodes\n on-error: get_candidate_nodes_failed\n publish:\n candidates: <% task().result %>\n\n get_candidate_nodes_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_candidate_nodes).result %>\n\n probe_nodes:\n action: tripleo.baremetal.probe_node\n on-success: send_message\n on-error: probe_nodes_failed\n input:\n ip: <% $.node.ip %>\n port: <% $.node.port %>\n username: <% $.node.username %>\n password: <% $.node.password %>\n with-items:\n - node in <% $.candidates %>\n publish:\n nodes_json: <% task().result.where($ != null) %>\n\n probe_nodes_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(probe_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.discover_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n nodes_json: <% $.get('nodes_json', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n discover_and_enroll_nodes:\n description: Run nodes discovery over the given IP range and enroll nodes\n\n input:\n - ip_addresses\n - credentials\n - ports: [623]\n - kernel_name: null\n - ramdisk_name: null\n - instance_boot_option: local\n - initial_state: manageable\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n discover_nodes:\n workflow: tripleo.baremetal.v1.discover_nodes\n input:\n ip_addresses: <% $.ip_addresses %>\n ports: <% $.ports %>\n credentials: <% $.credentials %>\n queue_name: <% $.queue_name %>\n on-success: enroll_nodes\n 
on-error: discover_nodes_failed\n publish:\n nodes_json: <% task().result.nodes_json %>\n\n discover_nodes_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(discover_nodes).result %>\n\n enroll_nodes:\n workflow: tripleo.baremetal.v1.register_or_update\n input:\n nodes_json: <% $.nodes_json %>\n kernel_name: <% $.kernel_name %>\n ramdisk_name: <% $.ramdisk_name %>\n instance_boot_option: <% $.instance_boot_option %>\n initial_state: <% $.initial_state %>\n on-success: send_message\n on-error: enroll_nodes_failed\n publish:\n registered_nodes: <% task().result.registered_nodes %>\n\n enroll_nodes_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(enroll_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.discover_and_enroll_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n registered_nodes: <% $.get('registered_nodes', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1", "tags": [], "created_at": "2018-06-26 04:26:44", "updated_at": null, "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "cf4a45ce-3d63-44a4-bb79-4cc454eff440"}, {"definition": "---\nversion: '2.0'\nname: tripleo.scale.v1\ndescription: TripleO Overcloud Deployment Workflows v1\n\nworkflows:\n\n delete_node:\n description: deletes given overcloud nodes and updates the stack\n\n input:\n - container\n - nodes\n - timeout: 240\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n delete_node:\n action: tripleo.scale.delete_node nodes=<% $.nodes %> timeout=<% $.timeout %> container=<% $.container %>\n on-success: wait_for_stack_in_progress\n on-error: set_delete_node_failed\n\n set_delete_node_failed:\n on-success: send_message\n publish:\n 
status: FAILED\n message: <% task(delete_node).result %>\n\n wait_for_stack_in_progress:\n workflow: tripleo.stack.v1.wait_for_stack_in_progress stack=<% $.container %>\n on-success: wait_for_stack_complete\n on-error: wait_for_stack_in_progress_failed\n\n wait_for_stack_in_progress_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(wait_for_stack_in_progress).result %>\n\n wait_for_stack_complete:\n workflow: tripleo.stack.v1.wait_for_stack_complete_or_failed stack=<% $.container %>\n on-success: send_message\n on-error: wait_for_stack_complete_failed\n\n wait_for_stack_complete_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(wait_for_stack_complete).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.scale.v1.delete_node\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.scale.v1", "tags": [], "created_at": "2018-06-26 04:26:44", "updated_at": null, "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "eb71ceb5-2707-4ee0-beb3-42bd0af02456"}, {"definition": "---\nversion: '2.0'\nname: tripleo.storage.v1\ndescription: TripleO manages Ceph with ceph-ansible\n\nworkflows:\n ceph-install:\n # allows for additional extra_vars via workflow input\n input:\n - ansible_playbook_verbosity: 0\n - ansible_skip_tags: 'package-install,with_pkg'\n - ansible_env_variables: {}\n - ansible_extra_env_variables:\n ANSIBLE_CONFIG: /usr/share/ceph-ansible/ansible.cfg\n ANSIBLE_ACTION_PLUGINS: /usr/share/ceph-ansible/plugins/actions/\n ANSIBLE_ROLES_PATH: /usr/share/ceph-ansible/roles/\n ANSIBLE_RETRY_FILES_ENABLED: 'False'\n ANSIBLE_LOG_PATH: /var/log/mistral/ceph-install-workflow.log\n ANSIBLE_LIBRARY: /usr/share/ceph-ansible/library/\n 
ANSIBLE_SSH_RETRIES: '3'\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n DEFAULT_FORKS: '25'\n - ceph_ansible_extra_vars: {}\n - ceph_ansible_playbook: /usr/share/ceph-ansible/site-docker.yml.sample\n - node_data_lookup: '{}'\n tags:\n - tripleo-common-managed\n tasks:\n collect_puppet_hieradata:\n on-success: check_hieradata\n publish:\n hieradata: <% env().get('role_merged_configs', {}).values().select($.keys()).flatten().select(regex('^ceph::profile::params::osds$').search($)).where($ != null).toSet() %>\n check_hieradata:\n on-success:\n - set_blacklisted_ips: <% not bool($.hieradata) %>\n - fail(msg=<% 'Ceph deployment stopped, puppet-ceph hieradata found. Convert it into ceph-ansible variables. {0}'.format($.hieradata) %>): <% bool($.hieradata) %>\n set_blacklisted_ips:\n publish:\n blacklisted_ips: <% env().get('blacklisted_ip_addresses', []) %>\n on-success: set_ip_lists\n set_ip_lists:\n publish:\n mgr_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_mgr_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n mon_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_mon_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n osd_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_osd_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n mds_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_mds_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n rgw_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_rgw_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n nfs_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_nfs_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n rbdmirror_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_rbdmirror_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n client_ips: <% let(root => $) -> env().get('service_ips', 
{}).get('ceph_client_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n on-success: merge_ip_lists\n merge_ip_lists:\n publish:\n ips_list: <% ($.mgr_ips + $.mon_ips + $.osd_ips + $.mds_ips + $.rgw_ips + $.nfs_ips + $.rbdmirror_ips + $.client_ips).toSet() %>\n on-success: enable_ssh_admin\n enable_ssh_admin:\n workflow: tripleo.access.v1.enable_ssh_admin\n input:\n ssh_servers: <% $.ips_list %>\n on-success: get_private_key\n get_private_key:\n action: tripleo.validations.get_privkey\n publish:\n private_key: <% task().result %>\n on-success: make_fetch_directory\n make_fetch_directory:\n action: tripleo.files.make_temp_dir\n publish:\n fetch_directory: <% task().result.path %>\n on-success: collect_nodes_uuid\n collect_nodes_uuid:\n action: tripleo.ansible-playbook\n input:\n inventory:\n overcloud:\n hosts: <% $.ips_list.toDict($, {}) %>\n remote_user: tripleo-admin\n become: true\n become_user: root\n verbosity: 0\n ssh_private_key: <% $.private_key %>\n #NOTE(gfidente): set ANSIBLE_CALLBACK_WHITELIST to empty string to avoid spurious output\n #in the json output. 
The publish: directive will in fact parse the output.\n extra_env_variables:\n ANSIBLE_CALLBACK_WHITELIST: ''\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n ANSIBLE_STDOUT_CALLBACK: 'json'\n playbook:\n - hosts: overcloud\n gather_facts: no\n tasks:\n - name: collect machine id\n command: dmidecode -s system-uuid\n publish:\n ansible_output: <% json_parse(task().result.stderr) %>\n on-success: set_ip_uuids\n set_ip_uuids:\n publish:\n ip_uuids: <% let(root => $.ansible_output.get('plays')[0].get('tasks')[0].get('hosts')) -> $.ips_list.toDict($, $root.get($).get('stdout')) %>\n on-success: parse_node_data_lookup\n parse_node_data_lookup:\n publish:\n json_node_data_lookup: <% json_parse($.node_data_lookup) %>\n on-success: map_node_data_lookup\n map_node_data_lookup:\n publish:\n ips_data: <% let(uuids => $.ip_uuids, root => $) -> $.ips_list.toDict($, $root.json_node_data_lookup.get($uuids.get($, \"NO-UUID-FOUND\"), {})) %>\n on-success: set_role_vars\n set_role_vars:\n publish:\n # NOTE(gfidente): collect role settings from all tht roles\n mgr_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_mgr_ansible_vars', {})).aggregate($1 + $2) %>\n mon_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_mon_ansible_vars', {})).aggregate($1 + $2) %>\n osd_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_osd_ansible_vars', {})).aggregate($1 + $2) %>\n mds_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_mds_ansible_vars', {})).aggregate($1 + $2) %>\n rgw_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_rgw_ansible_vars', {})).aggregate($1 + $2) %>\n nfs_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_nfs_ansible_vars', {})).aggregate($1 + $2) %>\n rbdmirror_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_rbdmirror_ansible_vars', {})).aggregate($1 + $2) %>\n client_vars: <% env().get('role_merged_configs', 
{}).values().select($.get('ceph_client_ansible_vars', {})).aggregate($1 + $2) %>\n on-success: build_extra_vars\n build_extra_vars:\n publish:\n # NOTE(gfidente): merge vars from all ansible roles\n extra_vars: <% {'fetch_directory'=> $.fetch_directory} + $.mgr_vars + $.mon_vars + $.osd_vars + $.mds_vars + $.rgw_vars + $.nfs_vars + $.client_vars + $.rbdmirror_vars + $.ceph_ansible_extra_vars %>\n on-success: ceph_install\n ceph_install:\n with-items: playbook in <% list($.ceph_ansible_playbook).flatten() %>\n concurrency: 1\n action: tripleo.ansible-playbook\n input:\n inventory:\n mgrs:\n hosts: <% let(root => $) -> $.mgr_ips.toDict($, $root.ips_data.get($, {})) %>\n mons:\n hosts: <% let(root => $) -> $.mon_ips.toDict($, $root.ips_data.get($, {})) %>\n osds:\n hosts: <% let(root => $) -> $.osd_ips.toDict($, $root.ips_data.get($, {})) %>\n mdss:\n hosts: <% let(root => $) -> $.mds_ips.toDict($, $root.ips_data.get($, {})) %>\n rgws:\n hosts: <% let(root => $) -> $.rgw_ips.toDict($, $root.ips_data.get($, {})) %>\n nfss:\n hosts: <% let(root => $) -> $.nfs_ips.toDict($, $root.ips_data.get($, {})) %>\n rbdmirrors:\n hosts: <% let(root => $) -> $.rbdmirror_ips.toDict($, $root.ips_data.get($, {})) %>\n clients:\n hosts: <% let(root => $) -> $.client_ips.toDict($, $root.ips_data.get($, {})) %>\n all:\n vars: <% $.extra_vars %>\n playbook: <% $.playbook %>\n remote_user: tripleo-admin\n become: true\n become_user: root\n verbosity: <% $.ansible_playbook_verbosity %>\n ssh_private_key: <% $.private_key %>\n skip_tags: <% $.ansible_skip_tags %>\n extra_env_variables: <% $.ansible_extra_env_variables.mergeWith($.ansible_env_variables) %>\n extra_vars:\n ireallymeanit: 'yes'\n publish:\n output: <% task().result %>\n on-complete: purge_fetch_directory\n purge_fetch_directory:\n action: tripleo.files.remove_temp_dir path=<% $.fetch_directory %>\n", "name": "tripleo.storage.v1", "tags": [], "created_at": "2018-06-26 04:26:44", "updated_at": null, "scope": "private", 
"project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "f885c0b0-d74b-4b91-a497-5934fa4ab3e6"}, {"definition": "---\nversion: '2.0'\nname: tripleo.swift_ring.v1\ndescription: Rebalance and distribute Swift rings using Ansible\n\n\nworkflows:\n rebalance:\n tags:\n - tripleo-common-managed\n\n tasks:\n get_private_key:\n action: tripleo.validations.get_privkey\n on-success: deploy_rings\n\n deploy_rings:\n action: tripleo.ansible-playbook\n publish:\n output: <% task().result %>\n input:\n ssh_private_key: <% task(get_private_key).result %>\n ssh_common_args: '-o StrictHostKeyChecking=no'\n ssh_extra_args: '-o UserKnownHostsFile=/dev/null'\n verbosity: 1\n remote_user: heat-admin\n become: true\n become_user: root\n playbook: /usr/share/tripleo-common/playbooks/swift_ring_rebalance.yaml\n inventory: /usr/bin/tripleo-ansible-inventory\n use_openstack_credentials: true\n", "name": "tripleo.swift_ring.v1", "tags": [], "created_at": "2018-06-26 04:26:45", "updated_at": null, "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "39757856-dbd7-41f3-ad84-d1a9a7398cee"}, {"definition": "---\nversion: '2.0'\nname: tripleo.fernet_keys.v1\ndescription: TripleO fernet key rotation workflows\n\nworkflows:\n\n rotate_fernet_keys:\n\n input:\n - container\n - queue_name: tripleo\n - ansible_extra_env_variables:\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n rotate_keys:\n action: tripleo.parameters.rotate_fernet_keys container=<% $.container %>\n on-success: deploy_ssh_key\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n deploy_ssh_key:\n workflow: tripleo.validations.v1.copy_ssh_key\n on-success: get_privkey\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_privkey:\n action: tripleo.validations.get_privkey\n on-success: deploy_keys\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% 
task().result %>\n\n deploy_keys:\n action: tripleo.ansible-playbook\n input:\n hosts: keystone\n inventory: /usr/bin/tripleo-ansible-inventory\n ssh_private_key: <% task(get_privkey).result %>\n extra_env_variables: <% $.ansible_extra_env_variables + dict(TRIPLEO_PLAN_NAME=>$.container) %>\n verbosity: 0\n remote_user: heat-admin\n become: true\n extra_vars:\n fernet_keys: <% task(rotate_keys).result %>\n use_openstack_credentials: true\n playbook: /usr/share/tripleo-common/playbooks/rotate-keys.yaml\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.fernet_keys.v1.rotate_fernet_keys\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.fernet_keys.v1", "tags": [], "created_at": "2018-06-26 04:26:45", "updated_at": null, "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "b1b21be4-a014-4093-8bae-efcf68ce7756"}, {"definition": "---\nversion: '2.0'\nname: tripleo.networks.v1\ndescription: TripleO Overcloud Networks Workflows v1\n\nworkflows:\n\n validate_networks_input:\n description: >\n Validate that required fields are present.\n\n input:\n - networks\n - queue_name: tripleo\n\n output:\n result: <% task(validate_network_names).result %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n validate_network_names:\n publish:\n network_name_present: <% $.networks.all($.containsKey('name')) %>\n on-success:\n - set_status_success: <% $.network_name_present = true %>\n - set_status_error: <% $.network_name_present = false %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n set_status_success:\n on-success: notify_zaqar\n 
publish:\n status: SUCCESS\n message: <% task(validate_network_names).result %>\n\n set_status_error:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: \"One or more entries did not contain the required field 'name'\"\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.networks.v1.validate_networks_input\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n update_networks:\n description: >\n Takes data in networks parameter in json format, validates its contents,\n and persists them in network_data.yaml. After successful update,\n templates are regenerated.\n\n input:\n - container: overcloud\n - networks\n - network_data_file: 'network_data.yaml'\n - queue_name: tripleo\n\n output:\n network_data: <% $.network_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n validate_input:\n description: >\n validate the format of input (input includes required fields for\n each network)\n workflow: validate_networks_input\n input:\n networks: <% $.networks %>\n on-success: validate_network_files\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n validate_network_files:\n description: >\n validate that Network names exist in Swift container\n workflow: tripleo.plan_management.v1.validate_network_files\n input:\n container: <% $.container %>\n network_data: <% $.networks %>\n queue_name: <% $.queue_name %>\n publish:\n network_data: <% task().network_data %>\n on-success: get_available_networks\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n get_available_networks:\n workflow: tripleo.plan_management.v1.list_available_networks\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n publish:\n available_networks: <% task().result.available_networks %>\n 
on-success: get_current_networks\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n get_current_networks:\n workflow: tripleo.plan_management.v1.get_network_data\n input:\n container: <% $.container %>\n network_data_file: <% $.network_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n current_networks: <% task().result.network_data %>\n on-success: update_network_data\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n update_network_data:\n description: >\n Combine (or replace) the network data\n action: tripleo.plan.update_networks\n input:\n networks: <% $.available_networks %>\n current_networks: <% $.current_networks %>\n remove_all: false\n publish:\n new_network_data: <% task().result.network_data %>\n on-success: update_network_data_in_swift\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n update_network_data_in_swift:\n description: >\n update network_data.yaml object in Swift with data from workflow input\n action: swift.put_object\n input:\n container: <% $.container %>\n obj: <% $.network_data_file %>\n contents: <% yaml_dump($.new_network_data) %>\n on-success: regenerate_templates\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n regenerate_templates:\n action: tripleo.templates.process container=<% $.container %>\n on-success: get_networks\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n get_networks:\n description: >\n run GetNetworksAction to get updated contents of network_data.yaml and\n provide it as output\n workflow: tripleo.plan_management.v1.get_network_data\n input:\n container: <% $.container %>\n network_data_file: <% $.network_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n network_data: <% task().network_data %>\n on-success: set_status_success\n publish-on-error:\n status: FAILED\n 
message: <% task().result %>\n on-error: notify_zaqar\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(get_networks).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.networks.v1.update_networks\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.networks.v1", "tags": [], "created_at": "2018-06-26 04:26:45", "updated_at": null, "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "ef92c55f-7d92-4808-bc2d-88757ba27c36"}, {"definition": "---\nversion: '2.0'\nname: tripleo.package_update.v1\ndescription: TripleO update workflows\n\nworkflows:\n\n # Updates a workload cloud stack\n package_update_plan:\n description: Take a container and perform a package update with possible breakpoints\n\n input:\n - container\n - container_registry\n - ceph_ansible_playbook\n - timeout: 240\n - queue_name: tripleo\n - skip_deploy_identifier: False\n - config_dir: '/tmp/'\n\n tags:\n - tripleo-common-managed\n\n tasks:\n update:\n action: tripleo.package_update.update_stack\n input:\n timeout: <% $.timeout %>\n container: <% $.container %>\n container_registry: <% $.container_registry %>\n ceph_ansible_playbook: <% $.ceph_ansible_playbook %>\n on-success: clean_plan\n on-error: set_update_failed\n\n clean_plan:\n action: tripleo.plan.update_plan_environment\n input:\n container: <% $.container %>\n parameter: CephAnsiblePlaybook\n env_key: parameter_defaults\n delete: true\n on-success: send_message\n on-error: set_update_failed\n\n\n set_update_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(update).result %>\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: 
tripleo.package_update.v1.package_update_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n get_config:\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_config:\n action: tripleo.config.get_overcloud_config container=<% $.container %>\n publish:\n status: SUCCESS\n message: <% task().result %>\n publish-on-error:\n status: FAILED\n message: Init Minor update failed\n on-complete: send_message\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.package_update.v1.package_update_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n update_nodes:\n description: Take a container and perform an update node by node\n\n input:\n - node_user: heat-admin\n - nodes\n - playbook\n - inventory_file\n - ansible_queue_name: tripleo\n - module_path: /usr/share/ansible-modules\n - ansible_extra_env_variables:\n ANSIBLE_LOG_PATH: /var/log/mistral/package_update.log\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n - verbosity: 1\n - work_dir: /var/lib/mistral\n - skip_tags: ''\n\n tags:\n - tripleo-common-managed\n\n tasks:\n download_config:\n action: tripleo.config.download_config\n input:\n work_dir: <% $.work_dir %>/<% execution().id %>\n on-success: get_private_key\n on-error: node_update_failed\n\n get_private_key:\n action: tripleo.validations.get_privkey\n publish:\n private_key: <% task().result %>\n on-success: node_update\n\n node_update:\n action: tripleo.ansible-playbook\n input:\n inventory: <% $.inventory_file %>\n playbook: <% $.work_dir %>/<% execution().id %>/<% $.playbook %>\n remote_user: <% $.node_user %>\n become: true\n become_user: root\n verbosity: <% $.verbosity %>\n 
ssh_private_key: <% $.private_key %>\n extra_env_variables: <% $.ansible_extra_env_variables %>\n limit_hosts: <% $.nodes %>\n module_path: <% $.module_path %>\n queue_name: <% $.ansible_queue_name %>\n execution_id: <% execution().id %>\n skip_tags: <% $.skip_tags %>\n trash_output: true\n on-success:\n - node_update_passed: <% task().result.returncode = 0 %>\n - node_update_failed: <% task().result.returncode != 0 %>\n on-error: node_update_failed\n publish:\n output: <% task().result %>\n\n node_update_passed:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: Updated nodes - <% $.nodes %>\n\n node_update_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: Failed to update nodes - <% $.nodes %>, please see the logs.\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.ansible_queue_name %>\n messages:\n body:\n type: tripleo.package_update.v1.update_nodes\n payload:\n status: <% $.status %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n update_converge_plan:\n description: Take a container and perform the converge for minor update\n\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n remove_noop:\n action: tripleo.plan.remove_noop_deploystep\n input:\n container: <% $.container %>\n on-success: send_message\n on-error: set_update_failed\n\n set_update_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(remove_noop).result %>\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.package_update.v1.update_converge_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n converge_upgrade_plan:\n description: Take a container and perform the converge step of a major 
upgrade\n\n input:\n - container\n - timeout: 240\n - queue_name: tripleo\n - skip_deploy_identifier: False\n\n tags:\n - tripleo-common-managed\n\n tasks:\n remove_noop:\n action: tripleo.plan.remove_noop_deploystep\n input:\n container: <% $.container %>\n on-success: send_message\n on-error: set_update_failed\n\n set_update_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(upgrade_converge).result %>\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.major_upgrade.v1.converge_upgrade_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n ffwd_upgrade_converge_plan:\n description: ffwd-upgrade converge removes DeploymentSteps no-op from plan\n\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n remove_noop:\n action: tripleo.plan.remove_noop_deploystep\n input:\n container: <% $.container %>\n on-success: send_message\n on-error: set_update_failed\n\n set_update_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(remove_noop).result %>\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.package_update.v1.ffwd_upgrade_converge_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.package_update.v1", "tags": [], "created_at": "2018-06-26 04:26:45", "updated_at": null, "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "fc9b15c1-cc31-428d-8ad6-3c6babac7f28"}, {"definition": "---\nversion: '2.0'\nname: tripleo.skydive_ansible.v1\ndescription: TripleO manages Skydive with skydive-ansible\n\nworkflows:\n 
skydive_install:\n # allows for additional extra_vars via workflow input\n input:\n - ansible_playbook_verbosity: 0\n - ansible_extra_env_variables:\n ANSIBLE_ROLES_PATH: /usr/share/skydive-ansible/roles/\n ANSIBLE_RETRY_FILES_ENABLED: 'False'\n ANSIBLE_LOG_PATH: /var/log/mistral/skydive-install-workflow.log\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n - skydive_ansible_extra_vars: {}\n - skydive_ansible_playbook: /usr/share/skydive-ansible/playbook.yml.sample\n tags:\n - tripleo-common-managed\n tasks:\n set_blacklisted_ips:\n publish:\n blacklisted_ips: <% env().get('blacklisted_ip_addresses', []) %>\n on-success: set_ip_lists\n set_ip_lists:\n publish:\n agent_ips: <% let(root => $) -> env().get('service_ips', {}).get('skydive_agent_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n analyzer_ips: <% let(root => $) -> env().get('service_ips', {}).get('skydive_analyzer_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n on-success: enable_ssh_admin\n enable_ssh_admin:\n workflow: tripleo.access.v1.enable_ssh_admin\n input:\n ssh_servers: <% ($.agent_ips + $.analyzer_ips).toSet() %>\n on-success: get_private_key\n get_private_key:\n action: tripleo.validations.get_privkey\n publish:\n private_key: <% task().result %>\n on-success: set_fork_count\n set_fork_count:\n publish: # unique list of all IPs: make each list a set, take unions and count\n fork_count: <% min($.agent_ips.toSet().union($.analyzer_ips.toSet()).count(), 100) %> # don't use >100 forks\n on-success: set_role_vars\n set_role_vars:\n publish:\n # NOTE(sbaubeau): collect role settings from all tht roles\n agent_vars: <% env().get('role_merged_configs', {}).values().select($.get('skydive_agent_ansible_vars', {})).aggregate($1 + $2) %>\n analyzer_vars: <% env().get('role_merged_configs', {}).values().select($.get('skydive_analyzer_ansible_vars', {})).aggregate($1 + $2) %>\n on-success: build_extra_vars\n build_extra_vars:\n publish:\n # NOTE(sbaubeau): merge vars from all 
ansible roles\n extra_vars: <% $.agent_vars + $.analyzer_vars + $.skydive_ansible_extra_vars %>\n on-success: skydive_install\n skydive_install:\n action: tripleo.ansible-playbook\n input:\n inventory:\n agents:\n hosts: <% $.agent_ips.toDict($, {}) %>\n analyzers:\n hosts: <% $.analyzer_ips.toDict($, {}) %>\n playbook: <% $.skydive_ansible_playbook %>\n remote_user: tripleo-admin\n become: true\n become_user: root\n verbosity: <% $.ansible_playbook_verbosity %>\n forks: <% $.fork_count %>\n ssh_private_key: <% $.private_key %>\n extra_env_variables: <% $.ansible_extra_env_variables %>\n extra_vars: <% $.extra_vars %>\n publish:\n output: <% task().result %>\n", "name": "tripleo.skydive_ansible.v1", "tags": [], "created_at": "2018-06-26 04:26:46", "updated_at": null, "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "c219fcf4-2cd3-45c9-8383-f5865917e2c2"}, {"definition": "---\nversion: '2.0'\nname: tripleo.undercloud_backup.v1\ndescription: TripleO Undercloud backup workflows\n\nworkflows:\n\n backup:\n description: This workflow will launch the Undercloud backup\n tags:\n - tripleo-common-managed\n input:\n - sources_path: '/home/stack/'\n - queue_name: tripleo\n tasks:\n # Action to know if there is enough available space\n # to run the Undercloud backup\n get_free_space:\n action: tripleo.undercloud.get_free_space\n publish:\n status: SUCCESS\n message: <% task().result %>\n free_space: <% task().result %>\n on-success: create_backup_dir\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # We create a temp directory to store the Undercloud\n # backup\n create_backup_dir:\n action: tripleo.undercloud.create_backup_dir\n publish:\n status: SUCCESS\n message: <% task().result %>\n backup_path: <% task().result %>\n on-success: get_database_credentials\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # The Undercloud database password for the 
root\n # user is stored in a Mistral environment, we\n # need the password in order to run the database dump\n get_database_credentials:\n action: mistral.environments_get name='tripleo.undercloud-config'\n publish:\n status: SUCCESS\n message: <% task().result %>\n undercloud_db_password: <% task(get_database_credentials).result.variables.undercloud_db_password %>\n on-success: create_database_backup\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # Run the DB dump of all the databases and store the result\n # in the temporary folder\n create_database_backup:\n input:\n path: <% $.backup_path.path %>\n dbuser: root\n dbpassword: <% $.undercloud_db_password %>\n action: tripleo.undercloud.create_database_backup\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: create_fs_backup\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # This action will run the fs backup\n create_fs_backup:\n input:\n sources_path: <% $.sources_path %>\n path: <% $.backup_path.path %>\n action: tripleo.undercloud.create_file_system_backup\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: upload_backup\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # This action will push the backup to swift\n upload_backup:\n input:\n backup_path: <% $.backup_path.path %>\n action: tripleo.undercloud.upload_backup_to_swift\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: cleanup_backup\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # This action will remove the backup temp folder\n cleanup_backup:\n input:\n path: <% $.backup_path.path %>\n action: tripleo.undercloud.remove_temp_dir\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: send_message\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% 
task().result %>\n\n # Sending a message to show that the backup finished\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.undercloud_backup.v1.launch\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n execution: <% execution() %>\n message: <% $.get('message', '') %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.undercloud_backup.v1", "tags": [], "created_at": "2018-06-26 04:26:46", "updated_at": null, "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "fed5ce09-f486-43a5-81f8-2924993d83eb"}, {"definition": "---\nversion: '2.0'\nname: tripleo.derive_params.v1\ndescription: TripleO Workflows to derive deployment parameters from the introspected data\n\nworkflows:\n\n derive_parameters:\n description: The main workflow for deriving parameters from the introspected data\n\n input:\n - plan: overcloud\n - queue_name: tripleo\n - user_inputs: {}\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_flattened_parameters:\n action: tripleo.parameters.get_flatten container=<% $.plan %>\n publish:\n environment_parameters: <% task().result.environment_parameters %>\n heat_resource_tree: <% task().result.heat_resource_tree %>\n on-success:\n - get_roles: <% $.environment_parameters and $.heat_resource_tree %>\n - set_status_failed_get_flattened_parameters: <% (not $.environment_parameters) or (not $.heat_resource_tree) %>\n on-error: set_status_failed_get_flattened_parameters\n\n get_roles:\n action: tripleo.role.list container=<% $.plan %>\n publish:\n role_name_list: <% task().result %>\n on-success:\n - get_valid_roles: <% $.role_name_list %>\n - set_status_failed_get_roles: <% not $.role_name_list %>\n on-error: set_status_failed_on_error_get_roles\n\n # Obtain only the roles which has count > 0, by checking <RoleName>Count parameter, like ComputeCount\n get_valid_roles:\n publish:\n valid_role_name_list: <% 
let(hr => $.heat_resource_tree.parameters) -> $.role_name_list.where(int($hr.get(concat($, 'Count'), {}).get('default', 0)) > 0) %>\n on-success:\n - for_each_role: <% $.valid_role_name_list %>\n - set_status_failed_get_valid_roles: <% not $.valid_role_name_list %>\n\n # Execute the basic preparation workflow for each role to get introspection data\n for_each_role:\n with-items: role_name in <% $.valid_role_name_list %>\n concurrency: 1\n workflow: _derive_parameters_per_role\n input:\n plan: <% $.plan %>\n role_name: <% $.role_name %>\n environment_parameters: <% $.environment_parameters %>\n heat_resource_tree: <% $.heat_resource_tree %>\n user_inputs: <% $.user_inputs %>\n publish:\n # Gets all the roles derived parameters as dictionary\n result: <% task().result.select($.get('derived_parameters', {})).sum() %>\n on-success: reset_derive_parameters_in_plan\n on-error: set_status_failed_for_each_role\n\n reset_derive_parameters_in_plan:\n action: tripleo.parameters.reset\n input:\n container: <% $.plan %>\n key: 'derived_parameters'\n on-success:\n # Add the derived parameters to the deployment plan only when $.result\n # (the derived parameters) is non-empty. 
Otherwise, we're done.\n - update_derive_parameters_in_plan: <% $.result %>\n - send_message: <% not $.result %>\n on-error: set_status_failed_reset_derive_parameters_in_plan\n\n update_derive_parameters_in_plan:\n action: tripleo.parameters.update\n input:\n container: <% $.plan %>\n key: 'derived_parameters'\n parameters: <% $.get('result', {}) %>\n on-success: send_message\n on-error: set_status_failed_update_derive_parameters_in_plan\n\n set_status_failed_get_flattened_parameters:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_flattened_parameters).result %>\n\n set_status_failed_get_roles:\n on-success: send_message\n publish:\n status: FAILED\n message: \"Unable to determine the list of roles in the deployment plan\"\n\n set_status_failed_on_error_get_roles:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_roles).result %>\n\n set_status_failed_get_valid_roles:\n on-success: send_message\n publish:\n status: FAILED\n message: 'Unable to determine the list of valid roles in the deployment plan.'\n\n set_status_failed_for_each_role:\n on-success: update_message_format\n publish:\n status: FAILED\n # gets the status and message for all roles from task result.\n message: <% task(for_each_role).result.select(dict('role_name' => $.role_name, 'status' => $.get('status', 'SUCCESS'), 'message' => $.get('message', ''))) %>\n\n update_message_format:\n on-success: send_message\n publish:\n # updates the message format(Role 'role name': message) for each roles which are failed and joins the message list as string with ', ' separator.\n message: <% $.message.where($.get('status', 'SUCCESS') != 'SUCCESS').select(concat(\"Role '{}':\".format($.role_name), \" \", $.get('message', '(error unknown)'))).join(', ') %>\n\n set_status_failed_reset_derive_parameters_in_plan:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(reset_derive_parameters_in_plan).result %>\n\n 
set_status_failed_update_derive_parameters_in_plan:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(update_derive_parameters_in_plan).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.derive_params.v1.derive_parameters\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n result: <% $.get('result', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n\n\n _derive_parameters_per_role:\n description: >\n Workflow which runs per role to get the introspection data on the first matching node assigned to role.\n Once introspection data is fetched, this worklow will trigger the actual derive parameters workflow\n input:\n - plan\n - role_name\n - environment_parameters\n - heat_resource_tree\n - user_inputs\n\n output:\n derived_parameters: <% $.get('derived_parameters', {}) %>\n # Need role_name in output parameter to display the status for all roles in main workflow when any role fails here.\n role_name: <% $.role_name %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_role_info:\n workflow: _get_role_info\n input:\n role_name: <% $.role_name %>\n heat_resource_tree: <% $.heat_resource_tree %>\n publish:\n role_features: <% task().result.get('role_features', []) %>\n role_services: <% task().result.get('role_services', []) %>\n on-success:\n # Continue only if there are features associated with this role. 
Otherwise, we're done.\n - get_flavor_name: <% $.role_features %>\n on-error: set_status_failed_get_role_info\n\n # Getting introspection data workflow, which will take care of\n # 1) profile and flavor based mapping\n # 2) Nova placement api based mapping\n # Currently we have implemented profile and flavor based mapping\n # TODO-Nova placement api based mapping is pending, we will enchance it later.\n get_flavor_name:\n publish:\n flavor_name: <% let(param_name => concat('Overcloud', $.role_name, 'Flavor').replace('OvercloudControllerFlavor', 'OvercloudControlFlavor')) -> $.heat_resource_tree.parameters.get($param_name, {}).get('default', '') %>\n on-success:\n - get_profile_name: <% $.flavor_name %>\n - set_status_failed_get_flavor_name: <% not $.flavor_name %>\n\n get_profile_name:\n action: tripleo.parameters.get_profile_of_flavor flavor_name=<% $.flavor_name %>\n publish:\n profile_name: <% task().result %>\n on-success: get_profile_node\n on-error: set_status_failed_get_profile_name\n\n get_profile_node:\n workflow: tripleo.baremetal.v1.nodes_with_profile\n input:\n profile: <% $.profile_name %>\n publish:\n profile_node_uuid: <% task().result.matching_nodes.first('') %>\n on-success:\n - get_introspection_data: <% $.profile_node_uuid %>\n - set_status_failed_no_matching_node_get_profile_node: <% not $.profile_node_uuid %>\n on-error: set_status_failed_on_error_get_profile_node\n\n get_introspection_data:\n action: baremetal_introspection.get_data uuid=<% $.profile_node_uuid %>\n publish:\n hw_data: <% task().result %>\n # Establish an empty dictionary of derived_parameters prior to\n # invoking the individual \"feature\" algorithms\n derived_parameters: <% dict() %>\n on-success: handle_dpdk_feature\n on-error: set_status_failed_get_introspection_data\n\n handle_dpdk_feature:\n on-success:\n - get_dpdk_derive_params: <% $.role_features.contains('DPDK') %>\n - handle_sriov_feature: <% not $.role_features.contains('DPDK') %>\n\n get_dpdk_derive_params:\n 
workflow: tripleo.derive_params_formulas.v1.dpdk_derive_params\n input:\n plan: <% $.plan %>\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n user_inputs: <% $.user_inputs %>\n publish:\n derived_parameters: <% task().result.get('derived_parameters', {}) %>\n on-success: handle_sriov_feature\n on-error: set_status_failed_get_dpdk_derive_params\n\n handle_sriov_feature:\n on-success:\n - get_sriov_derive_params: <% $.role_features.contains('SRIOV') %>\n - handle_host_feature: <% not $.role_features.contains('SRIOV') %>\n\n get_sriov_derive_params:\n workflow: tripleo.derive_params_formulas.v1.sriov_derive_params\n input:\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n derived_parameters: <% $.derived_parameters %>\n publish:\n derived_parameters: <% task().result.get('derived_parameters', {}) %>\n on-success: handle_host_feature\n on-error: set_status_failed_get_sriov_derive_params\n\n handle_host_feature:\n on-success:\n - get_host_derive_params: <% $.role_features.contains('HOST') %>\n - handle_hci_feature: <% not $.role_features.contains('HOST') %>\n\n get_host_derive_params:\n workflow: tripleo.derive_params_formulas.v1.host_derive_params\n input:\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n user_inputs: <% $.user_inputs %>\n derived_parameters: <% $.derived_parameters %>\n publish:\n derived_parameters: <% task().result.get('derived_parameters', {}) %>\n on-success: handle_hci_feature\n on-error: set_status_failed_get_host_derive_params\n\n handle_hci_feature:\n on-success:\n - get_hci_derive_params: <% $.role_features.contains('HCI') %>\n\n get_hci_derive_params:\n workflow: tripleo.derive_params_formulas.v1.hci_derive_params\n input:\n role_name: <% $.role_name %>\n environment_parameters: <% $.environment_parameters %>\n heat_resource_tree: <% $.heat_resource_tree %>\n introspection_data: <% $.hw_data %>\n user_inputs: <% $.user_inputs %>\n derived_parameters: <% $.derived_parameters %>\n publish:\n derived_parameters: 
<% task().result.get('derived_parameters', {}) %>\n on-error: set_status_failed_get_hci_derive_params\n # Done (no more derived parameter features)\n\n set_status_failed_get_role_info:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_role_info).result.get('message', '') %>\n on-success: fail\n\n set_status_failed_get_flavor_name:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% \"Unable to determine flavor for role '{0}'\".format($.role_name) %>\n on-success: fail\n\n set_status_failed_get_profile_name:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_profile_name).result %>\n on-success: fail\n\n set_status_failed_no_matching_node_get_profile_node:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% \"Unable to determine matching node for profile '{0}'\".format($.profile_name) %>\n on-success: fail\n\n set_status_failed_on_error_get_profile_node:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_profile_node).result %>\n on-success: fail\n\n set_status_failed_get_introspection_data:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_introspection_data).result %>\n on-success: fail\n\n set_status_failed_get_dpdk_derive_params:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_dpdk_derive_params).result %>\n on-success: fail\n\n set_status_failed_get_sriov_derive_params:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_sriov_derive_params).result %>\n on-success: fail\n\n set_status_failed_get_host_derive_params:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_host_derive_params).result %>\n on-success: fail\n\n set_status_failed_get_hci_derive_params:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_hci_derive_params).result %>\n on-success: fail\n\n\n 
_get_role_info:\n description: >\n Workflow that determines the list of derived parameter features (DPDK,\n HCI, etc.) for a role based on the services assigned to the role.\n\n input:\n - role_name\n - heat_resource_tree\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_resource_chains:\n publish:\n resource_chains: <% $.heat_resource_tree.resources.values().where($.get('type', '') = 'OS::Heat::ResourceChain') %>\n on-success:\n - get_role_chain: <% $.resource_chains %>\n - set_status_failed_get_resource_chains: <% not $.resource_chains %>\n\n get_role_chain:\n publish:\n role_chain: <% let(chain_name => concat($.role_name, 'ServiceChain'))-> $.heat_resource_tree.resources.values().where($.name = $chain_name).first({}) %>\n on-success:\n - get_service_chain: <% $.role_chain %>\n - set_status_failed_get_role_chain: <% not $.role_chain %>\n\n get_service_chain:\n publish:\n service_chain: <% let(resources => $.role_chain.resources)-> $.resource_chains.where($resources.contains($.id)).first('') %>\n on-success:\n - get_role_services: <% $.service_chain %>\n - set_status_failed_get_service_chain: <% not $.service_chain %>\n\n get_role_services:\n publish:\n role_services: <% let(resources => $.heat_resource_tree.resources)-> $.service_chain.resources.select($resources.get($)) %>\n on-success:\n - check_features: <% $.role_services %>\n - set_status_failed_get_role_services: <% not $.role_services %>\n\n check_features:\n on-success: build_feature_dict\n publish:\n # The role supports the DPDK feature if the NeutronDatapathType parameter is present\n dpdk: <% let(resources => $.heat_resource_tree.resources) -> $.role_services.any($.get('parameters', []).contains('NeutronDatapathType') or $.get('resources', []).select($resources.get($)).any($.get('parameters', []).contains('NeutronDatapathType'))) %>\n\n # The role supports the DPDK feature in ODL if the OvsEnableDpdk parameter value is true in role parameters.\n odl_dpdk: <% let(role => $.role_name) -> 
$.heat_resource_tree.parameters.get(concat($role, 'Parameters'), {}).get('default', {}).get('OvsEnableDpdk', false) %>\n\n # The role supports the SRIOV feature if it includes NeutronSriovAgent services.\n sriov: <% $.role_services.any($.get('type', '').endsWith('::NeutronSriovAgent')) %>\n\n # The role supports the HCI feature if it includes both NovaCompute and CephOSD services.\n hci: <% $.role_services.any($.get('type', '').endsWith('::NovaCompute')) and $.role_services.any($.get('type', '').endsWith('::CephOSD')) %>\n\n build_feature_dict:\n on-success: filter_features\n publish:\n feature_dict: <% dict(DPDK => ($.dpdk or $.odl_dpdk), SRIOV => $.sriov, HOST => ($.dpdk or $.odl_dpdk or $.sriov), HCI => $.hci) %>\n\n filter_features:\n publish:\n # The list of features that are enabled (i.e. are true in the feature_dict).\n role_features: <% let(feature_dict => $.feature_dict)-> $feature_dict.keys().where($feature_dict[$]) %>\n\n set_status_failed_get_resource_chains:\n publish:\n message: <% 'Unable to locate any resource chains in the heat resource tree' %>\n on-success: fail\n\n set_status_failed_get_role_chain:\n publish:\n message: <% \"Unable to determine the service chain resource for role '{0}'\".format($.role_name) %>\n on-success: fail\n\n set_status_failed_get_service_chain:\n publish:\n message: <% \"Unable to determine the service chain for role '{0}'\".format($.role_name) %>\n on-success: fail\n\n set_status_failed_get_role_services:\n publish:\n message: <% \"Unable to determine list of services for role '{0}'\".format($.role_name) %>\n on-success: fail\n", "name": "tripleo.derive_params.v1", "tags": [], "created_at": "2018-06-26 04:26:47", "updated_at": null, "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "dac295e6-ddf0-41ad-b133-ba413692718b"}, {"definition": "---\nversion: '2.0'\nname: tripleo.swift_rings_backup.v1\ndescription: TripleO Swift Rings backup container Deployment Workflow v1\n\nworkflows:\n\n 
create_swift_rings_backup_container_plan:\n description: >\n This plan ensures existence of container for Swift Rings backup.\n input:\n - container\n - queue_name: tripleo\n tags:\n - tripleo-common-managed\n tasks:\n\n swift_rings_container:\n publish:\n swift_rings_container: \"<% $.container %>-swift-rings\"\n swift_rings_tar: \"swift-rings.tar.gz\"\n on-complete: check_container\n\n check_container:\n action: swift.head_container container=<% $.swift_rings_container %>\n on-success: get_tempurl\n on-error: create_container\n\n create_container:\n action: swift.put_container container=<% $.swift_rings_container %>\n on-error: set_create_container_failed\n on-success: get_tempurl\n\n get_tempurl:\n action: tripleo.swift.tempurl\n on-success: set_get_tempurl\n input:\n container: <% $.swift_rings_container %>\n obj: <% $.swift_rings_tar %>\n\n set_get_tempurl:\n action: tripleo.parameters.update\n input:\n parameters:\n SwiftRingGetTempurl: <% task(get_tempurl).result %>\n container: <% $.container %>\n on-success: put_tempurl\n\n put_tempurl:\n action: tripleo.swift.tempurl\n on-success: set_put_tempurl\n input:\n container: <% $.swift_rings_container %>\n obj: <% $.swift_rings_tar %>\n method: \"PUT\"\n\n set_put_tempurl:\n action: tripleo.parameters.update\n input:\n parameters:\n SwiftRingPutTempurl: <% task(put_tempurl).result %>\n container: <% $.container %>\n on-success: set_status_success\n on-error: set_put_tempurl_failed\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(set_put_tempurl).result %>\n\n set_put_tempurl_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(set_put_tempurl).result %>\n\n set_create_container_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_container).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: 
tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.swift_rings_backup.v1", "tags": [], "created_at": "2018-06-26 04:26:47", "updated_at": null, "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "f74f74ae-84a0-46fa-b70b-2c871323eb99"}]} > >2018-06-26 11:15:08,948 DEBUG: HTTP GET http://192.0.3.1:8989/v2/workbooks 200 >2018-06-26 11:15:08,951 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.access.v1 -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:08,960 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workbooks/tripleo.access.v1 HTTP/1.1" 204 0 >2018-06-26 11:15:08,960 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:08 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:08,960 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.access.v1 204 >2018-06-26 11:15:08,960 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.stack.v1 -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:08,968 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workbooks/tripleo.stack.v1 HTTP/1.1" 204 0 >2018-06-26 11:15:08,968 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:08 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:08,968 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.stack.v1 204 >2018-06-26 11:15:08,969 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.validations.v1 -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:08,976 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workbooks/tripleo.validations.v1 HTTP/1.1" 204 0 >2018-06-26 11:15:08,976 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:08 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:08,976 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.validations.v1 204 >2018-06-26 11:15:08,977 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.derive_params_formulas.v1 -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:08,984 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workbooks/tripleo.derive_params_formulas.v1 HTTP/1.1" 204 0 >2018-06-26 11:15:08,985 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:08 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:08,985 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.derive_params_formulas.v1 204 >2018-06-26 11:15:08,985 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.plan_management.v1 -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:08,993 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workbooks/tripleo.plan_management.v1 HTTP/1.1" 204 0 >2018-06-26 11:15:08,993 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:08 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:08,993 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.plan_management.v1 204 >2018-06-26 11:15:08,993 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.support.v1 -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,001 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workbooks/tripleo.support.v1 HTTP/1.1" 204 0 >2018-06-26 11:15:09,001 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
>
>2018-06-26 11:15:09,001 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.support.v1 204
>2018-06-26 11:15:09,001 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.deployment.v1 -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,009 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workbooks/tripleo.deployment.v1 HTTP/1.1" 204 0
>2018-06-26 11:15:09,010 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,010 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.deployment.v1 204
>2018-06-26 11:15:09,010 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.octavia_post.v1 -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,017 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workbooks/tripleo.octavia_post.v1 HTTP/1.1" 204 0
>2018-06-26 11:15:09,018 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,018 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.octavia_post.v1 204
>2018-06-26 11:15:09,018 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.baremetal.v1 -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,026 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workbooks/tripleo.baremetal.v1 HTTP/1.1" 204 0
>2018-06-26 11:15:09,026 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,026 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.baremetal.v1 204
>2018-06-26 11:15:09,026 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.scale.v1 -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,034 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workbooks/tripleo.scale.v1 HTTP/1.1" 204 0
>2018-06-26 11:15:09,035 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,035 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.scale.v1 204
>2018-06-26 11:15:09,035 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.storage.v1 -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,042 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workbooks/tripleo.storage.v1 HTTP/1.1" 204 0
>2018-06-26 11:15:09,043 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,043 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.storage.v1 204
>2018-06-26 11:15:09,043 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.swift_ring.v1 -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,052 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workbooks/tripleo.swift_ring.v1 HTTP/1.1" 204 0
>2018-06-26 11:15:09,053 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,053 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.swift_ring.v1 204
>2018-06-26 11:15:09,053 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.fernet_keys.v1 -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,060 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workbooks/tripleo.fernet_keys.v1 HTTP/1.1" 204 0
>2018-06-26 11:15:09,061 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,061 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.fernet_keys.v1 204
>2018-06-26 11:15:09,061 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.networks.v1 -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,068 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workbooks/tripleo.networks.v1 HTTP/1.1" 204 0
>2018-06-26 11:15:09,069 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,069 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.networks.v1 204
>2018-06-26 11:15:09,069 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.package_update.v1 -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,077 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workbooks/tripleo.package_update.v1 HTTP/1.1" 204 0
>2018-06-26 11:15:09,077 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,077 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.package_update.v1 204
>2018-06-26 11:15:09,077 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.skydive_ansible.v1 -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,085 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workbooks/tripleo.skydive_ansible.v1 HTTP/1.1" 204 0
>2018-06-26 11:15:09,085 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,085 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.skydive_ansible.v1 204
>2018-06-26 11:15:09,085 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.undercloud_backup.v1 -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,093 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workbooks/tripleo.undercloud_backup.v1 HTTP/1.1" 204 0
>2018-06-26 11:15:09,093 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,093 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.undercloud_backup.v1 204
>2018-06-26 11:15:09,093 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.derive_params.v1 -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,101 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workbooks/tripleo.derive_params.v1 HTTP/1.1" 204 0
>2018-06-26 11:15:09,101 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,101 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.derive_params.v1 204
>2018-06-26 11:15:09,102 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.swift_rings_backup.v1 -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,109 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workbooks/tripleo.swift_rings_backup.v1 HTTP/1.1" 204 0
>2018-06-26 11:15:09,110 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,110 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workbooks/tripleo.swift_rings_backup.v1 204
>2018-06-26 11:15:09,110 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8989/v2/workflows -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,155 DEBUG: http://192.0.3.1:8989 "GET /v2/workflows HTTP/1.1" 200 267344
>2018-06-26 11:15:09,204 DEBUG: RESP: [200] Content-Length: 267344 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: {"workflows": [{"definition": "create_admin_via_nova:\n workflow: tripleo.access.v1.create_admin_via_nova\n input:\n queue_name: <% $.queue_name %>\n ssh_servers: <% $.ssh_servers %>\n tasks: <% $.create_admin_tasks %>\n overcloud_admin: <% $.overcloud_admin %>\n\n# SSH variant\n", "name": "tripleo.access.v1.create_admin_via_nova", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:33", "namespace": "", "updated_at": null, "scope": "private", "input": "tasks, queue_name=tripleo, ssh_servers=[], overcloud_admin=tripleo-admin, ansible_extra_env_variables={u'ANSIBLE_HOST_KEY_CHECKING': u'False'}", "project_id": 
"13835fbb8e0947a9b3fa174b9a22cdb9", "id": "6a286f02-77bc-4bfc-af6c-1ff29edb0b11"}, {"definition": "enable_ssh_admin:\n description: >-\n This workflow creates an admin user on the overcloud nodes,\n which can then be used for connecting for automated\n administrative or deployment tasks, e.g. via Ansible. The\n workflow can be used both for Nova-managed and split-stack\n deployments, assuming the correct input values are passed\n in. The workflow defaults to Nova-managed approach, for which no\n additional parameters need to be supplied. In case of\n split-stack, temporary ssh connection details (user, key, list\n of servers) need to be provided -- these are only used\n temporarily to create the actual ssh admin user for use by\n Mistral.\n tags:\n - tripleo-common-managed\n input:\n - ssh_private_key: null\n - ssh_user: null\n - ssh_servers: []\n - overcloud_admin: tripleo-admin\n - queue_name: tripleo\n tasks:\n get_pubkey:\n action: tripleo.validations.get_pubkey\n on-success: generate_playbook\n publish:\n pubkey: <% task().result %>\n\n generate_playbook:\n on-success:\n - create_admin_via_nova: <% $.ssh_private_key = null %>\n - create_admin_via_ssh: <% $.ssh_private_key != null %>\n publish:\n create_admin_tasks:\n - name: create user <% $.overcloud_admin %>\n user:\n name: '<% $.overcloud_admin %>'\n - name: grant admin rights to user <% $.overcloud_admin %>\n copy:\n dest: /etc/sudoers.d/<% $.overcloud_admin %>\n content: |\n <% $.overcloud_admin %> ALL=(ALL) NOPASSWD:ALL\n mode: 0440\n - name: ensure .ssh dir exists for user <% $.overcloud_admin %>\n file:\n path: /home/<% $.overcloud_admin %>/.ssh\n state: directory\n owner: <% $.overcloud_admin %>\n group: <% $.overcloud_admin %>\n mode: 0700\n - name: ensure authorized_keys file exists for user <% $.overcloud_admin %>\n file:\n path: /home/<% $.overcloud_admin %>/.ssh/authorized_keys\n state: touch\n owner: <% $.overcloud_admin %>\n group: <% $.overcloud_admin %>\n mode: 0700\n - name: authorize 
TripleO Mistral key for user <% $.overcloud_admin %>\n lineinfile:\n path: /home/<% $.overcloud_admin %>/.ssh/authorized_keys\n line: <% $.pubkey %>\n regexp: \"Generated by TripleO\"\n\n # Nova variant\n create_admin_via_nova:\n workflow: tripleo.access.v1.create_admin_via_nova\n input:\n queue_name: <% $.queue_name %>\n ssh_servers: <% $.ssh_servers %>\n tasks: <% $.create_admin_tasks %>\n overcloud_admin: <% $.overcloud_admin %>\n\n # SSH variant\n create_admin_via_ssh:\n workflow: tripleo.access.v1.create_admin_via_ssh\n input:\n ssh_private_key: <% $.ssh_private_key %>\n ssh_user: <% $.ssh_user %>\n ssh_servers: <% $.ssh_servers %>\n tasks: <% $.create_admin_tasks %>\n", "name": "tripleo.access.v1.enable_ssh_admin", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:33", "namespace": "", "updated_at": null, "scope": "private", "input": "ssh_private_key=None, ssh_user=None, ssh_servers=[], overcloud_admin=tripleo-admin, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "cfade670-95da-449a-9f19-10b35cd9eb8e"}, {"definition": "create_admin_via_ssh:\n workflow: tripleo.access.v1.create_admin_via_ssh\n input:\n ssh_private_key: <% $.ssh_private_key %>\n ssh_user: <% $.ssh_user %>\n ssh_servers: <% $.ssh_servers %>\n tasks: <% $.create_admin_tasks %>\n", "name": "tripleo.access.v1.create_admin_via_ssh", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:33", "namespace": "", "updated_at": null, "scope": "private", "input": "tasks, ssh_private_key, ssh_user, ssh_servers, ansible_extra_env_variables={u'ANSIBLE_HOST_KEY_CHECKING': u'False'}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "fb69905c-40fa-40fb-a802-50b4cc73b865"}, {"definition": "wait_for_stack_does_not_exist:\n input:\n - stack\n - timeout: 3600\n\n tags:\n - tripleo-common-managed\n\n tasks:\n wait_for_stack_does_not_exist:\n action: heat.stacks_list\n timeout: <% $.timeout %>\n retry:\n delay: 15\n count: <% $.timeout / 15 %>\n 
continue-on: <% $.stack in task(wait_for_stack_does_not_exist).result.select([$.stack_name, $.id]).flatten() %>\n", "name": "tripleo.stack.v1.wait_for_stack_does_not_exist", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:34", "namespace": "", "updated_at": null, "scope": "private", "input": "stack, timeout=3600", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "1d298e96-bb23-412c-8d93-84bf5634a915"}, {"definition": "delete_stack:\n input:\n - stack\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n delete_the_stack:\n action: heat.stacks_delete stack_id=<% $.stack %>\n on-success: wait_for_stack_does_not_exist\n on-error: delete_the_stack_failed\n\n delete_the_stack_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(delete_the_stack).result %>\n\n wait_for_stack_does_not_exist:\n workflow: tripleo.stack.v1.wait_for_stack_does_not_exist stack=<% $.stack %>\n on-success: send_message\n on-error: wait_for_stack_does_not_exist_failed\n\n wait_for_stack_does_not_exist_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(wait_for_stack_does_not_exist).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.scale.v1.delete_stack\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.stack.v1.delete_stack", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:34", "namespace": "", "updated_at": null, "scope": "private", "input": "stack, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "792c67ae-97ba-42cc-921c-d3cd7d36ef7c"}, {"definition": "wait_for_stack_in_progress:\n input:\n - stack\n - timeout: 600 # 10 minutes. 
Should not take much longer for a stack to transition to IN_PROGRESS\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n wait_for_stack_status:\n action: heat.stacks_get stack_id=<% $.stack %>\n timeout: <% $.timeout %>\n retry:\n delay: 15\n count: <% $.timeout / 15 %>\n continue-on: <% task().result.stack_status in ['CREATE_COMPLETE', 'CREATE_FAILED', 'UPDATE_COMPLETE', 'UPDATE_FAILED', 'DELETE_FAILED'] %>\n", "name": "tripleo.stack.v1.wait_for_stack_in_progress", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:34", "namespace": "", "updated_at": null, "scope": "private", "input": "stack, timeout=600", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "92082a20-0ef7-429d-a2ac-1f3ede71bd92"}, {"definition": "wait_for_stack_complete_or_failed:\n input:\n - stack\n - timeout: 14400 # 4 hours. Default timeout of stack deployment\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n wait_for_stack_status:\n action: heat.stacks_get stack_id=<% $.stack %>\n timeout: <% $.timeout %>\n retry:\n delay: 15\n count: <% $.timeout / 15 %>\n continue-on: <% task().result.stack_status in ['CREATE_IN_PROGRESS', 'UPDATE_IN_PROGRESS', 'DELETE_IN_PROGRESS'] %>\n", "name": "tripleo.stack.v1.wait_for_stack_complete_or_failed", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:34", "namespace": "", "updated_at": null, "scope": "private", "input": "stack, timeout=14400", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "e9c76e69-1e50-4ab9-bcf2-852e8e9b3da8"}, {"definition": "run_validations:\n input:\n - validation_names: []\n - plan: overcloud\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n notify_running:\n on-complete: run_validations\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validations\n payload:\n validation_names: <% $.validation_names %>\n plan: <% $.plan %>\n status: RUNNING\n execution: <% 
execution() %>\n\n run_validations:\n on-success: send_message\n on-error: set_status_failed\n workflow: tripleo.validations.v1.run_validation validation_name=<% $.validation %> plan=<% $.plan %> queue_name=<% $.queue_name %>\n with-items: validation in <% $.validation_names %>\n publish:\n status: SUCCESS\n\n set_status_failed:\n on-complete: send_message\n publish:\n status: FAILED\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validations\n payload:\n validation_names: <% $.validation_names %>\n plan: <% $.plan %>\n status: <% $.get('status', 'SUCCESS') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.validations.v1.run_validations", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:35", "namespace": "", "updated_at": null, "scope": "private", "input": "validation_names=[], plan=overcloud, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "16aa5b62-7b49-4b47-a564-5a9203d9a99e"}, {"definition": "check_pre_deployment_validations:\n input:\n - deploy_kernel_name: 'bm-deploy-kernel'\n - deploy_ramdisk_name: 'bm-deploy-ramdisk'\n - roles_info: {}\n - stack_id: overcloud\n - parameters: {}\n - default_role_counts: {}\n - run_validations: true\n - queue_name: tripleo\n\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n flavors: <% $.flavors %>\n statistics: <% $.statistics %>\n tags:\n - tripleo-common-managed\n tasks:\n init_messages:\n on-success: check_boot_images\n publish:\n errors: []\n warnings: []\n\n check_boot_images:\n workflow: check_boot_images\n input:\n deploy_kernel_name: <% $.deploy_kernel_name %>\n deploy_ramdisk_name: <% $.deploy_ramdisk_name %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% 
$.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n kernel_id: <% task().result.get('kernel_id') %>\n ramdisk_id: <% task().result.get('ramdisk_id') %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n kernel_id: <% task().result.get('kernel_id') %>\n ramdisk_id: <% task().result.get('ramdisk_id') %>\n status: FAILED\n on-success: collect_flavors\n on-error: collect_flavors\n\n collect_flavors:\n workflow: collect_flavors\n input:\n roles_info: <% $.roles_info %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n flavors: <% task().result.get('flavors') %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n flavors: <% task().result.get('flavors') %>\n status: FAILED\n on-success: check_ironic_boot_configuration\n on-error: check_ironic_boot_configuration\n\n check_ironic_boot_configuration:\n workflow: check_ironic_boot_configuration\n input:\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n status: FAILED\n on-success: check_default_nodes_count\n on-error: check_default_nodes_count\n\n check_default_nodes_count:\n workflow: check_default_nodes_count\n # ironic-nova sync happens once in two minutes\n retry: count=12 delay=10\n input:\n stack_id: <% $.stack_id %>\n parameters: <% $.parameters 
%>\n default_role_counts: <% $.default_role_counts %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n statistics: <% task().result.get('statistics') %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n statistics: <% task().result.get('statistics') %>\n status: FAILED\n on-success: verify_profiles\n # Do not confuse user with info about profiles if the nodes\n # count is off in the first place. Skip directly to\n # send_message. (bug 1703942)\n on-error: send_message\n\n verify_profiles:\n workflow: verify_profiles\n input:\n flavors: <% $.flavors %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n status: FAILED\n on-success: send_message\n on-error: send_message\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.check_hypervisor_stats\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n flavors: <% $.flavors %>\n statistics: <% $.statistics %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.validations.v1.check_pre_deployment_validations", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:35", "namespace": "", "updated_at": null, "scope": "private", "input": 
"deploy_kernel_name=bm-deploy-kernel, deploy_ramdisk_name=bm-deploy-ramdisk, roles_info={}, stack_id=overcloud, parameters={}, default_role_counts={}, run_validations=True, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "1f708077-8341-4ca6-8606-d01e38852308"}, {"definition": "check_ironic_boot_configuration:\n input:\n - kernel_id: null\n - ramdisk_id: null\n - run_validations: true\n - queue_name: tripleo\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_run_validations:\n on-complete:\n - get_ironic_nodes: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n get_ironic_nodes:\n action: ironic.node_list\n input:\n provision_state: available\n maintenance: false\n detail: true\n on-success: check_node_boot_configuration\n publish:\n nodes: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n check_node_boot_configuration:\n action: tripleo.validations.check_node_boot_configuration\n input:\n node: <% $.node %>\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n with-items: node in <% $.nodes %>\n on-success: send_message\n publish:\n errors: <% task().result.errors.flatten() %>\n warnings: <% task().result.warnings.flatten() %>\n on-error: send_message\n publish-on-error:\n errors: <% task().result.errors.flatten() %>\n warnings: <% task().result.warnings.flatten() %>\n status: FAILED\n message: <% task().result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.check_ironic_boot_configuration\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": 
"tripleo.validations.v1.check_ironic_boot_configuration", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:35", "namespace": "", "updated_at": null, "scope": "private", "input": "kernel_id=None, ramdisk_id=None, run_validations=True, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "24741f05-dd0e-4d75-bbc7-4c1fdceb87a1"}, {"definition": "verify_profiles:\n input:\n - flavors: []\n - run_validations: true\n - queue_name: tripleo\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_run_validations:\n on-complete:\n - get_ironic_nodes: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n get_ironic_nodes:\n action: ironic.node_list\n input:\n maintenance: false\n detail: true\n on-success: verify_profiles\n publish:\n nodes: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n verify_profiles:\n action: tripleo.validations.verify_profiles\n input:\n nodes: <% $.nodes %>\n flavors: <% $.flavors %>\n on-success: send_message\n publish:\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n on-error: send_message\n publish-on-error:\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n status: FAILED\n message: <% task().result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.verify_profiles\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.validations.v1.verify_profiles", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:35", "namespace": "", "updated_at": null, "scope": "private", 
"input": "flavors=[], run_validations=True, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "47d3b416-4898-418f-824c-d5cfcf1f5bf6"}, {"definition": "list_groups:\n tags:\n - tripleo-common-managed\n tasks:\n find_groups:\n action: tripleo.validations.list_groups\n", "name": "tripleo.validations.v1.list_groups", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:35", "namespace": "", "updated_at": null, "scope": "private", "input": "", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "49a4b037-757b-44fd-9273-edb5b430bdee"}, {"definition": "run_groups:\n input:\n - group_names: []\n - plan: overcloud\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n find_validations:\n on-success: notify_running\n action: tripleo.validations.list_validations groups=<% $.group_names %>\n publish:\n validations: <% task().result %>\n\n notify_running:\n on-complete: run_validation_group\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validations\n payload:\n group_names: <% $.group_names %>\n validation_names: <% $.validations.id %>\n plan: <% $.plan %>\n status: RUNNING\n execution: <% execution() %>\n\n run_validation_group:\n on-success: send_message\n on-error: set_status_failed\n workflow: tripleo.validations.v1.run_validation validation_name=<% $.validation %> plan=<% $.plan %> queue_name=<% $.queue_name %>\n with-items: validation in <% $.validations.id %>\n publish:\n status: SUCCESS\n\n set_status_failed:\n on-complete: send_message\n publish:\n status: FAILED\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_groups\n payload:\n group_names: <% $.group_names %>\n validation_names: <% $.validations.id %>\n plan: <% $.plan %>\n status: <% $.get('status', 'SUCCESS') %>\n execution: <% 
execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.validations.v1.run_groups", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:35", "namespace": "", "updated_at": null, "scope": "private", "input": "group_names=[], plan=overcloud, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "4ad24060-b58e-4ca8-bac4-520450f36d01"}, {"definition": "collect_flavors:\n input:\n - roles_info: {}\n - run_validations: true\n - queue_name: tripleo\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n flavors: <% $.flavors %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_run_validations:\n on-complete:\n - check_flavors: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n check_flavors:\n action: tripleo.validations.check_flavors\n input:\n roles_info: <% $.roles_info %>\n on-success: send_message\n publish:\n flavors: <% task().result.flavors %>\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n on-error: send_message\n publish-on-error:\n flavors: {}\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n status: FAILED\n message: <% task().result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.collect_flavors\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n flavors: <% $.flavors %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.validations.v1.collect_flavors", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:35", "namespace": "", "updated_at": null, "scope": "private", "input": "roles_info={}, run_validations=True, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": 
"8f504d2a-124b-4de7-b20f-e66093211655"}, {"definition": "check_boot_images:\n input:\n - deploy_kernel_name: 'bm-deploy-kernel'\n - deploy_ramdisk_name: 'bm-deploy-ramdisk'\n - run_validations: true\n - queue_name: tripleo\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n tags:\n - tripleo-common-managed\n tasks:\n check_run_validations:\n on-complete:\n - get_images: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n get_images:\n action: glance.images_list\n on-success: check_images\n publish:\n images: <% task().result %>\n\n check_images:\n action: tripleo.validations.check_boot_images\n input:\n images: <% $.images %>\n deploy_kernel_name: <% $.deploy_kernel_name %>\n deploy_ramdisk_name: <% $.deploy_ramdisk_name %>\n on-success: send_message\n publish:\n kernel_id: <% task().result.kernel_id %>\n ramdisk_id: <% task().result.ramdisk_id %>\n warnings: <% task().result.warnings %>\n errors: <% task().result.errors %>\n on-error: send_message\n publish-on-error:\n kernel_id: <% task().result.kernel_id %>\n ramdisk_id: <% task().result.ramdisk_id %>\n warnings: <% task().result.warnings %>\n errors: <% task().result.errors %>\n status: FAILED\n message: <% task().result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.check_boot_images\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.validations.v1.check_boot_images", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:35", "namespace": "", "updated_at": null, "scope": "private", "input": 
"deploy_kernel_name=bm-deploy-kernel, deploy_ramdisk_name=bm-deploy-ramdisk, run_validations=True, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "96463f45-e016-4c8e-8f41-605ebd23f870"}, {"definition": "run_validation:\n input:\n - validation_name\n - plan: overcloud\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n notify_running:\n on-complete: run_validation\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validation\n payload:\n validation_name: <% $.validation_name %>\n plan: <% $.plan %>\n status: RUNNING\n execution: <% execution() %>\n\n run_validation:\n on-success: send_message\n on-error: set_status_failed\n action: tripleo.validations.run_validation validation=<% $.validation_name %> plan=<% $.plan %>\n publish:\n status: SUCCESS\n stdout: <% task().result.stdout %>\n stderr: <% task().result.stderr %>\n\n set_status_failed:\n on-complete: send_message\n publish:\n status: FAILED\n stdout: <% task(run_validation).result.stdout %>\n stderr: <% task(run_validation).result.stderr %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validation\n payload:\n validation_name: <% $.validation_name %>\n plan: <% $.plan %>\n status: <% $.get('status', 'SUCCESS') %>\n stdout: <% $.stdout %>\n stderr: <% $.stderr %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.validations.v1.run_validation", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:35", "namespace": "", "updated_at": null, "scope": "private", "input": "validation_name, plan=overcloud, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "b328b02e-d1b8-4d92-94e6-3204a00af592"}, {"definition": "copy_ssh_key:\n input:\n # FIXME: we 
should stop using heat-admin as e.g. split-stack\n # environments (where Nova didn't create overcloud nodes) don't\n # have it present\n - overcloud_admin: heat-admin\n - queue_name: tripleo\n tags:\n - tripleo-common-managed\n tasks:\n get_servers:\n action: nova.servers_list\n on-success: get_pubkey\n publish:\n servers: <% task().result._info %>\n\n get_pubkey:\n action: tripleo.validations.get_pubkey\n on-success: deploy_ssh_key\n publish:\n pubkey: <% task().result %>\n\n deploy_ssh_key:\n workflow: tripleo.deployment.v1.deploy_on_server\n with-items: server in <% $.servers %>\n input:\n server_name: <% $.server.name %>\n server_uuid: <% $.server.id %>\n config: |\n #!/bin/bash\n if ! grep \"<% $.pubkey %>\" /home/<% $.overcloud_admin %>/.ssh/authorized_keys; then\n echo \"<% $.pubkey %>\" >> /home/<% $.overcloud_admin %>/.ssh/authorized_keys\n fi\n config_name: copy_ssh_key\n group: script\n queue_name: <% $.queue_name %>\n", "name": "tripleo.validations.v1.copy_ssh_key", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:35", "namespace": "", "updated_at": null, "scope": "private", "input": "overcloud_admin=heat-admin, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "b942b9b8-16f3-45e7-80a6-2a43b7aeddbc"}, {"definition": "add_validation_ssh_key_parameter:\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n test_validations_enabled:\n action: tripleo.validations.enabled\n on-success: get_pubkey\n on-error: unset_validation_key_parameter\n\n get_pubkey:\n action: tripleo.validations.get_pubkey\n on-success: set_validation_key_parameter\n publish:\n pubkey: <% task().result %>\n\n set_validation_key_parameter:\n action: tripleo.parameters.update\n input:\n parameters:\n node_admin_extra_ssh_keys: <% $.pubkey %>\n container: <% $.container %>\n\n # NOTE(shadower): We need to clear keys from a previous deployment\n unset_validation_key_parameter:\n action: 
tripleo.parameters.update\n input:\n parameters:\n node_admin_extra_ssh_keys: \"\"\n container: <% $.container %>\n", "name": "tripleo.validations.v1.add_validation_ssh_key_parameter", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:35", "namespace": "", "updated_at": null, "scope": "private", "input": "container, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "e3c697db-e418-442b-bd1c-5fbd2ffe4b63"}, {"definition": "list:\n input:\n - group_names: []\n tags:\n - tripleo-common-managed\n tasks:\n find_validations:\n action: tripleo.validations.list_validations groups=<% $.group_names %>\n", "name": "tripleo.validations.v1.list", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:35", "namespace": "", "updated_at": null, "scope": "private", "input": "group_names=[]", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "f52bde5d-0c70-42bb-a6cd-045c4432cd0a"}, {"definition": "check_default_nodes_count:\n input:\n - stack_id: overcloud\n - parameters: {}\n - default_role_counts: {}\n - run_validations: true\n - queue_name: tripleo\n output:\n statistics: <% $.statistics %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_run_validations:\n on-complete:\n - get_hypervisor_statistics: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n get_hypervisor_statistics:\n action: nova.hypervisors_statistics\n on-success: get_stack\n publish:\n statistics: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n errors: []\n warnings: []\n statistics: null\n\n get_stack:\n action: heat.stacks_get\n input:\n stack_id: <% $.stack_id %>\n on-success: get_associated_nodes\n publish:\n stack: <% task().result %>\n on-error: get_associated_nodes\n publish-on-error:\n stack: null\n\n get_associated_nodes:\n action: ironic.node_list\n input:\n associated: true\n on-success: 
get_available_nodes\n publish:\n associated_nodes: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n errors: []\n warnings: []\n\n get_available_nodes:\n action: ironic.node_list\n input:\n provision_state: available\n associated: false\n maintenance: false\n on-success: check_nodes_count\n publish:\n available_nodes: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n errors: []\n warnings: []\n\n check_nodes_count:\n action: tripleo.validations.check_nodes_count\n input:\n statistics: <% $.statistics %>\n stack: <% $.stack %>\n associated_nodes: <% $.associated_nodes %>\n available_nodes: <% $.available_nodes %>\n parameters: <% $.parameters %>\n default_role_counts: <% $.default_role_counts %>\n on-success: send_message\n publish:\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n statistics: null\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.check_hypervisor_stats\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n statistics: <% $.statistics %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.validations.v1.check_default_nodes_count", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:35", "namespace": "", "updated_at": null, "scope": "private", "input": "stack_id=overcloud, parameters={}, default_role_counts={}, run_validations=True, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "fc145568-8eb2-4a1b-b474-0f056e62285b"}, 
{"definition": "get_host_cpus:\n workflow: tripleo.derive_params_formulas.v1.get_host_cpus\n input:\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n publish:\n host_cpus: <% task().result.get('host_cpus', '') %>\n on-success: get_sock_mem\n on-error: set_status_failed_get_host_cpus\n", "name": "tripleo.derive_params_formulas.v1.get_host_cpus", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:37", "namespace": "", "updated_at": null, "scope": "private", "input": "hw_data", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "256a4fa7-3b27-4dd7-b73b-c36dc0e3d689"}, {"definition": "dpdk_derive_params:\n description: >\n Workflow to derive parameters for DPDK service.\n input:\n - plan\n - role_name\n - hw_data # introspection data\n - user_inputs\n - derived_parameters: {}\n\n output:\n derived_parameters: <% $.derived_parameters.mergeWith($.get('dpdk_parameters', {})) %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_network_config:\n action: tripleo.parameters.get_network_config\n input:\n container: <% $.plan %>\n role_name: <% $.role_name %>\n publish:\n network_configs: <% task().result.get('network_config', []) %>\n on-success: get_dpdk_nics_numa_info\n on-error: set_status_failed_get_network_config\n\n get_dpdk_nics_numa_info:\n action: tripleo.derive_params.get_dpdk_nics_numa_info\n input:\n network_configs: <% $.network_configs %>\n inspect_data: <% $.hw_data %>\n publish:\n dpdk_nics_numa_info: <% task().result %>\n on-success:\n # TODO: Need to remove condtions here\n # adding condition and throw error in action for empty check\n - get_dpdk_nics_numa_nodes: <% $.dpdk_nics_numa_info %>\n - set_status_failed_get_dpdk_nics_numa_info: <% not $.dpdk_nics_numa_info %>\n on-error: set_status_failed_on_error_get_dpdk_nics_numa_info\n\n get_dpdk_nics_numa_nodes:\n publish:\n dpdk_nics_numa_nodes: <% $.dpdk_nics_numa_info.groupBy($.numa_node).select($[0]).orderBy($) %>\n on-success:\n - get_numa_nodes: <% 
$.dpdk_nics_numa_nodes %>\n - set_status_failed_get_dpdk_nics_numa_nodes: <% not $.dpdk_nics_numa_nodes %>\n\n get_numa_nodes:\n publish:\n numa_nodes: <% $.hw_data.numa_topology.ram.select($.numa_node).orderBy($) %>\n on-success:\n - get_num_phy_cores_per_numa_for_pmd: <% $.numa_nodes %>\n - set_status_failed_get_numa_nodes: <% not $.numa_nodes %>\n\n get_num_phy_cores_per_numa_for_pmd:\n publish:\n num_phy_cores_per_numa_node_for_pmd: <% $.user_inputs.get('num_phy_cores_per_numa_node_for_pmd', 0) %>\n on-success:\n - get_num_cores_per_numa_nodes: <% isInteger($.num_phy_cores_per_numa_node_for_pmd) and $.num_phy_cores_per_numa_node_for_pmd > 0 %>\n - set_status_failed_get_num_phy_cores_per_numa_for_pmd_invalid: <% not isInteger($.num_phy_cores_per_numa_node_for_pmd) %>\n - set_status_failed_get_num_phy_cores_per_numa_for_pmd_not_provided: <% $.num_phy_cores_per_numa_node_for_pmd = 0 %>\n\n # For NUMA node with DPDK nic, number of cores should be used from user input\n # For NUMA node without DPDK nic, number of cores should be 1\n get_num_cores_per_numa_nodes:\n publish:\n num_cores_per_numa_nodes: <% let(dpdk_nics_nodes => $.dpdk_nics_numa_nodes, cores => $.num_phy_cores_per_numa_node_for_pmd) -> $.numa_nodes.select(switch($ in $dpdk_nics_nodes => $cores, not $ in $dpdk_nics_nodes => 1)) %>\n on-success: get_pmd_cpus\n\n get_pmd_cpus:\n action: tripleo.derive_params.get_dpdk_core_list\n input:\n inspect_data: <% $.hw_data %>\n numa_nodes_cores_count: <% $.num_cores_per_numa_nodes %>\n publish:\n pmd_cpus: <% task().result %>\n on-success:\n - get_pmd_cpus_range_list: <% $.pmd_cpus %>\n - set_status_failed_get_pmd_cpus: <% not $.pmd_cpus %>\n on-error: set_status_failed_on_error_get_pmd_cpus\n\n get_pmd_cpus_range_list:\n action: tripleo.derive_params.convert_number_to_range_list\n input:\n num_list: <% $.pmd_cpus %>\n publish:\n pmd_cpus: <% task().result %>\n on-success: get_host_cpus\n on-error: set_status_failed_get_pmd_cpus_range_list\n\n get_host_cpus:\n 
workflow: tripleo.derive_params_formulas.v1.get_host_cpus\n input:\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n publish:\n host_cpus: <% task().result.get('host_cpus', '') %>\n on-success: get_sock_mem\n on-error: set_status_failed_get_host_cpus\n\n get_sock_mem:\n action: tripleo.derive_params.get_dpdk_socket_memory\n input:\n dpdk_nics_numa_info: <% $.dpdk_nics_numa_info %>\n numa_nodes: <% $.numa_nodes %>\n overhead: <% $.user_inputs.get('overhead', 800) %>\n packet_size_in_buffer: <% 4096*64 %>\n publish:\n sock_mem: <% task().result %>\n on-success:\n - get_dpdk_parameters: <% $.sock_mem %>\n - set_status_failed_get_sock_mem: <% not $.sock_mem %>\n on-error: set_status_failed_on_error_get_sock_mem\n\n get_dpdk_parameters:\n publish:\n dpdk_parameters: <% dict(concat($.role_name, 'Parameters') => dict('OvsPmdCoreList' => $.get('pmd_cpus', ''), 'OvsDpdkCoreList' => $.get('host_cpus', ''), 'OvsDpdkSocketMemory' => $.get('sock_mem', ''))) %>\n\n set_status_failed_get_network_config:\n publish:\n status: FAILED\n message: <% task(get_network_config).result %>\n on-success: fail\n\n set_status_failed_get_dpdk_nics_numa_info:\n publish:\n status: FAILED\n message: \"Unable to determine DPDK NIC's NUMA information\"\n on-success: fail\n\n set_status_failed_on_error_get_dpdk_nics_numa_info:\n publish:\n status: FAILED\n message: <% task(get_dpdk_nics_numa_info).result %>\n on-success: fail\n\n set_status_failed_get_dpdk_nics_numa_nodes:\n publish:\n status: FAILED\n message: \"Unable to determine DPDK NIC's numa nodes\"\n on-success: fail\n\n set_status_failed_get_numa_nodes:\n publish:\n status: FAILED\n message: 'Unable to determine available NUMA nodes'\n on-success: fail\n\n set_status_failed_get_num_phy_cores_per_numa_for_pmd_invalid:\n publish:\n status: FAILED\n message: <% \"num_phy_cores_per_numa_node_for_pmd user input '{0}' is invalid\".format($.num_phy_cores_per_numa_node_for_pmd) %>\n on-success: fail\n\n 
set_status_failed_get_num_phy_cores_per_numa_for_pmd_not_provided:\n publish:\n status: FAILED\n message: 'num_phy_cores_per_numa_node_for_pmd user input is not provided'\n on-success: fail\n\n set_status_failed_get_pmd_cpus:\n publish:\n status: FAILED\n message: 'Unable to determine OvsPmdCoreList parameter'\n on-success: fail\n\n set_status_failed_on_error_get_pmd_cpus:\n publish:\n status: FAILED\n message: <% task(get_pmd_cpus).result %>\n on-success: fail\n\n set_status_failed_get_pmd_cpus_range_list:\n publish:\n status: FAILED\n message: <% task(get_pmd_cpus_range_list).result %>\n on-success: fail\n\n set_status_failed_get_host_cpus:\n publish:\n status: FAILED\n message: <% task(get_host_cpus).result.get('message', '') %>\n on-success: fail\n\n set_status_failed_get_sock_mem:\n publish:\n status: FAILED\n message: 'Unable to determine OvsDpdkSocketMemory parameter'\n on-success: fail\n\n set_status_failed_on_error_get_sock_mem:\n publish:\n status: FAILED\n message: <% task(get_sock_mem).result %>\n on-success: fail\n", "name": "tripleo.derive_params_formulas.v1.dpdk_derive_params", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:37", "namespace": "", "updated_at": null, "scope": "private", "input": "plan, role_name, hw_data, user_inputs, derived_parameters={}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "4baf8005-bd52-41f2-b9b4-95d807b339f0"}, {"definition": "hci_derive_params:\n description: Derive the deployment parameters for HCI\n input:\n - role_name\n - environment_parameters\n - heat_resource_tree\n - introspection_data\n - user_inputs\n - derived_parameters: {}\n\n output:\n derived_parameters: <% $.derived_parameters.mergeWith($.get('hci_parameters', {})) %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_hci_inputs:\n publish:\n hci_profile: <% $.user_inputs.get('hci_profile', '') %>\n hci_profile_config: <% $.user_inputs.get('hci_profile_config', {}) %>\n MB_PER_GB: 1024\n on-success:\n - 
get_average_guest_memory_size_in_mb: <% $.hci_profile and $.hci_profile_config.get($.hci_profile, {}) %>\n - set_failed_invalid_hci_profile: <% $.hci_profile and not $.hci_profile_config.get($.hci_profile, {}) %>\n # When no hci_profile is specified, the workflow terminates without deriving any HCI parameters.\n\n get_average_guest_memory_size_in_mb:\n publish:\n average_guest_memory_size_in_mb: <% $.hci_profile_config.get($.hci_profile, {}).get('average_guest_memory_size_in_mb', 0) %>\n on-success:\n - get_average_guest_cpu_utilization_percentage: <% isInteger($.average_guest_memory_size_in_mb) %>\n - set_failed_invalid_average_guest_memory_size_in_mb: <% not isInteger($.average_guest_memory_size_in_mb) %>\n\n get_average_guest_cpu_utilization_percentage:\n publish:\n average_guest_cpu_utilization_percentage: <% $.hci_profile_config.get($.hci_profile, {}).get('average_guest_cpu_utilization_percentage', 0) %>\n on-success:\n - get_gb_overhead_per_guest: <% isInteger($.average_guest_cpu_utilization_percentage) %>\n - set_failed_invalid_average_guest_cpu_utilization_percentage: <% not isInteger($.average_guest_cpu_utilization_percentage) %>\n\n get_gb_overhead_per_guest:\n publish:\n gb_overhead_per_guest: <% $.user_inputs.get('gb_overhead_per_guest', 0.5) %>\n on-success:\n - get_gb_per_osd: <% isNumber($.gb_overhead_per_guest) %>\n - set_failed_invalid_gb_overhead_per_guest: <% not isNumber($.gb_overhead_per_guest) %>\n\n get_gb_per_osd:\n publish:\n gb_per_osd: <% $.user_inputs.get('gb_per_osd', 3) %>\n on-success:\n - get_cores_per_osd: <% isNumber($.gb_per_osd) %>\n - set_failed_invalid_gb_per_osd: <% not isNumber($.gb_per_osd) %>\n\n get_cores_per_osd:\n publish:\n cores_per_osd: <% $.user_inputs.get('cores_per_osd', 1.0) %>\n on-success:\n - get_extra_configs: <% isNumber($.cores_per_osd) %>\n - set_failed_invalid_cores_per_osd: <% not isNumber($.cores_per_osd) %>\n\n get_extra_configs:\n publish:\n extra_config: <% $.environment_parameters.get('ExtraConfig', 
{}) %>\n role_extra_config: <% $.environment_parameters.get(concat($.role_name, 'ExtraConfig'), {}) %>\n role_env_params: <% $.environment_parameters.get(concat($.role_name, 'Parameters'), {}) %>\n role_derive_params: <% $.derived_parameters.get(concat($.role_name, 'Parameters'), {}) %>\n on-success: get_num_osds\n\n get_num_osds:\n publish:\n num_osds: <% $.heat_resource_tree.parameters.get('CephAnsibleDisksConfig', {}).get('default', {}).get('devices', []).count() %>\n on-success:\n - get_memory_mb: <% $.num_osds %>\n # If there's no CephAnsibleDisksConfig then look for OSD configuration in hiera data\n - get_num_osds_from_hiera: <% not $.num_osds %>\n\n get_num_osds_from_hiera:\n publish:\n num_osds: <% $.role_extra_config.get('ceph::profile::params::osds', $.extra_config.get('ceph::profile::params::osds', {})).keys().count() %>\n on-success:\n - get_memory_mb: <% $.num_osds %>\n - set_failed_no_osds: <% not $.num_osds %>\n\n get_memory_mb:\n publish:\n memory_mb: <% $.introspection_data.get('memory_mb', 0) %>\n on-success:\n - get_nova_vcpu_pin_set: <% $.memory_mb %>\n - set_failed_get_memory_mb: <% not $.memory_mb %>\n\n # Determine the number of CPU cores available to Nova and Ceph. 
If\n # NovaVcpuPinSet is defined then use the number of vCPUs in the set,\n # otherwise use all of the cores identified in the introspection data.\n\n get_nova_vcpu_pin_set:\n publish:\n # NovaVcpuPinSet can be defined in multiple locations, and it's\n # important to select the value in order of precedence:\n #\n # 1) User specified value for this role\n # 2) User specified default value for all roles\n # 3) Value derived by another derived parameters workflow\n nova_vcpu_pin_set: <% $.role_env_params.get('NovaVcpuPinSet', $.environment_parameters.get('NovaVcpuPinSet', $.role_derive_params.get('NovaVcpuPinSet', ''))) %>\n on-success:\n - get_nova_vcpu_count: <% $.nova_vcpu_pin_set %>\n - get_num_cores: <% not $.nova_vcpu_pin_set %>\n\n get_nova_vcpu_count:\n action: tripleo.derive_params.convert_range_to_number_list\n input:\n range_list: <% $.nova_vcpu_pin_set %>\n publish:\n num_cores: <% task().result.split(',').count() %>\n on-success: calculate_nova_parameters\n on-error: set_failed_get_nova_vcpu_count\n\n get_num_cores:\n publish:\n num_cores: <% $.introspection_data.get('cpus', 0) %>\n on-success:\n - calculate_nova_parameters: <% $.num_cores %>\n - set_failed_get_num_cores: <% not $.num_cores %>\n\n # HCI calculations are broken into multiple steps. This is necessary\n # because variables published by a Mistral task are not available\n # for use by that same task. 
Variables computed and published in a task\n # are only available in subsequent tasks.\n #\n # The HCI calculations compute two Nova parameters:\n # - reserved_host_memory\n # - cpu_allocation_ratio\n #\n # The reserved_host_memory calculation computes the amount of memory\n # that needs to be reserved for Ceph and the total amount of \"guest\n # overhead\" memory that is based on the anticipated number of guests.\n # Psuedo-code for the calculation (disregarding MB and GB units) is\n # as follows:\n #\n # ceph_memory = mem_per_osd * num_osds\n # nova_memory = total_memory - ceph_memory\n # num_guests = nova_memory /\n # (average_guest_memory_size + overhead_per_guest)\n # reserved_memory = ceph_memory + (num_guests * overhead_per_guest)\n #\n # The cpu_allocation_ratio calculation is similar in that it takes into\n # account the number of cores that must be reserved for Ceph.\n #\n # ceph_cores = cores_per_osd * num_osds\n # guest_cores = num_cores - ceph_cores\n # guest_vcpus = guest_cores / average_guest_utilization\n # cpu_allocation_ratio = guest_vcpus / num_cores\n\n calculate_nova_parameters:\n publish:\n avg_guest_util: <% $.average_guest_cpu_utilization_percentage / 100.0 %>\n avg_guest_size_gb: <% $.average_guest_memory_size_in_mb / float($.MB_PER_GB) %>\n memory_gb: <% $.memory_mb / float($.MB_PER_GB) %>\n ceph_mem_gb: <% $.gb_per_osd * $.num_osds %>\n nonceph_cores: <% $.num_cores - int($.cores_per_osd * $.num_osds) %>\n on-success: calc_step_2\n\n calc_step_2:\n publish:\n num_guests: <% int(($.memory_gb - $.ceph_mem_gb) / ($.avg_guest_size_gb + $.gb_overhead_per_guest)) %>\n guest_vcpus: <% $.nonceph_cores / $.avg_guest_util %>\n on-success: calc_step_3\n\n calc_step_3:\n publish:\n reserved_host_memory: <% $.MB_PER_GB * int($.ceph_mem_gb + ($.num_guests * $.gb_overhead_per_guest)) %>\n cpu_allocation_ratio: <% $.guest_vcpus / $.num_cores %>\n on-success: validate_results\n\n validate_results:\n publish:\n # Verify whether HCI is viable:\n # - At 
least 80% of the memory is reserved for Ceph and guest overhead\n # - At least half of the CPU cores must be available to Nova\n mem_ok: <% $.reserved_host_memory <= ($.memory_mb * 0.8) %>\n cpu_ok: <% $.cpu_allocation_ratio >= 0.5 %>\n on-success:\n - set_failed_insufficient_mem: <% not $.mem_ok %>\n - set_failed_insufficient_cpu: <% not $.cpu_ok %>\n - publish_hci_parameters: <% $.mem_ok and $.cpu_ok %>\n\n publish_hci_parameters:\n publish:\n # TODO(abishop): Update this when the cpu_allocation_ratio can be set\n # via a THT parameter (no such parameter currently exists). Until a\n # THT parameter exists, use hiera data to set the cpu_allocation_ratio.\n hci_parameters: <% dict(concat($.role_name, 'Parameters') => dict('NovaReservedHostMemory' => $.reserved_host_memory)) + dict(concat($.role_name, 'ExtraConfig') => dict('nova::cpu_allocation_ratio' => $.cpu_allocation_ratio)) %>\n\n set_failed_invalid_hci_profile:\n publish:\n message: \"'<% $.hci_profile %>' is not a valid HCI profile.\"\n on-success: fail\n\n set_failed_invalid_average_guest_memory_size_in_mb:\n publish:\n message: \"'<% $.average_guest_memory_size_in_mb %>' is not a valid average_guest_memory_size_in_mb value.\"\n on-success: fail\n\n set_failed_invalid_gb_overhead_per_guest:\n publish:\n message: \"'<% $.gb_overhead_per_guest %>' is not a valid gb_overhead_per_guest value.\"\n on-success: fail\n\n set_failed_invalid_gb_per_osd:\n publish:\n message: \"'<% $.gb_per_osd %>' is not a valid gb_per_osd value.\"\n on-success: fail\n\n set_failed_invalid_cores_per_osd:\n publish:\n message: \"'<% $.cores_per_osd %>' is not a valid cores_per_osd value.\"\n on-success: fail\n\n set_failed_invalid_average_guest_cpu_utilization_percentage:\n publish:\n message: \"'<% $.average_guest_cpu_utilization_percentage %>' is not a valid average_guest_cpu_utilization_percentage value.\"\n on-success: fail\n\n set_failed_no_osds:\n publish:\n message: \"No Ceph OSDs found in the overcloud definition 
('ceph::profile::params::osds').\"\n on-success: fail\n\n set_failed_get_memory_mb:\n publish:\n message: \"Unable to determine the amount of physical memory (no 'memory_mb' found in introspection_data).\"\n on-success: fail\n\n set_failed_get_nova_vcpu_count:\n publish:\n message: <% task(get_nova_vcpu_count).result %>\n on-success: fail\n\n set_failed_get_num_cores:\n publish:\n message: \"Unable to determine the number of CPU cores (no 'cpus' found in introspection_data).\"\n on-success: fail\n\n set_failed_insufficient_mem:\n publish:\n message: \"<% $.memory_mb %> MB is not enough memory to run hyperconverged.\"\n on-success: fail\n\n set_failed_insufficient_cpu:\n publish:\n message: \"<% $.num_cores %> CPU cores are not enough to run hyperconverged.\"\n on-success: fail\n", "name": "tripleo.derive_params_formulas.v1.hci_derive_params", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:37", "namespace": "", "updated_at": null, "scope": "private", "input": "role_name, environment_parameters, heat_resource_tree, introspection_data, user_inputs, derived_parameters={}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "a914aeca-aefd-4b8f-82b7-016118f8fe66"}, {"definition": "sriov_derive_params:\n description: >\n This workflow derives parameters for the SRIOV feature.\n\n input:\n - role_name\n - hw_data # introspection data\n - derived_parameters: {}\n\n output:\n derived_parameters: <% $.derived_parameters.mergeWith($.get('sriov_parameters', {})) %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_host_cpus:\n workflow: tripleo.derive_params_formulas.v1.get_host_cpus\n input:\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n publish:\n host_cpus: <% task().result.get('host_cpus', '') %>\n on-success: get_sriov_parameters\n on-error: set_status_failed_get_host_cpus\n\n get_sriov_parameters:\n publish:\n # SriovHostCpusList parameter is added temporarily and it's removed later from derived parameters result.\n 
sriov_parameters: <% dict(concat($.role_name, 'Parameters') => dict('SriovHostCpusList' => $.get('host_cpus', ''))) %>\n\n set_status_failed_get_host_cpus:\n publish:\n status: FAILED\n message: <% task(get_host_cpus).result.get('message', '') %>\n on-success: fail\n", "name": "tripleo.derive_params_formulas.v1.sriov_derive_params", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:37", "namespace": "", "updated_at": null, "scope": "private", "input": "role_name, hw_data, derived_parameters={}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "d247bbb8-afaa-4fdd-a791-804194a99614"}, {"definition": "host_derive_params:\n description: >\n This workflow derives parameters for the Host process, and is mainly associated with CPU pinning and huge memory pages.\n This workflow can be dependent on any feature or also can be invoked individually as well.\n\n input:\n - role_name\n - hw_data # introspection data\n - user_inputs\n - derived_parameters: {}\n\n output:\n derived_parameters: <% $.derived_parameters.mergeWith($.get('host_parameters', {})) %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_cpus:\n publish:\n cpus: <% $.hw_data.numa_topology.cpus %>\n on-success:\n - get_role_derive_params: <% $.cpus %>\n - set_status_failed_get_cpus: <% not $.cpus %>\n\n get_role_derive_params:\n publish:\n role_derive_params: <% $.derived_parameters.get(concat($.role_name, 'Parameters'), {}) %>\n # removing the role parameters (eg. 
ComputeParameters) in derived_parameters dictionary since already copied in role_derive_params.\n derived_parameters: <% $.derived_parameters.delete(concat($.role_name, 'Parameters')) %>\n on-success: get_host_cpus\n\n get_host_cpus:\n publish:\n host_cpus: <% $.role_derive_params.get('OvsDpdkCoreList', '') or $.role_derive_params.get('SriovHostCpusList', '') %>\n # SriovHostCpusList parameter is added temporarily for host_cpus and not needed in derived_parameters result.\n # SriovHostCpusList parameter is deleted in derived_parameters list and adding the updated role parameters\n # back in the derived_parameters.\n derived_parameters: <% $.derived_parameters + dict(concat($.role_name, 'Parameters') => $.role_derive_params.delete('SriovHostCpusList')) %>\n on-success: get_host_dpdk_combined_cpus\n\n get_host_dpdk_combined_cpus:\n publish:\n host_dpdk_combined_cpus: <% let(pmd_cpus => $.role_derive_params.get('OvsPmdCoreList', '')) -> switch($pmd_cpus => concat($pmd_cpus, ',', $.host_cpus), not $pmd_cpus => $.host_cpus) %>\n reserved_cpus: []\n on-success:\n - get_host_dpdk_combined_cpus_num_list: <% $.host_dpdk_combined_cpus %>\n - set_status_failed_get_host_dpdk_combined_cpus: <% not $.host_dpdk_combined_cpus %>\n\n get_host_dpdk_combined_cpus_num_list:\n action: tripleo.derive_params.convert_range_to_number_list\n input:\n range_list: <% $.host_dpdk_combined_cpus %>\n publish:\n host_dpdk_combined_cpus: <% task().result %>\n reserved_cpus: <% task().result.split(',') %>\n on-success: get_nova_cpus\n on-error: set_status_failed_get_host_dpdk_combined_cpus_num_list\n\n get_nova_cpus:\n publish:\n nova_cpus: <% let(reserved_cpus => $.reserved_cpus) -> $.cpus.select($.thread_siblings).flatten().where(not (str($) in $reserved_cpus)).join(',') %>\n on-success:\n - get_isol_cpus: <% $.nova_cpus %>\n - set_status_failed_get_nova_cpus: <% not $.nova_cpus %>\n\n # concatinates OvsPmdCoreList range format and NovaVcpuPinSet in range format. 
it may not be in perfect range format.\n # example: concatinates '12-15,19' and 16-18' ranges '12-15,19,16-18'\n get_isol_cpus:\n publish:\n isol_cpus: <% let(pmd_cpus => $.role_derive_params.get('OvsPmdCoreList','')) -> switch($pmd_cpus => concat($pmd_cpus, ',', $.nova_cpus), not $pmd_cpus => $.nova_cpus) %>\n on-success: get_isol_cpus_num_list\n\n # Gets the isol_cpus in the number list\n # example: '12-15,19,16-18' into '12,13,14,15,16,17,18,19'\n get_isol_cpus_num_list:\n action: tripleo.derive_params.convert_range_to_number_list\n input:\n range_list: <% $.isol_cpus %>\n publish:\n isol_cpus: <% task().result %>\n on-success: get_nova_cpus_range_list\n on-error: set_status_failed_get_isol_cpus_num_list\n\n get_nova_cpus_range_list:\n action: tripleo.derive_params.convert_number_to_range_list\n input:\n num_list: <% $.nova_cpus %>\n publish:\n nova_cpus: <% task().result %>\n on-success: get_isol_cpus_range_list\n on-error: set_status_failed_get_nova_cpus_range_list\n\n # converts number format isol_cpus into range format\n # example: '12,13,14,15,16,17,18,19' into '12-19'\n get_isol_cpus_range_list:\n action: tripleo.derive_params.convert_number_to_range_list\n input:\n num_list: <% $.isol_cpus %>\n publish:\n isol_cpus: <% task().result %>\n on-success: get_host_mem\n on-error: set_status_failed_get_isol_cpus_range_list\n\n get_host_mem:\n publish:\n host_mem: <% $.user_inputs.get('host_mem_default', 4096) %>\n on-success: check_default_hugepage_supported\n\n check_default_hugepage_supported:\n publish:\n default_hugepage_supported: <% $.hw_data.get('inventory', {}).get('cpu', {}).get('flags', []).contains('pdpe1gb') %>\n on-success:\n - get_total_memory: <% $.default_hugepage_supported %>\n - set_status_failed_check_default_hugepage_supported: <% not $.default_hugepage_supported %>\n\n get_total_memory:\n publish:\n total_memory: <% $.hw_data.get('inventory', {}).get('memory', {}).get('physical_mb', 0) %>\n on-success:\n - get_hugepage_allocation_percentage: 
<% $.total_memory %>\n - set_status_failed_get_total_memory: <% not $.total_memory %>\n\n get_hugepage_allocation_percentage:\n publish:\n huge_page_allocation_percentage: <% $.user_inputs.get('huge_page_allocation_percentage', 0) %>\n on-success:\n - get_hugepages: <% isInteger($.huge_page_allocation_percentage) and $.huge_page_allocation_percentage > 0 %>\n - set_status_failed_get_hugepage_allocation_percentage_invalid: <% not isInteger($.huge_page_allocation_percentage) %>\n - set_status_failed_get_hugepage_allocation_percentage_not_provided: <% $.huge_page_allocation_percentage = 0 %>\n\n get_hugepages:\n publish:\n hugepages: <% let(huge_page_perc => float($.huge_page_allocation_percentage)/100)-> int((($.total_memory/1024)-4) * $huge_page_perc) %>\n on-success:\n - get_cpu_model: <% $.hugepages %>\n - set_status_failed_get_hugepages: <% not $.hugepages %>\n\n get_cpu_model:\n publish:\n intel_cpu_model: <% $.hw_data.get('inventory', {}).get('cpu', {}).get('model_name', '').startsWith('Intel') %>\n on-success: get_iommu_info\n\n get_iommu_info:\n publish:\n iommu_info: <% switch($.intel_cpu_model => 'intel_iommu=on iommu=pt', not $.intel_cpu_model => '') %>\n on-success: get_kernel_args\n\n get_kernel_args:\n publish:\n kernel_args: <% concat('default_hugepagesz=1GB hugepagesz=1G ', 'hugepages=', str($.hugepages), ' ', $.iommu_info, ' isolcpus=', $.isol_cpus) %>\n on-success: get_host_parameters\n\n get_host_parameters:\n publish:\n host_parameters: <% dict(concat($.role_name, 'Parameters') => dict('NovaVcpuPinSet' => $.get('nova_cpus', ''), 'NovaReservedHostMemory' => $.get('host_mem', ''), 'KernelArgs' => $.get('kernel_args', ''), 'IsolCpusList' => $.get('isol_cpus', ''))) %>\n\n set_status_failed_get_cpus:\n publish:\n status: FAILED\n message: \"Unable to determine CPU's on NUMA nodes\"\n on-success: fail\n\n set_status_failed_get_host_dpdk_combined_cpus:\n publish:\n status: FAILED\n message: 'Unable to combine host and dpdk cpus list'\n on-success: 
fail\n\n set_status_failed_get_host_dpdk_combined_cpus_num_list:\n publish:\n status: FAILED\n message: <% task(get_host_dpdk_combined_cpus_num_list).result %>\n on-success: fail\n\n set_status_failed_get_nova_cpus:\n publish:\n status: FAILED\n message: 'Unable to determine nova vcpu pin set'\n on-success: fail\n\n set_status_failed_get_nova_cpus_range_list:\n publish:\n status: FAILED\n message: <% task(get_nova_cpus_range_list).result %>\n on-success: fail\n\n set_status_failed_get_isol_cpus_num_list:\n publish:\n status: FAILED\n message: <% task(get_isol_cpus_num_list).result %>\n on-success: fail\n\n set_status_failed_get_isol_cpus_range_list:\n publish:\n status: FAILED\n message: <% task(get_isol_cpus_range_list).result %>\n on-success: fail\n\n set_status_failed_check_default_hugepage_supported:\n publish:\n status: FAILED\n message: 'default huge page size 1GB is not supported'\n on-success: fail\n\n set_status_failed_get_total_memory:\n publish:\n status: FAILED\n message: 'Unable to determine total memory'\n on-success: fail\n\n set_status_failed_get_hugepage_allocation_percentage_invalid:\n publish:\n status: FAILED\n message: <% \"huge_page_allocation_percentage user input '{0}' is invalid\".format($.huge_page_allocation_percentage) %>\n on-success: fail\n\n set_status_failed_get_hugepage_allocation_percentage_not_provided:\n publish:\n status: FAILED\n message: 'huge_page_allocation_percentage user input is not provided'\n on-success: fail\n\n set_status_failed_get_hugepages:\n publish:\n status: FAILED\n message: 'Unable to determine huge pages'\n on-success: fail\n", "name": "tripleo.derive_params_formulas.v1.host_derive_params", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:37", "namespace": "", "updated_at": null, "scope": "private", "input": "role_name, hw_data, user_inputs, derived_parameters={}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "fa7da8d7-8e80-4cfa-95c1-dbd632eabe3f"}, {"definition": 
"validate_networks:\n with-items: network in <% $.network_names_lower.concat($.network_names) %>\n action: swift.head_object\n input:\n container: <% $.container %>\n obj: network/<% $.network.toLower() %>.yaml\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n", "name": "tripleo.plan_management.v1.validate_networks", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "container=overcloud, network_data_file=network_data.yaml, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "05f80678-9449-4c8b-bf6e-e333bea65bdd"}, {"definition": "update_deployment_plan:\n input:\n - container\n - source_url: null\n - queue_name: tripleo\n - generate_passwords: true\n - plan_environment: null\n tags:\n - tripleo-common-managed\n tasks:\n templates_source_check:\n on-success:\n - update_plan: <% $.source_url = null %>\n - clone_git_repo: <% $.source_url != null %>\n\n clone_git_repo:\n action: tripleo.git.clone container=<% $.container %> url=<% $.source_url %>\n on-success: upload_templates_directory\n on-error: clone_git_repo_set_status_failed\n\n upload_templates_directory:\n action: tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %>\n on-success: create_swift_rings_backup_plan\n on-complete: cleanup_temporary_files\n on-error: upload_templates_directory_set_status_failed\n\n cleanup_temporary_files:\n action: tripleo.git.clean container=<% $.container %>\n\n create_swift_rings_backup_plan:\n workflow: tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan\n on-success: update_plan\n on-error: create_swift_rings_backup_plan_set_status_failed\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n use_default_templates: true\n\n update_plan:\n on-success:\n - 
ensure_passwords_exist: <% $.generate_passwords = true %>\n - container_images_prepare: <% $.generate_passwords != true %>\n\n ensure_passwords_exist:\n action: tripleo.parameters.generate_passwords container=<% $.container %>\n on-success: container_images_prepare\n on-error: ensure_passwords_exist_set_status_failed\n\n container_images_prepare:\n description: >\n Populate all container image parameters with default values.\n action: tripleo.container_images.prepare container=<% $.container %>\n on-success: process_templates\n on-error: container_images_prepare_set_status_failed\n\n process_templates:\n action: tripleo.templates.process container=<% $.container %>\n on-success:\n - set_status_success: <% $.plan_environment = null %>\n - upload_plan_environment: <% $.plan_environment != null %>\n on-error: process_templates_set_status_failed\n\n upload_plan_environment:\n action: tripleo.templates.upload_plan_environment container=<% $.container %> plan_environment=<% $.plan_environment %>\n on-success: set_status_success\n on-error: process_templates_set_status_failed\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: 'Plan updated.'\n\n create_swift_rings_backup_plan_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_swift_rings_backup_plan).result %>\n\n clone_git_repo_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(clone_git_repo).result %>\n\n upload_templates_directory_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(upload_templates_directory).result %>\n\n process_templates_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(process_templates).result %>\n\n ensure_passwords_exist_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(ensure_passwords_exist).result %>\n\n 
container_images_prepare_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(container_images_prepare).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.update_deployment_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.plan_management.v1.update_deployment_plan", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "container, source_url=None, queue_name=tripleo, generate_passwords=True, plan_environment=None", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "064821f9-e70c-42c2-a59d-a84f38bfa7ff"}, {"definition": "_validate_networks_from_roles:\n description: Internal workflow for validating a network exists from a role\n\n input:\n - container: overcloud\n - defined_networks\n - networks_in_roles\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n validate_network_in_network_data:\n publish:\n networks_found: <% $.networks_in_roles.toSet().intersect($.defined_networks.toSet()) %>\n networks_not_found: <% $.networks_in_roles.toSet().difference($.defined_networks.toSet()) %>\n on-success:\n - network_not_found: <% $.networks_not_found %>\n - notify_zaqar: <% not $.networks_not_found %>\n\n network_not_found:\n publish:\n message: <% \"Some networks in roles are not defined, {0}\".format($.networks_not_found.join(', ')) %>\n status: FAILED\n on-success: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1._validate_networks_from_role\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% 
execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.plan_management.v1._validate_networks_from_roles", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "container=overcloud, defined_networks, networks_in_roles, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "0d64c4dd-b43b-4671-b6f9-011a3e5cdb44"}, {"definition": "list_available_roles:\n input:\n - container: overcloud\n - queue_name: tripleo\n\n output:\n available_roles: <% $.available_roles %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_role_file_names:\n action: swift.get_container\n input:\n container: <% $.container %>\n publish:\n role_names: <% task().result[1].where($.name.startsWith('roles/')).where($.name.endsWith('.yaml')).name %>\n on-success: get_role_files\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_role_files:\n with-items: role_name in <% $.role_names %>\n action: swift.get_object\n on-success: transform_output\n on-error: notify_zaqar\n input:\n container: <% $.container %>\n obj: <% $.role_name %>\n publish:\n status: SUCCESS\n available_yaml_roles: <% task().result.select($[1]) %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n transform_output:\n publish:\n status: SUCCESS\n available_roles: <% yaml_parse($.available_yaml_roles.join(\"\\n\")) %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-complete: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.list_available_roles\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n available_roles: <% $.get('available_roles', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": 
"tripleo.plan_management.v1.list_available_roles", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "container=overcloud, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "297e8fdd-c389-4a45-8a3e-7f414a01556b"}, {"definition": "list_roles:\n description: Retrieve the roles_data.yaml and return a usable object\n\n input:\n - container: overcloud\n - roles_data_file: 'roles_data.yaml'\n - queue_name: tripleo\n\n output:\n roles_data: <% $.roles_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_roles_data:\n action: swift.get_object\n input:\n container: <% $.container %>\n obj: <% $.roles_data_file %>\n publish:\n roles_data: <% yaml_parse(task().result.last()) %>\n status: SUCCESS\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.list_roles\n payload:\n status: <% $.status %>\n roles_data: <% $.get('roles_data', {}) %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.plan_management.v1.list_roles", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "container=overcloud, roles_data_file=roles_data.yaml, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "3d77f5fe-50e5-4da6-9d42-7ccb8b404675"}, {"definition": "get_deprecated_parameters:\n description: Gets the list of deprecated parameters in the whole of the plan including nested stack\n input:\n - container: overcloud\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_flatten_data:\n action: tripleo.parameters.get_flatten container=<% 
$.container %>\n on-success: get_deprecated_params\n on-error: set_status_failed_get_flatten_data\n publish:\n user_params: <% task().result.environment_parameters %>\n plan_params: <% task().result.heat_resource_tree.parameters.keys() %>\n parameter_groups: <% task().result.heat_resource_tree.resources.values().where( $.get('parameter_groups') ).select($.parameter_groups).flatten() %>\n\n get_deprecated_params:\n on-success: check_if_user_param_has_deprecated\n publish:\n deprecated_params: <% $.parameter_groups.where($.get('label') = 'deprecated').select($.parameters).flatten().distinct() %>\n\n check_if_user_param_has_deprecated:\n on-success: get_unused_params\n publish:\n deprecated_result: <% let(up => $.user_params) -> $.deprecated_params.select( dict('parameter' => $, 'deprecated' => true, 'user_defined' => $up.keys().contains($)) ) %>\n\n # Get the list of parameters which are defined by the user via the environment files' parameter_defaults, but are not part of the plan definition.\n # It may be possible that the parameter will be used by a service, but the service is not part of the plan.\n # In such cases, the parameter will be reported as unused; care should be taken to understand whether it is really unused or not.\n get_unused_params:\n on-success: send_message\n publish:\n unused_params: <% let(plan_params => $.plan_params) -> $.user_params.keys().where( not $plan_params.contains($) ) %>\n\n set_status_failed_get_flatten_data:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_flatten_data).result %>\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.get_deprecated_parameters\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n deprecated: <% $.get('deprecated_result', []) %>\n unused: <% $.get('unused_params', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" 
%>\n", "name": "tripleo.plan_management.v1.get_deprecated_parameters", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "container=overcloud, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "54070409-26be-4826-8452-ac7cd4ca6e94"}, {"definition": "validate_roles:\n description: Validate roles data exists and is parsable\n\n input:\n - container: overcloud\n - roles_data_file: 'roles_data.yaml'\n - queue_name: tripleo\n\n output:\n roles_data: <% $.roles_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_roles_data:\n workflow: list_roles\n input:\n container: <% $.container %>\n roles_data_file: <% $.roles_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n roles_data: <% task().result.roles_data %>\n status: SUCCESS\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error:\n notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.validate_networks\n payload:\n status: <% $.status %>\n roles_data: <% $.get('roles_data', '') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.plan_management.v1.validate_roles", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "container=overcloud, roles_data_file=roles_data.yaml, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "5cb59948-cb4f-4997-b858-e1c48037187d"}, {"definition": "download_logs:\n description: Creates a tarball with logging data\n input:\n - queue_name: tripleo\n - logging_container: \"tripleo-ui-logs\"\n - downloads_container: \"tripleo-ui-logs-downloads\"\n - delete_after: 3600\n\n tags:\n - tripleo-common-managed\n\n 
tasks:\n\n publish_logs:\n workflow: tripleo.plan_management.v1.publish_ui_logs_to_swift\n on-success: prepare_log_download\n on-error: publish_logs_set_status_failed\n\n prepare_log_download:\n action: tripleo.logging_to_swift.prepare_log_download\n input:\n logging_container: <% $.logging_container %>\n downloads_container: <% $.downloads_container %>\n delete_after: <% $.delete_after %>\n on-success: create_tempurl\n on-error: download_logs_set_status_failed\n publish:\n filename: <% task().result %>\n\n create_tempurl:\n action: tripleo.swift.tempurl\n on-success: set_status_success\n on-error: create_tempurl_set_status_failed\n input:\n container: <% $.downloads_container %>\n obj: <% $.filename %>\n valid: 3600\n publish:\n tempurl: <% task().result %>\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(create_tempurl).result %>\n tempurl: <% task(create_tempurl).result %>\n\n publish_logs_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(publish_logs).result %>\n\n download_logs_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(prepare_log_download).result %>\n\n create_tempurl_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_tempurl).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.download_logs\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n tempurl: <% $.get('tempurl', '') %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.plan_management.v1.download_logs", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "queue_name=tripleo, logging_container=tripleo-ui-logs, 
downloads_container=tripleo-ui-logs-downloads, delete_after=3600", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "62d6044d-5b30-4158-af0a-51c3877aa239"}, {"definition": "update_roles:\n description: >\n takes data in json format, validates its contents and persists them in\n roles_data.yaml; after a successful update, templates are regenerated.\n input:\n - container\n - roles\n - roles_data_file: 'roles_data.yaml'\n - replace_all: false\n - queue_name: tripleo\n tags:\n - tripleo-common-managed\n tasks:\n get_available_roles:\n workflow: list_available_roles\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name%>\n publish:\n available_roles: <% task().result.available_roles %>\n on-success: validate_input\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n validate_input:\n description: >\n validate the format of input (verify that each role in input has the\n required attributes set. check README in roles directory in t-h-t),\n validate that roles in input exist in roles directory in t-h-t\n action: tripleo.plan.validate_roles\n input:\n container: <% $.container %>\n roles: <% $.roles %>\n available_roles: <% $.available_roles %>\n on-success: get_network_data\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_network_data:\n workflow: list_networks\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n publish:\n network_data: <% task().result.network_data %>\n on-success: validate_network_names\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n validate_network_names:\n description: >\n validate that Network names assigned to Role exist in\n network-data.yaml object in Swift container\n workflow: _validate_networks_from_roles\n input:\n container: <% $.container %>\n defined_networks: <% $.network_data.name %>\n networks_in_roles: <% $.roles.networks.flatten().distinct() %>\n 
queue_name: <% $.queue_name %>\n on-success: get_current_roles\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result.message %>\n\n get_current_roles:\n workflow: list_roles\n input:\n container: <% $.container %>\n roles_data_file: <% $.roles_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n current_roles: <% task().result.roles_data %>\n on-success: update_roles_data\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n update_roles_data:\n description: >\n update roles_data.yaml object in Swift with roles from workflow input\n action: tripleo.plan.update_roles\n input:\n container: <% $.container %>\n roles: <% $.roles %>\n current_roles: <% $.current_roles %>\n replace_all: <% $.replace_all %>\n publish:\n updated_roles_data: <% task().result.roles %>\n on-success: update_roles_data_in_swift\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n update_roles_data_in_swift:\n description: >\n update roles_data.yaml object in Swift with data from workflow input\n action: swift.put_object\n input:\n container: <% $.container %>\n obj: <% $.roles_data_file %>\n contents: <% yaml_dump($.updated_roles_data) %>\n on-success: regenerate_templates\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n regenerate_templates:\n action: tripleo.templates.process container=<% $.container %>\n on-success: get_updated_roles\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_updated_roles:\n workflow: list_roles\n input:\n container: <% $.container %>\n roles_data_file: <% $.roles_data_file %>\n publish:\n updated_roles: <% task().result.roles_data %>\n status: SUCCESS\n on-complete: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n 
messages:\n body:\n type: tripleo.roles.v1.update_roles\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n updated_roles: <% $.get('updated_roles', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.plan_management.v1.update_roles", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "container, roles, roles_data_file=roles_data.yaml, replace_all=False, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "74b947dd-5268-4db1-9e90-f63992742769"}, {"definition": "delete_deployment_plan:\n description: >\n Deletes a plan by deleting the container matching plan_name. It will\n not delete the plan if a stack exists with the same name.\n\n tags:\n - tripleo-common-managed\n\n input:\n - container: overcloud\n - queue_name: tripleo\n\n tasks:\n delete_plan:\n action: tripleo.plan.delete container=<% $.container %>\n on-complete: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n publish:\n status: SUCCESS\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.delete_deployment_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.plan_management.v1.delete_deployment_plan", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "container=overcloud, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "94f321f3-b24d-400c-a3dd-05b2e127b49e"}, {"definition": "validate_network_files:\n description: Validate network files exist\n input:\n - container: overcloud\n - network_data\n - 
queue_name: tripleo\n\n output:\n network_data: <% $.network_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_network_names:\n publish:\n network_names_lower: <% $.network_data.where($.containsKey('name_lower')).name_lower %>\n network_names: <% $.network_data.where(not $.containsKey('name_lower')).name %>\n on-success: validate_networks\n\n validate_networks:\n with-items: network in <% $.network_names_lower.concat($.network_names) %>\n action: swift.head_object\n input:\n container: <% $.container %>\n obj: network/<% $.network.toLower() %>.yaml\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.validate_network_files\n payload:\n status: <% $.status %>\n message: <% $.message %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.plan_management.v1.validate_network_files", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "container=overcloud, network_data, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "a09787eb-091d-4920-8e96-9905fd231fb1"}, {"definition": "list_networks:\n input:\n - container: 'overcloud'\n - network_data_file: 'network_data.yaml'\n - queue_name: tripleo\n\n output:\n network_data: <% $.network_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_networks:\n action: swift.get_object\n input:\n container: <% $.container %>\n obj: <% $.network_data_file %>\n on-success: notify_zaqar\n publish:\n network_data: <% yaml_parse(task().result.last()) %>\n status: SUCCESS\n message: <% task().result %>\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% 
task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.list_networks\n payload:\n status: <% $.status %>\n network_data: <% $.get('network_data', {}) %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.plan_management.v1.list_networks", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "container=overcloud, network_data_file=network_data.yaml, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "a0b53a54-9bd8-4ed7-b119-df8c67abad27"}, {"definition": "list_available_networks:\n input:\n - container\n - queue_name: tripleo\n\n output:\n available_networks: <% $.available_networks %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_network_file_names:\n action: swift.get_container\n input:\n container: <% $.container %>\n publish:\n network_names: <% task().result[1].where($.name.startsWith('networks/')).where($.name.endsWith('.yaml')).name %>\n on-success: get_network_files\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_network_files:\n with-items: network_name in <% $.network_names %>\n action: swift.get_object\n on-success: transform_output\n on-error: notify_zaqar\n input:\n container: <% $.container %>\n obj: <% $.network_name %>\n publish:\n status: SUCCESS\n available_yaml_networks: <% task().result.select($[1]) %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n transform_output:\n publish:\n status: SUCCESS\n available_networks: <% yaml_parse($.available_yaml_networks.join(\"\\n\")) %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-complete: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n 
queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.list_available_networks\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n available_networks: <% $.get('available_networks', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.plan_management.v1.list_available_networks", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "container, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "b484619a-3b2d-4619-a948-9978d967ceb8"}, {"definition": "export_deployment_plan:\n description: Creates an export tarball for a given plan\n input:\n - plan\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n export_plan:\n action: tripleo.plan.export\n input:\n plan: <% $.plan %>\n delete_after: 3600\n exports_container: \"plan-exports\"\n on-success: create_tempurl\n on-error: export_plan_set_status_failed\n\n create_tempurl:\n action: tripleo.swift.tempurl\n on-success: set_status_success\n on-error: create_tempurl_set_status_failed\n input:\n container: \"plan-exports\"\n obj: \"<% $.plan %>.tar.gz\"\n valid: 3600\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(create_tempurl).result %>\n tempurl: <% task(create_tempurl).result %>\n\n export_plan_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(export_plan).result %>\n\n create_tempurl_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_tempurl).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.export_deployment_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n tempurl: 
<% $.get('tempurl', '') %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.plan_management.v1.export_deployment_plan", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "plan, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "c9812d43-cb7a-4a23-99a0-d15a6f767132"}, {"definition": "validate_roles_and_networks:\n description: Validate that roles and network data are valid\n\n input:\n - container: overcloud\n - roles_data_file: 'roles_data.yaml'\n - network_data_file: 'network_data.yaml'\n - queue_name: tripleo\n\n output:\n roles_data: <% $.roles_data %>\n network_data: <% $.network_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n validate_network_data:\n workflow: validate_networks\n input:\n container: <% $.container %>\n network_data_file: <% $.network_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n network_data: <% task().result.network_data %>\n on-success: validate_roles_data\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n validate_roles_data:\n workflow: validate_roles\n input:\n container: <% $.container %>\n roles_data_file: <% $.roles_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n roles_data: <% task().result.roles_data %>\n role_networks_data: <% task().result.roles_data.networks %>\n networks_in_roles: <% task().result.roles_data.networks.flatten().distinct() %>\n on-success: validate_roles_and_networks\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n validate_roles_and_networks:\n workflow: _validate_networks_from_roles\n input:\n container: <% $.container %>\n defined_networks: <% $.network_data.name %>\n networks_in_roles: <% $.networks_in_roles %>\n queue_name: <% $.queue_name %>\n publish:\n status: SUCCESS\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n 
message: <% task().result.message %>\n on-error: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.validate_roles_and_networks\n payload:\n status: <% $.status %>\n roles_data: <% $.get('roles_data', {}) %>\n network_data: <% $.get('network_data', {}) %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.plan_management.v1.validate_roles_and_networks", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "container=overcloud, roles_data_file=roles_data.yaml, network_data_file=network_data.yaml, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "cda93a75-8b60-4059-b48c-70e9456ece4e"}, {"definition": "get_passwords:\n description: Retrieves passwords for a given plan\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n verify_container_exists:\n action: swift.head_container container=<% $.container %>\n on-success: get_environment_passwords\n on-error: verify_container_set_status_failed\n\n get_environment_passwords:\n action: tripleo.parameters.get_passwords container=<% $.container %>\n on-success: get_passwords_set_status_success\n on-error: get_passwords_set_status_failed\n\n get_passwords_set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(get_environment_passwords).result %>\n\n get_passwords_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(get_environment_passwords).result %>\n\n verify_container_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(verify_container_exists).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n 
messages:\n body:\n type: tripleo.plan_management.v1.get_passwords\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.plan_management.v1.get_passwords", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "container, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "d93f4fc0-be33-4588-b16b-092c91509f45"}, {"definition": "select_roles:\n description: >\n takes a list of role names as input and populates roles_data.yaml in\n container in Swift with respective roles from 'roles directory'\n input:\n - container\n - role_names\n - roles_data_file: 'roles_data.yaml'\n - replace_all: true\n - queue_name: tripleo\n tags:\n - tripleo-common-managed\n tasks:\n\n get_available_roles:\n workflow: list_available_roles\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n publish:\n available_roles: <% task().result.available_roles %>\n on-success: get_current_roles\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_current_roles:\n workflow: list_roles\n input:\n container: <% $.container %>\n roles_data_file: <% $.roles_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n current_roles: <% task().result.roles_data %>\n on-success: gather_roles\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n gather_roles:\n description: >\n for each role name from the input, check if it exists in\n roles_data.yaml, if yes, use that role definition, if not, get the\n role definition from roles directory. 
Use the gathered roles\n definitions as input to updateRolesWorkflow - this ensures\n configuration of the roles which are already in roles_data.yaml\n will not get overridden by data from roles directory\n action: tripleo.plan.gather_roles\n input:\n role_names: <% $.role_names %>\n current_roles: <% $.current_roles %>\n available_roles: <% $.available_roles %>\n publish:\n gathered_roles: <% task().result.gathered_roles %>\n on-success: call_update_roles_workflow\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n call_update_roles_workflow:\n workflow: update_roles\n input:\n container: <% $.container %>\n roles: <% $.gathered_roles %>\n roles_data_file: <% $.roles_data_file %>\n replace_all: <% $.replace_all %>\n queue_name: <% $.queue_name %>\n on-complete: notify_zaqar\n publish:\n selected_roles: <% task().result.updated_roles %>\n status: SUCCESS\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.select_roles\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n selected_roles: <% $.get('selected_roles', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.plan_management.v1.select_roles", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "container, role_names, roles_data_file=roles_data.yaml, replace_all=True, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "e2eee195-7f79-4058-8fe5-a7294d496162"}, {"definition": "create_deployment_plan:\n description: >\n This workflow provides the capability to create a deployment plan using\n the default heat templates provided in a standard TripleO undercloud\n deployment, heat templates contained in an external git 
repository, or a\n swift container that already contains templates.\n input:\n - container\n - source_url: null\n - queue_name: tripleo\n - generate_passwords: true\n - use_default_templates: false\n\n tags:\n - tripleo-common-managed\n\n tasks:\n container_required_check:\n description: >\n If using the default templates or importing templates from a git\n repository, a new container needs to be created. If using an existing\n container containing templates, skip straight to create_plan.\n on-success:\n - verify_container_doesnt_exist: <% $.use_default_templates or $.source_url %>\n - create_plan: <% $.use_default_templates = false and $.source_url = null %>\n\n verify_container_doesnt_exist:\n action: swift.head_container container=<% $.container %>\n on-success: notify_zaqar\n on-error: create_container\n publish:\n status: FAILED\n message: \"Unable to create plan. The Swift container already exists\"\n\n create_container:\n action: tripleo.plan.create_container container=<% $.container %>\n on-success: templates_source_check\n on-error: create_container_set_status_failed\n\n cleanup_temporary_files:\n action: tripleo.git.clean container=<% $.container %>\n\n templates_source_check:\n on-success:\n - upload_default_templates: <% $.use_default_templates = true %>\n - clone_git_repo: <% $.source_url != null %>\n\n clone_git_repo:\n action: tripleo.git.clone container=<% $.container %> url=<% $.source_url %>\n on-success: upload_templates_directory\n on-error: clone_git_repo_set_status_failed\n\n upload_templates_directory:\n action: tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %>\n on-success: create_plan\n on-complete: cleanup_temporary_files\n on-error: upload_templates_directory_set_status_failed\n\n upload_default_templates:\n action: tripleo.templates.upload container=<% $.container %>\n on-success: create_plan\n on-error: upload_to_container_set_status_failed\n\n create_plan:\n on-success:\n - 
ensure_passwords_exist: <% $.generate_passwords = true %>\n - add_root_stack_name: <% $.generate_passwords != true %>\n\n ensure_passwords_exist:\n action: tripleo.parameters.generate_passwords container=<% $.container %>\n on-success: add_root_stack_name\n on-error: ensure_passwords_exist_set_status_failed\n\n add_root_stack_name:\n action: tripleo.parameters.update\n input:\n container: <% $.container %>\n parameters:\n RootStackName: <% $.container %>\n on-success: container_images_prepare\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n container_images_prepare:\n description: >\n Populate all container image parameters with default values.\n action: tripleo.container_images.prepare container=<% $.container %>\n on-success: process_templates\n on-error: container_images_prepare_set_status_failed\n\n process_templates:\n action: tripleo.templates.process container=<% $.container %>\n on-success: set_status_success\n on-error: process_templates_set_status_failed\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: 'Plan created.'\n\n create_container_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_container).result %>\n\n clone_git_repo_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(clone_git_repo).result %>\n\n upload_templates_directory_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(upload_templates_directory).result %>\n\n upload_to_container_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(upload_default_templates).result %>\n\n ensure_passwords_exist_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(ensure_passwords_exist).result %>\n\n process_templates_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% 
task(process_templates).result %>\n\n container_images_prepare_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(container_images_prepare).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.create_deployment_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.plan_management.v1.create_deployment_plan", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "container, source_url=None, queue_name=tripleo, generate_passwords=True, use_default_templates=False", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "f4a9f0ce-b6ab-4ced-be9e-528e0c527e31"}, {"definition": "create_default_deployment_plan:\n description: >\n This workflow exists to maintain backwards compatibility in pike. 
This\n workflow will likely be removed in queens in favor of create_deployment_plan.\n input:\n - container\n - queue_name: tripleo\n - generate_passwords: true\n tags:\n - tripleo-common-managed\n tasks:\n call_create_deployment_plan:\n workflow: tripleo.plan_management.v1.create_deployment_plan\n on-success: set_status_success\n on-error: call_create_deployment_plan_set_status_failed\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n generate_passwords: <% $.generate_passwords %>\n use_default_templates: true\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(call_create_deployment_plan).result %>\n\n call_create_deployment_plan_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(call_create_deployment_plan).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.create_default_deployment_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.plan_management.v1.create_default_deployment_plan", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "container, queue_name=tripleo, generate_passwords=True", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "fe5532c0-150b-426a-b0c5-aa0684d20343"}, {"definition": "publish_ui_logs_to_swift:\n description: >\n This workflow drains a zaqar queue and publishes its messages into a log\n file in swift. This workflow is called by a cron trigger.\n\n input:\n - logging_queue_name: tripleo-ui-logging\n - logging_container: tripleo-ui-logs\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n # We're using a NoOp action to start the workflow. 
The recursive nature\n # of the workflow means that Mistral will refuse to execute it because it\n # doesn't know where to begin.\n start:\n on-success: get_messages\n\n get_messages:\n action: zaqar.claim_messages\n on-success:\n - format_messages: <% task().result.len() > 0 %>\n input:\n queue_name: <% $.logging_queue_name %>\n ttl: 60\n grace: 60\n publish:\n status: SUCCESS\n messages: <% task().result %>\n message_ids: <% task().result.select($._id) %>\n\n format_messages:\n action: tripleo.logging_to_swift.format_messages\n on-success: upload_to_swift\n input:\n messages: <% $.messages %>\n publish:\n status: SUCCESS\n formatted_messages: <% task().result %>\n\n upload_to_swift:\n action: tripleo.logging_to_swift.publish_ui_log_to_swift\n on-success: delete_messages\n input:\n logging_data: <% $.formatted_messages %>\n logging_container: <% $.logging_container %>\n publish:\n status: SUCCESS\n\n delete_messages:\n action: zaqar.delete_messages\n on-success: get_messages\n input:\n queue_name: <% $.logging_queue_name %>\n messages: <% $.message_ids %>\n publish:\n status: SUCCESS\n", "name": "tripleo.plan_management.v1.publish_ui_logs_to_swift", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:39", "namespace": "", "updated_at": null, "scope": "private", "input": "logging_queue_name=tripleo-ui-logging, logging_container=tripleo-ui-logs", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "ff72390a-a3af-42a5-a71e-8236f110d995"}, {"definition": "upload_logs:\n description: >\n This workflow uploads the sosreport files stored in the provided sos_dir\n on the provided host (server_uuid) to a swift container on the undercloud\n input:\n - server_uuid\n - server_name\n - container\n - sos_dir: /var/tmp/tripleo-sos\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n # actions\n get_swift_information:\n action: tripleo.swift.swift_information\n on-success: do_log_upload\n on-error: set_get_swift_information_failed\n 
input:\n container: <% $.container %>\n publish:\n container_url: <% task().result.container_url %>\n auth_key: <% task().result.auth_key %>\n\n set_get_swift_information_failed:\n on-complete:\n - send_message\n publish:\n status: FAILED\n message: <% task(get_swift_information).result %>\n\n do_log_upload:\n action: tripleo.deployment.config\n on-success: send_message\n on-error: set_do_log_upload_failed\n input:\n server_id: <% $.server_uuid %>\n name: \"upload_logs\"\n config: |\n #!/bin/bash\n CONTAINER_URL=\"<% $.container_url %>\"\n TOKEN=\"<% $.auth_key %>\"\n SOS_DIR=\"<% $.sos_dir %>\"\n for FILE in $(find $SOS_DIR -type f); do\n FILENAME=$(basename $FILE)\n curl -X PUT -i -H \"X-Auth-Token: $TOKEN\" -T $FILE $CONTAINER_URL/$FILENAME\n if [ $? -eq 0 ]; then\n rm -f $FILE\n fi\n done\n group: \"script\"\n publish:\n message: \"Uploaded logs from <% $.server_name %>\"\n\n set_do_log_upload_failed:\n on-complete:\n - send_message\n publish:\n status: FAILED\n message: <% task(do_log_upload).result %>\n\n # status messaging\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.upload_logs') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n", "name": "tripleo.support.v1.upload_logs", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:40", "namespace": "", "updated_at": null, "scope": "private", "input": "server_uuid, server_name, container, sos_dir=/var/tmp/tripleo-sos, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "5b22580a-bf92-4025-89a6-55f90fed0741"}, {"definition": "fetch_logs:\n description: >\n This workflow creates a container on the undercloud, executes the log\n collection on the servers whose names match the provided server_name, and\n executes the log 
upload process on all the servers to the container on\n the undercloud.\n input:\n - server_name\n - container\n - concurrency: 5\n - timeout: 1800\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n # actions\n create_container:\n workflow: tripleo.support.v1.create_container\n on-success: get_servers_matching\n on-error: set_create_container_failed\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n\n set_create_container_failed:\n on-complete: send_message\n publish:\n type: tripleo.support.v1.fetch_logs.create_container\n status: FAILED\n message: <% task(create_container).result %>\n\n get_servers_matching:\n action: nova.servers_list\n on-success: collect_logs_on_servers\n publish:\n servers_with_name: <% task().result._info.where($.name.indexOf(execution().input.server_name) > -1) %>\n\n collect_logs_on_servers:\n workflow: tripleo.support.v1.collect_logs\n timeout: <% $.timeout %>\n on-success: upload_logs_on_servers\n on-error: set_collect_logs_on_servers_failed\n input:\n server_name: <% $.server_name %>\n queue_name: <% $.queue_name %>\n\n set_collect_logs_on_servers_failed:\n on-complete: send_message\n publish:\n type: tripleo.support.v1.fetch_logs.collect_logs_on_servers\n status: FAILED\n message: <% task(collect_logs_on_servers).result %>\n\n upload_logs_on_servers:\n on-success: send_message\n on-error: set_upload_logs_on_servers_failed\n with-items: server in <% $.servers_with_name %>\n concurrency: <% $.concurrency %>\n workflow: tripleo.support.v1.upload_logs\n input:\n server_name: <% $.server.name %>\n server_uuid: <% $.server.id %>\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n\n set_upload_logs_on_servers_failed:\n on-complete: send_message\n publish:\n type: tripleo.support.v1.fetch_logs.upload_logs\n status: FAILED\n message: <% task(upload_logs_on_servers).result %>\n\n # status messaging\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n 
queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.fetch_logs') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n", "name": "tripleo.support.v1.fetch_logs", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:40", "namespace": "", "updated_at": null, "scope": "private", "input": "server_name, container, concurrency=5, timeout=1800, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "5ffb54d1-397c-411c-a1b4-d8924b9649cf"}, {"definition": "collect_logs:\n description: >\n This workflow runs sosreport on the servers where their names match the\n provided server_name input. The logs are stored in the provided sos_dir.\n input:\n - server_name\n - sos_dir: /var/tmp/tripleo-sos\n - sos_options: boot,cluster,hardware,kernel,memory,nfs,openstack,packagemanager,performance,services,storage,system,webserver,virt\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n collect_logs_on_servers:\n workflow: tripleo.deployment.v1.deploy_on_servers\n on-success: send_message\n on-error: set_collect_logs_on_servers_failed\n input:\n server_name: <% $.server_name %>\n config_name: 'run_sosreport'\n config: |\n #!/bin/bash\n mkdir -p <% $.sos_dir %>\n sosreport --batch \\\n -p <% $.sos_options %> \\\n --tmp-dir <% $.sos_dir %>\n\n set_collect_logs_on_servers_failed:\n on-complete:\n - send_message\n publish:\n type: tripleo.deployment.v1.fetch_logs\n status: FAILED\n message: <% task().result %>\n\n # status messaging\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.collect_logs') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - 
fail: <% $.get('status') = 'FAILED' %>\n", "name": "tripleo.support.v1.collect_logs", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:40", "namespace": "", "updated_at": null, "scope": "private", "input": "server_name, sos_dir=/var/tmp/tripleo-sos, sos_options=boot,cluster,hardware,kernel,memory,nfs,openstack,packagemanager,performance,services,storage,system,webserver,virt, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "8f9c61d9-8a6d-4b64-bd4b-78763f2050ce"}, {"definition": "create_container:\n description: >\n This workflow is used to check if the container exists and creates it\n if it does not exist.\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_container:\n action: swift.head_container container=<% $.container %>\n on-success: send_message\n on-error: create_container\n\n create_container:\n action: swift.put_container\n input:\n container: <% $.container %>\n headers:\n x-container-meta-usage-tripleo: support\n on-success: send_message\n on-error: set_create_container_failed\n\n set_create_container_failed:\n on-complete:\n - send_message\n publish:\n type: tripleo.support.v1.create_container.create_container\n status: FAILED\n message: <% task(create_container).result %>\n\n # status messaging\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.create_container') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n", "name": "tripleo.support.v1.create_container", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:40", "namespace": "", "updated_at": null, "scope": "private", "input": "container, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": 
"c7adbf74-813b-4388-ae47-0d320cebcfdb"}, {"definition": "delete_container:\n description: >\n This workflow deletes all the objects in a provided swift container and\n then removes the container itself from the undercloud.\n input:\n - container\n - concurrency: 5\n - timeout: 900\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n # actions\n check_container:\n action: swift.head_container container=<% $.container %>\n on-success: list_objects\n on-error: set_check_container_failure\n\n set_check_container_failure:\n on-complete: send_message\n publish:\n status: FAILED\n type: tripleo.support.v1.delete_container.check_container\n message: <% task(check_container).result %>\n\n list_objects:\n action: swift.get_container container=<% $.container %>\n on-success: delete_objects\n on-error: set_list_objects_failure\n publish:\n log_objects: <% task().result[1] %>\n\n set_list_objects_failure:\n on-complete: send_message\n publish:\n status: FAILED\n type: tripleo.support.v1.delete_container.list_objects\n message: <% task(list_objects).result %>\n\n delete_objects:\n action: swift.delete_object\n concurrency: <% $.concurrency %>\n timeout: <% $.timeout %>\n with-items: object in <% $.log_objects %>\n input:\n container: <% $.container %>\n obj: <% $.object.name %>\n on-success: remove_container\n on-error: set_delete_objects_failure\n\n set_delete_objects_failure:\n on-complete: send_message\n publish:\n status: FAILED\n type: tripleo.support.v1.delete_container.delete_objects\n message: <% task(delete_objects).result %>\n\n remove_container:\n action: swift.delete_container container=<% $.container %>\n on-success: send_message\n on-error: set_remove_container_failure\n\n set_remove_container_failure:\n on-complete: send_message\n publish:\n status: FAILED\n type: tripleo.support.v1.delete_container.remove_container\n message: <% task(remove_container).result %>\n\n # status messaging\n send_message:\n action: zaqar.queue_post\n wait-before: 
5\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.delete_container') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n", "name": "tripleo.support.v1.delete_container", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:40", "namespace": "", "updated_at": null, "scope": "private", "input": "container, concurrency=5, timeout=900, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "dd12a43f-c5fa-40c3-88ef-1521266f4c6d"}, {"definition": "config_download_deploy:\n\n description: >\n Configure the overcloud with config-download.\n\n input:\n - timeout: 240\n - queue_name: tripleo\n - plan_name: overcloud\n - work_dir: /var/lib/mistral\n - verbosity: 1\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n get_config:\n action: tripleo.config.get_overcloud_config\n input:\n container: <% $.get('plan_name') %>\n on-success: download_config\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n download_config:\n action: tripleo.config.download_config\n input:\n work_dir: <% $.get('work_dir') %>/<% execution().id %>\n on-success: send_msg_config_download\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n send_msg_config_download:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.config_download\n payload:\n status: <% $.get('status', 'RUNNING') %>\n message: Config downloaded at <% $.get('work_dir') %>/<% execution().id %>\n execution: <% execution() %>\n on-success: get_private_key\n\n get_private_key:\n action: tripleo.validations.get_privkey\n publish:\n private_key: <% task().result %>\n on-success: generate_inventory\n on-error: send_message\n 
publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n generate_inventory:\n action: tripleo.ansible-generate-inventory\n input:\n ansible_ssh_user: tripleo-admin\n work_dir: <% $.get('work_dir') %>/<% execution().id %>\n plan_name: <% $.get('plan_name') %>\n publish:\n inventory: <% task().result %>\n on-success: send_msg_generate_inventory\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n send_msg_generate_inventory:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.config_download\n payload:\n status: <% $.get('status', 'RUNNING') %>\n message: Inventory generated at <% $.get('inventory') %>\n execution: <% execution() %>\n on-success: send_msg_run_ansible\n\n send_msg_run_ansible:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.config_download\n payload:\n status: <% $.get('status', 'RUNNING') %>\n message: >\n Running ansible playbook at <% $.get('work_dir') %>/<% execution().id %>/deploy_steps_playbook.yaml.\n See log file at <% $.get('work_dir') %>/<% execution().id %>/ansible.log for progress.\n ...\n execution: <% execution() %>\n on-success: run_ansible\n\n run_ansible:\n action: tripleo.ansible-playbook\n input:\n inventory: <% $.inventory %>\n playbook: <% $.get('work_dir') %>/<% execution().id %>/deploy_steps_playbook.yaml\n remote_user: tripleo-admin\n ssh_extra_args: '-o StrictHostKeyChecking=no'\n ssh_private_key: <% $.private_key %>\n use_openstack_credentials: true\n verbosity: <% $.get('verbosity') %>\n become: true\n timeout: <% $.timeout %>\n work_dir: <% $.get('work_dir') %>/<% execution().id %>\n queue_name: <% $.queue_name %>\n reproduce_command: true\n trash_output: true\n publish:\n log_path: <% task(run_ansible).result.get('log_path') %>\n on-success:\n - ansible_passed: <% task().result.returncode = 0 %>\n - ansible_failed: <% 
task().result.returncode != 0 %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: Ansible failed, check log at <% $.get('work_dir') %>/<% execution().id %>/ansible.log.\n\n ansible_passed:\n on-success: send_message\n publish:\n status: SUCCESS\n message: Ansible passed.\n\n ansible_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: Ansible failed, check log at <% $.get('work_dir') %>/<% execution().id %>/ansible.log.\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.config_download\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.deployment.v1.config_download_deploy", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:41", "namespace": "", "updated_at": null, "scope": "private", "input": "timeout=240, queue_name=tripleo, plan_name=overcloud, work_dir=/var/lib/mistral, verbosity=1", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "108e4b8d-cd34-47d8-8681-fdcb9cbdf00b"}, {"definition": "get_horizon_url:\n\n description: >\n Retrieve the Horizon URL from the Overcloud stack.\n\n input:\n - stack: overcloud\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n output:\n horizon_url: <% $.horizon_url %>\n\n tasks:\n get_horizon_url:\n action: heat.stacks_get\n input:\n stack_id: <% $.stack %>\n publish:\n horizon_url: <% task().result.outputs.where($.output_key = \"EndpointMap\").output_value.HorizonPublic.uri.single() %>\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.get_horizon_url\n payload:\n horizon_url: <% 
$.get('horizon_url', '') %>\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.deployment.v1.get_horizon_url", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:41", "namespace": "", "updated_at": null, "scope": "private", "input": "stack=overcloud, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "330b28cd-2c5e-48a2-a32e-386a319bff28"}, {"definition": "deploy_on_server:\n\n input:\n - server_uuid\n - server_name\n - config\n - config_name\n - group\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n deploy_config:\n action: tripleo.deployment.config\n on-complete: send_message\n input:\n server_id: <% $.server_uuid %>\n name: <% $.config_name %>\n config: <% $.config %>\n group: <% $.group %>\n publish:\n stdout: <% task().result.deploy_stdout %>\n stderr: <% task().result.deploy_stderr %>\n status_code: <% task().result.deploy_status_code %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.deploy_on_server\n payload:\n status: <% $.get(\"status\", \"SUCCESS\") %>\n message: <% $.get(\"message\", \"\") %>\n server_uuid: <% $.server_uuid %>\n server_name: <% $.server_name %>\n config_name: <% $.config_name %>\n status_code: <% $.get(\"status_code\", \"\") %>\n stdout: <% $.get(\"stdout\", \"\") %>\n stderr: <% $.get(\"stderr\", \"\") %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.deployment.v1.deploy_on_server", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:41", "namespace": "", "updated_at": null, "scope": "private", "input": "server_uuid, server_name, config, config_name, group, 
queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "332e66f4-e646-4578-ba83-80a339eaf265"}, {"definition": "deploy_on_servers:\n\n input:\n - server_name\n - config_name\n - config\n - group: script\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n check_if_all_servers:\n on-success:\n - get_servers_matching: <% $.server_name != \"all\" %>\n - get_all_servers: <% $.server_name = \"all\" %>\n\n get_servers_matching:\n action: nova.servers_list\n on-success: deploy_on_servers\n publish:\n servers_with_name: <% task().result._info.where($.name.indexOf(execution().input.server_name) > -1) %>\n\n get_all_servers:\n action: nova.servers_list\n on-success: deploy_on_servers\n publish:\n servers_with_name: <% task().result._info %>\n\n deploy_on_servers:\n on-success: send_success_message\n on-error: send_failed_message\n with-items: server in <% $.servers_with_name %>\n workflow: tripleo.deployment.v1.deploy_on_server\n input:\n server_name: <% $.server.name %>\n server_uuid: <% $.server.id %>\n config: <% $.config %>\n config_name: <% $.config_name %>\n group: <% $.group %>\n queue_name: <% $.queue_name %>\n\n send_success_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.deploy_on_servers\n payload:\n status: SUCCESS\n execution: <% execution() %>\n\n send_failed_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.deploy_on_servers\n payload:\n status: FAILED\n message: <% task(deploy_on_servers).result %>\n execution: <% execution() %>\n on-success: fail\n", "name": "tripleo.deployment.v1.deploy_on_servers", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:41", "namespace": "", "updated_at": null, "scope": "private", "input": "server_name, config_name, config, group=script, queue_name=tripleo", 
"project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "8ad50659-b6c8-4803-85c5-ab02e6e92daf"}, {"definition": "deploy_plan:\n\n description: >\n Deploy the overcloud for a plan.\n\n input:\n - container\n - run_validations: False\n - timeout: 240\n - skip_deploy_identifier: False\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n add_validation_ssh_key:\n workflow: tripleo.validations.v1.add_validation_ssh_key_parameter\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n on-complete:\n - run_validations: <% $.run_validations %>\n - create_swift_rings_backup_plan: <% not $.run_validations %>\n\n run_validations:\n workflow: tripleo.validations.v1.run_groups\n input:\n group_names:\n - 'pre-deployment'\n plan: <% $.container %>\n queue_name: <% $.queue_name %>\n on-success: create_swift_rings_backup_plan\n on-error: set_validations_failed\n\n set_validations_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(run_validations).result %>\n\n create_swift_rings_backup_plan:\n workflow: tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan\n on-success: cell_v2_discover_hosts\n on-error: create_swift_rings_backup_plan_set_status_failed\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n use_default_templates: true\n\n cell_v2_discover_hosts:\n on-success: deploy\n on-error: cell_v2_discover_hosts_failed\n action: tripleo.baremetal.cell_v2_discover_hosts\n\n cell_v2_discover_hosts_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(cell_v2_discover_hosts).result %>\n\n deploy:\n action: tripleo.deployment.deploy\n input:\n timeout: <% $.timeout %>\n container: <% $.container %>\n skip_deploy_identifier: <% $.skip_deploy_identifier %>\n on-success: send_message\n on-error: set_deployment_failed\n\n create_swift_rings_backup_plan_set_status_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% 
task(create_swift_rings_backup_plan).result %>\n\n set_deployment_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(deploy).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.deploy_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.deployment.v1.deploy_plan", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:41", "namespace": "", "updated_at": null, "scope": "private", "input": "container, run_validations=False, timeout=240, skip_deploy_identifier=False, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "92f5b4f3-dee6-4e8c-9204-cd26bc159015"}, {"definition": "set_node_state:\n input:\n - node_uuid\n - state_action\n - target_state\n - error_states:\n # The default includes all failure states, even unused by TripleO.\n - 'error'\n - 'adopt failed'\n - 'clean failed'\n - 'deploy failed'\n - 'inspect failed'\n - 'rescue failed'\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_provision_state:\n on-success: wait_for_provision_state\n on-error: set_provision_state_failed\n action: ironic.node_set_provision_state node_uuid=<% $.node_uuid %> state=<% $.state_action %>\n\n set_provision_state_failed:\n publish:\n message: <% task(set_provision_state).result %>\n on-complete: fail\n\n wait_for_provision_state:\n action: ironic.node_get\n input:\n node_id: <% $.node_uuid %>\n fields: ['provision_state', 'last_error']\n timeout: 1200 #20 minutes\n retry:\n delay: 3\n count: 400\n continue-on: <% not task().result.provision_state in [$.target_state] + $.error_states %>\n on-complete:\n - state_not_reached: <% task().result.provision_state != $.target_state %>\n\n state_not_reached:\n publish:\n message: >-\n Node <% 
$.node_uuid %> did not reach state \"<% $.target_state %>\",\n the state is \"<% task(wait_for_provision_state).result.provision_state %>\",\n error: <% task(wait_for_provision_state).result.last_error %>\n on-complete: fail\n\n output-on-error:\n result: <% $.message %>\n", "name": "tripleo.baremetal.v1.set_node_state", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "node_uuid, state_action, target_state, error_states=[u'error', u'adopt failed', u'clean failed', u'deploy failed', u'inspect failed', u'rescue failed']", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "010e5b41-be97-4e25-b97b-8fe71a998713"}, {"definition": "discover_and_enroll_nodes:\n description: Run nodes discovery over the given IP range and enroll nodes\n\n input:\n - ip_addresses\n - credentials\n - ports: [623]\n - kernel_name: null\n - ramdisk_name: null\n - instance_boot_option: local\n - initial_state: manageable\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n discover_nodes:\n workflow: tripleo.baremetal.v1.discover_nodes\n input:\n ip_addresses: <% $.ip_addresses %>\n ports: <% $.ports %>\n credentials: <% $.credentials %>\n queue_name: <% $.queue_name %>\n on-success: enroll_nodes\n on-error: discover_nodes_failed\n publish:\n nodes_json: <% task().result.nodes_json %>\n\n discover_nodes_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(discover_nodes).result %>\n\n enroll_nodes:\n workflow: tripleo.baremetal.v1.register_or_update\n input:\n nodes_json: <% $.nodes_json %>\n kernel_name: <% $.kernel_name %>\n ramdisk_name: <% $.ramdisk_name %>\n instance_boot_option: <% $.instance_boot_option %>\n initial_state: <% $.initial_state %>\n on-success: send_message\n on-error: enroll_nodes_failed\n publish:\n registered_nodes: <% task().result.registered_nodes %>\n\n enroll_nodes_failed:\n on-success: send_message\n publish:\n status: 
FAILED\n message: <% task(enroll_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.discover_and_enroll_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n registered_nodes: <% $.get('registered_nodes', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1.discover_and_enroll_nodes", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "ip_addresses, credentials, ports=[623], kernel_name=None, ramdisk_name=None, instance_boot_option=local, initial_state=manageable, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "0bf435f2-dff5-4014-ba6d-058f9e08ceea"}, {"definition": "tag_node:\n description: Tag a node with a role\n input:\n - node_uuid\n - role: null\n - queue_name: tripleo\n\n task-defaults:\n on-error: send_message\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n update_node:\n on-success: send_message\n action: tripleo.baremetal.update_node_capability node_uuid=<% $.node_uuid %> capability='profile' value=<% $.role %>\n publish:\n message: <% task().result %>\n status: SUCCESS\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.tag_node\n payload:\n status: <% $.get('status', 'FAILED') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1.tag_node", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "node_uuid, role=None, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", 
"id": "0e607a07-95bb-4d40-8459-8b10063e3569"}, {"definition": "set_power_state:\n input:\n - node_uuid\n - state_action\n - target_state\n - error_state: 'error'\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_power_state:\n on-success: wait_for_power_state\n on-error: set_power_state_failed\n action: ironic.node_set_power_state node_id=<% $.node_uuid %> state=<% $.state_action %>\n\n set_power_state_failed:\n publish:\n message: <% task(set_power_state).result %>\n on-complete: fail\n\n wait_for_power_state:\n action: ironic.node_get\n input:\n node_id: <% $.node_uuid %>\n fields: ['power_state', 'last_error']\n timeout: 120 #2 minutes\n retry:\n delay: 6\n count: 20\n continue-on: <% not task().result.power_state in [$.target_state, $.error_state] %>\n on-complete:\n - state_not_reached: <% task().result.power_state != $.target_state %>\n\n state_not_reached:\n publish:\n message: >-\n Node <% $.node_uuid %> did not reach power state \"<% $.target_state %>\",\n the state is \"<% task(wait_for_power_state).result.power_state %>\",\n error: <% task(wait_for_power_state).result.last_error %>\n on-complete: fail\n\n output-on-error:\n result: <% $.message %>\n", "name": "tripleo.baremetal.v1.set_power_state", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "node_uuid, state_action, target_state, error_state=error", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "134d2bea-32be-4bec-9ee0-3a7acaec871c"}, {"definition": "nodes_with_profile:\n description: Find nodes with a specific profile\n input:\n - profile\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_active_nodes:\n action: ironic.node_list maintenance=false provision_state='active' detail=true\n on-success: get_available_nodes\n on-error: set_status_failed_get_active_nodes\n\n get_available_nodes:\n action: ironic.node_list maintenance=false provision_state='available' 
detail=true\n on-success: get_matching_nodes\n on-error: set_status_failed_get_available_nodes\n\n get_matching_nodes:\n with-items: node in <% task(get_available_nodes).result + task(get_active_nodes).result %>\n action: tripleo.baremetal.get_profile node=<% $.node %>\n on-success: send_message\n on-error: set_status_failed_get_matching_nodes\n publish:\n matching_nodes: <% let(input_profile_name => $.profile) -> task().result.where($.profile = $input_profile_name).uuid %>\n\n set_status_failed_get_active_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_active_nodes).result %>\n\n set_status_failed_get_available_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_available_nodes).result %>\n\n set_status_failed_get_matching_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_matching_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.nodes_with_profile\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n matching_nodes: <% $.matching_nodes or [] %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1.nodes_with_profile", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "profile, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "2e00eba8-7fae-447b-988d-b46ebed7522d"}, {"definition": "octavia_post_deploy:\n description: Octavia post deployment\n input:\n - amp_image_name\n - amp_image_filename\n - amp_image_tag\n - amp_ssh_key_name\n - amp_ssh_key_path\n - amp_ssh_key_data\n - auth_username\n - auth_password\n - auth_project_name\n - lb_mgmt_net_name\n - lb_mgmt_subnet_name\n - lb_sec_group_name\n - 
lb_mgmt_subnet_cidr\n - lb_mgmt_subnet_gateway\n - lb_mgmt_subnet_pool_start\n - lb_mgmt_subnet_pool_end\n - generate_certs\n - octavia_ansible_playbook\n - overcloud_admin\n - ca_cert_path\n - ca_private_key_path\n - ca_passphrase\n - client_cert_path\n - mgmt_port_dev\n - overcloud_password\n - overcloud_project\n - overcloud_pub_auth_uri\n - ansible_extra_env_variables:\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n ANSIBLE_SSH_RETRIES: '3'\n tags:\n - tripleo-common-managed\n tasks:\n get_overcloud_stack_details:\n publish:\n # TODO(beagles), we are making an assumption about the octavia health manager and\n controller worker needing\n #\n octavia_controller_ips: <% env().get('service_ips', {}).get('octavia_worker_ctlplane_node_ips', []) %>\n on-success: enable_ssh_admin\n\n enable_ssh_admin:\n workflow: tripleo.access.v1.enable_ssh_admin\n input:\n ssh_servers: <% $.octavia_controller_ips %>\n on-success: get_private_key\n\n get_private_key:\n action: tripleo.validations.get_privkey\n publish:\n private_key: <% task().result %>\n on-success: make_local_temp_directory\n\n make_local_temp_directory:\n action: tripleo.files.make_temp_dir\n publish:\n undercloud_local_dir: <% task().result.path %>\n on-success: make_remote_temp_directory\n\n make_remote_temp_directory:\n action: tripleo.files.make_temp_dir\n publish:\n undercloud_remote_dir: <% task().result.path %>\n on-success: build_local_connection_environment_vars\n\n build_local_connection_environment_vars:\n publish:\n ansible_local_connection_variables: <% dict('ANSIBLE_REMOTE_TEMP' => $.undercloud_remote_dir, 'ANSIBLE_LOCAL_TEMP' => $.undercloud_local_dir) + $.ansible_extra_env_variables %>\n on-success: upload_amphora\n\n upload_amphora:\n action: tripleo.ansible-playbook\n input:\n inventory:\n undercloud:\n hosts:\n localhost:\n ansible_connection: local\n\n playbook: <% $.octavia_ansible_playbook %>\n remote_user: stack\n extra_env_variables: <% $.ansible_local_connection_variables %>\n extra_vars:\n 
os_password: <% $.overcloud_password %>\n os_username: <% $.overcloud_admin %>\n os_project_name: <% $.overcloud_project %>\n os_auth_url: <% $.overcloud_pub_auth_uri %>\n os_auth_type: \"password\"\n os_identity_api_version: \"3\"\n amp_image_name: <% $.amp_image_name %>\n amp_image_filename: <% $.amp_image_filename %>\n amp_image_tag: <% $.amp_image_tag %>\n amp_ssh_key_name: <% $.amp_ssh_key_name %>\n amp_ssh_key_path: <% $.amp_ssh_key_path %>\n amp_ssh_key_data: <% $.amp_ssh_key_data %>\n auth_username: <% $.auth_username %>\n auth_password: <% $.auth_password %>\n auth_project_name: <% $.auth_project_name %>\n on-success: config_octavia\n\n config_octavia:\n action: tripleo.ansible-playbook\n input:\n inventory:\n octavia_nodes:\n hosts: <% $.octavia_controller_ips.toDict($, {}) %>\n verbosity: 0\n playbook: <% $.octavia_ansible_playbook %>\n remote_user: tripleo-admin\n become: true\n become_user: root\n ssh_private_key: <% $.private_key %>\n ssh_common_args: '-o StrictHostKeyChecking=no'\n ssh_extra_args: '-o UserKnownHostsFile=/dev/null'\n extra_env_variables: <% $.ansible_extra_env_variables %>\n extra_vars:\n os_password: <% $.overcloud_password %>\n os_username: <% $.overcloud_admin %>\n os_project_name: <% $.overcloud_project %>\n os_auth_url: <% $.overcloud_pub_auth_uri %>\n os_auth_type: \"password\"\n os_identity_api_version: \"3\"\n amp_image_tag: <% $.amp_image_tag %>\n lb_mgmt_net_name: <% $.lb_mgmt_net_name %>\n lb_mgmt_subnet_name: <% $.lb_mgmt_subnet_name %>\n lb_sec_group_name: <% $.lb_sec_group_name %>\n lb_mgmt_subnet_cidr: <% $.lb_mgmt_subnet_cidr %>\n lb_mgmt_subnet_gateway: <% $.lb_mgmt_subnet_gateway %>\n lb_mgmt_subnet_pool_start: <% $.lb_mgmt_subnet_pool_start %>\n lb_mgmt_subnet_pool_end: <% $.lb_mgmt_subnet_pool_end %>\n ca_cert_path: <% $.ca_cert_path %>\n ca_private_key_path: <% $.ca_private_key_path %>\n ca_passphrase: <% $.ca_passphrase %>\n client_cert_path: <% $.client_cert_path %>\n generate_certs: <% $.generate_certs %>\n 
mgmt_port_dev: <% $.mgmt_port_dev %>\n auth_project_name: <% $.auth_project_name %>\n on-complete: purge_local_temp_dir\n purge_local_temp_dir:\n action: tripleo.files.remove_temp_dir path=<% $.undercloud_local_dir %>\n on-complete: purge_remote_temp_dir\n purge_remote_temp_dir:\n action: tripleo.files.remove_temp_dir path=<% $.undercloud_remote_dir %>\n", "name": "tripleo.octavia_post.v1.octavia_post_deploy", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "amp_image_name, amp_image_filename, amp_image_tag, amp_ssh_key_name, amp_ssh_key_path, amp_ssh_key_data, auth_username, auth_password, auth_project_name, lb_mgmt_net_name, lb_mgmt_subnet_name, lb_sec_group_name, lb_mgmt_subnet_cidr, lb_mgmt_subnet_gateway, lb_mgmt_subnet_pool_start, lb_mgmt_subnet_pool_end, generate_certs, octavia_ansible_playbook, overcloud_admin, ca_cert_path, ca_private_key_path, ca_passphrase, client_cert_path, mgmt_port_dev, overcloud_password, overcloud_project, overcloud_pub_auth_uri, ansible_extra_env_variables={u'ANSIBLE_SSH_RETRIES': u'3', u'ANSIBLE_HOST_KEY_CHECKING': u'False'}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "3d7cbcf8-450d-400b-9ace-f7e897d8ccfd"}, {"definition": "ceph-install:\n # allows for additional extra_vars via workflow input\n input:\n - ansible_playbook_verbosity: 0\n - ansible_skip_tags: 'package-install,with_pkg'\n - ansible_env_variables: {}\n - ansible_extra_env_variables:\n ANSIBLE_CONFIG: /usr/share/ceph-ansible/ansible.cfg\n ANSIBLE_ACTION_PLUGINS: /usr/share/ceph-ansible/plugins/actions/\n ANSIBLE_ROLES_PATH: /usr/share/ceph-ansible/roles/\n ANSIBLE_RETRY_FILES_ENABLED: 'False'\n ANSIBLE_LOG_PATH: /var/log/mistral/ceph-install-workflow.log\n ANSIBLE_LIBRARY: /usr/share/ceph-ansible/library/\n ANSIBLE_SSH_RETRIES: '3'\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n DEFAULT_FORKS: '25'\n - ceph_ansible_extra_vars: {}\n - ceph_ansible_playbook: 
/usr/share/ceph-ansible/site-docker.yml.sample\n - node_data_lookup: '{}'\n tags:\n - tripleo-common-managed\n tasks:\n collect_puppet_hieradata:\n on-success: check_hieradata\n publish:\n hieradata: <% env().get('role_merged_configs', {}).values().select($.keys()).flatten().select(regex('^ceph::profile::params::osds$').search($)).where($ != null).toSet() %>\n check_hieradata:\n on-success:\n - set_blacklisted_ips: <% not bool($.hieradata) %>\n - fail(msg=<% 'Ceph deployment stopped, puppet-ceph hieradata found. Convert it into ceph-ansible variables. {0}'.format($.hieradata) %>): <% bool($.hieradata) %>\n set_blacklisted_ips:\n publish:\n blacklisted_ips: <% env().get('blacklisted_ip_addresses', []) %>\n on-success: set_ip_lists\n set_ip_lists:\n publish:\n mgr_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_mgr_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n mon_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_mon_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n osd_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_osd_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n mds_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_mds_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n rgw_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_rgw_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n nfs_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_nfs_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n rbdmirror_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_rbdmirror_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n client_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_client_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n on-success: merge_ip_lists\n merge_ip_lists:\n publish:\n ips_list: <% 
($.mgr_ips + $.mon_ips + $.osd_ips + $.mds_ips + $.rgw_ips + $.nfs_ips + $.rbdmirror_ips + $.client_ips).toSet() %>\n on-success: enable_ssh_admin\n enable_ssh_admin:\n workflow: tripleo.access.v1.enable_ssh_admin\n input:\n ssh_servers: <% $.ips_list %>\n on-success: get_private_key\n get_private_key:\n action: tripleo.validations.get_privkey\n publish:\n private_key: <% task().result %>\n on-success: make_fetch_directory\n make_fetch_directory:\n action: tripleo.files.make_temp_dir\n publish:\n fetch_directory: <% task().result.path %>\n on-success: collect_nodes_uuid\n collect_nodes_uuid:\n action: tripleo.ansible-playbook\n input:\n inventory:\n overcloud:\n hosts: <% $.ips_list.toDict($, {}) %>\n remote_user: tripleo-admin\n become: true\n become_user: root\n verbosity: 0\n ssh_private_key: <% $.private_key %>\n #NOTE(gfidente): set ANSIBLE_CALLBACK_WHITELIST to empty string to avoid spurious output\n #in the json output. The publish: directive will in fact parse the output.\n extra_env_variables:\n ANSIBLE_CALLBACK_WHITELIST: ''\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n ANSIBLE_STDOUT_CALLBACK: 'json'\n playbook:\n - hosts: overcloud\n gather_facts: no\n tasks:\n - name: collect machine id\n command: dmidecode -s system-uuid\n publish:\n ansible_output: <% json_parse(task().result.stderr) %>\n on-success: set_ip_uuids\n set_ip_uuids:\n publish:\n ip_uuids: <% let(root => $.ansible_output.get('plays')[0].get('tasks')[0].get('hosts')) -> $.ips_list.toDict($, $root.get($).get('stdout')) %>\n on-success: parse_node_data_lookup\n parse_node_data_lookup:\n publish:\n json_node_data_lookup: <% json_parse($.node_data_lookup) %>\n on-success: map_node_data_lookup\n map_node_data_lookup:\n publish:\n ips_data: <% let(uuids => $.ip_uuids, root => $) -> $.ips_list.toDict($, $root.json_node_data_lookup.get($uuids.get($, \"NO-UUID-FOUND\"), {})) %>\n on-success: set_role_vars\n set_role_vars:\n publish:\n # NOTE(gfidente): collect role settings from all tht roles\n mgr_vars: 
<% env().get('role_merged_configs', {}).values().select($.get('ceph_mgr_ansible_vars', {})).aggregate($1 + $2) %>\n mon_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_mon_ansible_vars', {})).aggregate($1 + $2) %>\n osd_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_osd_ansible_vars', {})).aggregate($1 + $2) %>\n mds_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_mds_ansible_vars', {})).aggregate($1 + $2) %>\n rgw_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_rgw_ansible_vars', {})).aggregate($1 + $2) %>\n nfs_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_nfs_ansible_vars', {})).aggregate($1 + $2) %>\n rbdmirror_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_rbdmirror_ansible_vars', {})).aggregate($1 + $2) %>\n client_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_client_ansible_vars', {})).aggregate($1 + $2) %>\n on-success: build_extra_vars\n build_extra_vars:\n publish:\n # NOTE(gfidente): merge vars from all ansible roles\n extra_vars: <% {'fetch_directory'=> $.fetch_directory} + $.mgr_vars + $.mon_vars + $.osd_vars + $.mds_vars + $.rgw_vars + $.nfs_vars + $.client_vars + $.rbdmirror_vars + $.ceph_ansible_extra_vars %>\n on-success: ceph_install\n ceph_install:\n with-items: playbook in <% list($.ceph_ansible_playbook).flatten() %>\n concurrency: 1\n action: tripleo.ansible-playbook\n input:\n inventory:\n mgrs:\n hosts: <% let(root => $) -> $.mgr_ips.toDict($, $root.ips_data.get($, {})) %>\n mons:\n hosts: <% let(root => $) -> $.mon_ips.toDict($, $root.ips_data.get($, {})) %>\n osds:\n hosts: <% let(root => $) -> $.osd_ips.toDict($, $root.ips_data.get($, {})) %>\n mdss:\n hosts: <% let(root => $) -> $.mds_ips.toDict($, $root.ips_data.get($, {})) %>\n rgws:\n hosts: <% let(root => $) -> $.rgw_ips.toDict($, $root.ips_data.get($, {})) %>\n nfss:\n hosts: <% let(root => $) -> 
$.nfs_ips.toDict($, $root.ips_data.get($, {})) %>\n rbdmirrors:\n hosts: <% let(root => $) -> $.rbdmirror_ips.toDict($, $root.ips_data.get($, {})) %>\n clients:\n hosts: <% let(root => $) -> $.client_ips.toDict($, $root.ips_data.get($, {})) %>\n all:\n vars: <% $.extra_vars %>\n playbook: <% $.playbook %>\n remote_user: tripleo-admin\n become: true\n become_user: root\n verbosity: <% $.ansible_playbook_verbosity %>\n ssh_private_key: <% $.private_key %>\n skip_tags: <% $.ansible_skip_tags %>\n extra_env_variables: <% $.ansible_extra_env_variables.mergeWith($.ansible_env_variables) %>\n extra_vars:\n ireallymeanit: 'yes'\n publish:\n output: <% task().result %>\n on-complete: purge_fetch_directory\n purge_fetch_directory:\n action: tripleo.files.remove_temp_dir path=<% $.fetch_directory %>\n", "name": "tripleo.storage.v1.ceph-install", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "ansible_playbook_verbosity=0, ansible_skip_tags=package-install,with_pkg, ansible_env_variables={}, ansible_extra_env_variables={u'ANSIBLE_LIBRARY': u'/usr/share/ceph-ansible/library/', u'ANSIBLE_RETRY_FILES_ENABLED': u'False', u'ANSIBLE_CONFIG': u'/usr/share/ceph-ansible/ansible.cfg', u'ANSIBLE_LOG_PATH': u'/var/log/mistral/ceph-install-workflow.log', u'DEFAULT_FORKS': u'25', u'ANSIBLE_ROLES_PATH': u'/usr/share/ceph-ansible/roles/', u'ANSIBLE_ACTION_PLUGINS': u'/usr/share/ceph-ansible/plugins/actions/', u'ANSIBLE_SSH_RETRIES': u'3', u'ANSIBLE_HOST_KEY_CHECKING': u'False'}, ceph_ansible_extra_vars={}, ceph_ansible_playbook=/usr/share/ceph-ansible/site-docker.yml.sample, node_data_lookup={}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "41900dcf-1004-4e3a-9177-5bf8a9a5016b"}, {"definition": "provide_manageable_nodes:\n description: Provide all nodes in a 'manageable' state.\n\n input:\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n get_manageable_nodes:\n 
action: ironic.node_list maintenance=False associated=False\n on-success: provide_manageable\n on-error: set_status_failed_get_manageable_nodes\n publish:\n managed_nodes: <% task().result.where($.provision_state = 'manageable').uuid %>\n\n set_status_failed_get_manageable_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_manageable_nodes).result %>\n\n provide_manageable:\n on-success: send_message\n workflow: tripleo.baremetal.v1.provide\n input:\n node_uuids: <% $.managed_nodes %>\n queue_name: <% $.queue_name %>\n publish:\n status: SUCCESS\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.provide_manageable_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1.provide_manageable_nodes", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "4d23233b-6bae-4fec-b3f5-f113ac48cb13"}, {"definition": "delete_node:\n description: deletes given overcloud nodes and updates the stack\n\n input:\n - container\n - nodes\n - timeout: 240\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n delete_node:\n action: tripleo.scale.delete_node nodes=<% $.nodes %> timeout=<% $.timeout %> container=<% $.container %>\n on-success: wait_for_stack_in_progress\n on-error: set_delete_node_failed\n\n set_delete_node_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(delete_node).result %>\n\n wait_for_stack_in_progress:\n workflow: tripleo.stack.v1.wait_for_stack_in_progress stack=<% $.container %>\n on-success: wait_for_stack_complete\n on-error: 
wait_for_stack_in_progress_failed\n\n wait_for_stack_in_progress_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(wait_for_stack_in_progress).result %>\n\n wait_for_stack_complete:\n workflow: tripleo.stack.v1.wait_for_stack_complete_or_failed stack=<% $.container %>\n on-success: send_message\n on-error: wait_for_stack_complete_failed\n\n wait_for_stack_complete_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(wait_for_stack_complete).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.scale.v1.delete_node\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.scale.v1.delete_node", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "container, nodes, timeout=240, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "5253f511-e4a1-4b25-bfdb-403a2cd65098"}, {"definition": "create_raid_configuration:\n description: Create and apply RAID configuration for given nodes\n input:\n - node_uuids\n - configuration\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_configuration:\n with-items: node_uuid in <% $.node_uuids %>\n action: ironic.node_set_target_raid_config node_ident=<% $.node_uuid %> target_raid_config=<% $.configuration %>\n on-success: apply_configuration\n on-error: set_configuration_failed\n\n set_configuration_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_configuration).result %>\n\n apply_configuration:\n with-items: node_uuid in <% $.node_uuids %>\n workflow: tripleo.baremetal.v1.manual_cleaning\n input:\n node_uuid: <% $.node_uuid %>\n clean_steps:\n - 
interface: raid\n step: delete_configuration\n - interface: raid\n step: create_configuration\n timeout: 1800 # building RAID should be faster than general cleaning\n retry_count: 180\n retry_delay: 10\n on-success: send_message\n on-error: apply_configuration_failed\n publish:\n message: <% task().result %>\n status: SUCCESS\n\n apply_configuration_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(apply_configuration).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.create_raid_configuration\n payload:\n status: <% $.get('status', 'FAILED') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1.create_raid_configuration", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "node_uuids, configuration, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "6191dd6e-d663-4eea-aa3a-859ec90c6aaf"}, {"definition": "discover_nodes:\n description: Run nodes discovery over the given IP range\n\n input:\n - ip_addresses\n - credentials\n - ports: [623]\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n get_all_nodes:\n action: ironic.node_list\n input:\n fields: [\"uuid\", \"driver\", \"driver_info\"]\n limit: 0\n on-success: get_candidate_nodes\n on-error: get_all_nodes_failed\n publish:\n existing_nodes: <% task().result %>\n\n get_all_nodes_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_all_nodes).result %>\n\n get_candidate_nodes:\n action: tripleo.baremetal.get_candidate_nodes\n input:\n ip_addresses: <% $.ip_addresses %>\n credentials: <% $.credentials %>\n ports: <% $.ports %>\n existing_nodes: <% $.existing_nodes %>\n on-success: 
probe_nodes\n on-error: get_candidate_nodes_failed\n publish:\n candidates: <% task().result %>\n\n get_candidate_nodes_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_candidate_nodes).result %>\n\n probe_nodes:\n action: tripleo.baremetal.probe_node\n on-success: send_message\n on-error: probe_nodes_failed\n input:\n ip: <% $.node.ip %>\n port: <% $.node.port %>\n username: <% $.node.username %>\n password: <% $.node.password %>\n with-items:\n - node in <% $.candidates %>\n publish:\n nodes_json: <% task().result.where($ != null) %>\n\n probe_nodes_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(probe_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.discover_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n nodes_json: <% $.get('nodes_json', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1.discover_nodes", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "ip_addresses, credentials, ports=[623], queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "635aa9ba-a5f9-44e3-bcde-4128bf213de4"}, {"definition": "register_or_update:\n description: Take nodes JSON and create nodes in a \"manageable\" state\n\n input:\n - nodes_json\n - remove: False\n - queue_name: tripleo\n - kernel_name: null\n - ramdisk_name: null\n - instance_boot_option: local\n - initial_state: manageable\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n validate_input:\n workflow: tripleo.baremetal.v1.validate_nodes\n on-success: register_or_update_nodes\n on-error: validation_failed\n input:\n nodes_json: <% $.nodes_json %>\n queue_name: <% $.queue_name %>\n\n 
validation_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(validate_input).result %>\n registered_nodes: []\n\n register_or_update_nodes:\n action: tripleo.baremetal.register_or_update_nodes\n on-success:\n - set_nodes_managed: <% $.initial_state != \"enroll\" %>\n - send_message: <% $.initial_state = \"enroll\" %>\n on-error: set_status_failed_register_or_update_nodes\n input:\n nodes_json: <% $.nodes_json %>\n remove: <% $.remove %>\n kernel_name: <% $.kernel_name %>\n ramdisk_name: <% $.ramdisk_name %>\n instance_boot_option: <% $.instance_boot_option %>\n publish:\n registered_nodes: <% task().result %>\n new_nodes: <% task().result.where($.provision_state = 'enroll') %>\n\n set_status_failed_register_or_update_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(register_or_update_nodes).result %>\n registered_nodes: []\n\n set_nodes_managed:\n on-success:\n - set_nodes_available: <% $.initial_state = \"available\" %>\n - send_message: <% $.initial_state != \"available\" %>\n on-error: set_status_failed_nodes_managed\n workflow: tripleo.baremetal.v1.manage\n input:\n node_uuids: <% $.new_nodes.uuid %>\n queue_name: <% $.queue_name %>\n publish:\n status: SUCCESS\n message: <% $.new_nodes.len() %> node(s) successfully moved to the \"manageable\" state.\n\n set_status_failed_nodes_managed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_nodes_managed).result %>\n\n set_nodes_available:\n on-success: send_message\n on-error: set_status_failed_nodes_available\n workflow: tripleo.baremetal.v1.provide node_uuids=<% $.new_nodes.uuid %> queue_name=<% $.queue_name %>\n publish:\n status: SUCCESS\n message: <% $.new_nodes.len() %> node(s) successfully moved to the \"available\" state.\n\n set_status_failed_nodes_available:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_nodes_available).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: 
count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.register_or_update\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n registered_nodes: <% $.registered_nodes or [] %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1.register_or_update", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "nodes_json, remove=False, queue_name=tripleo, kernel_name=None, ramdisk_name=None, instance_boot_option=local, initial_state=manageable", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "76471f08-b4db-4f25-a1ce-ba954a0fca5d"}, {"definition": "cellv2_discovery:\n description: Run cell_v2 host discovery\n\n input:\n - node_uuids\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n cell_v2_discover_hosts:\n on-success: wait_for_nova_resources\n on-error: cell_v2_discover_hosts_failed\n action: tripleo.baremetal.cell_v2_discover_hosts\n\n cell_v2_discover_hosts_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(cell_v2_discover_hosts).result %>\n\n wait_for_nova_resources:\n on-success: send_message\n on-error: wait_for_nova_resources_failed\n with-items: node_uuid in <% $.node_uuids %>\n action: nova.hypervisors_find hypervisor_hostname=<% $.node_uuid %>\n\n wait_for_nova_resources_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(wait_for_nova_resources).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.cellv2_discovery\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": 
"tripleo.baremetal.v1.cellv2_discovery", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "node_uuids, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "77b52b1d-d67d-4f18-8b52-40b26a632a9c"}, {"definition": "configure_manageable_nodes:\n description: Update the boot configuration of all nodes in 'manageable' state.\n\n input:\n - queue_name: tripleo\n - kernel_name: 'bm-deploy-kernel'\n - ramdisk_name: 'bm-deploy-ramdisk'\n - instance_boot_option: null\n - root_device: null\n - root_device_minimum_size: 4\n - overwrite_root_device_hints: False\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n get_manageable_nodes:\n action: ironic.node_list maintenance=False associated=False\n on-success: configure_manageable\n on-error: set_status_failed_get_manageable_nodes\n publish:\n managed_nodes: <% task().result.where($.provision_state = 'manageable').uuid %>\n\n configure_manageable:\n on-success: send_message\n on-error: set_status_failed_configure_manageable\n workflow: tripleo.baremetal.v1.configure\n input:\n node_uuids: <% $.managed_nodes %>\n queue_name: <% $.queue_name %>\n kernel_name: <% $.kernel_name %>\n ramdisk_name: <% $.ramdisk_name %>\n instance_boot_option: <% $.instance_boot_option %>\n root_device: <% $.root_device %>\n root_device_minimum_size: <% $.root_device_minimum_size %>\n overwrite_root_device_hints: <% $.overwrite_root_device_hints %>\n publish:\n message: 'Manageable nodes configured successfully.'\n\n set_status_failed_configure_manageable:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(configure_manageable).result %>\n\n set_status_failed_get_manageable_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_manageable_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n 
type: tripleo.baremetal.v1.configure_manageable_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1.configure_manageable_nodes", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "queue_name=tripleo, kernel_name=bm-deploy-kernel, ramdisk_name=bm-deploy-ramdisk, instance_boot_option=None, root_device=None, root_device_minimum_size=4, overwrite_root_device_hints=False", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "837c5bb4-d7dc-4b83-bc59-5080a38303ef"}, {"definition": "manage:\n description: Set a list of nodes to 'manageable' state\n\n input:\n - node_uuids\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_nodes_manageable:\n on-success: send_message\n on-error: set_status_failed_nodes_manageable\n with-items: uuid in <% $.node_uuids %>\n workflow: tripleo.baremetal.v1.set_node_state\n input:\n node_uuid: <% $.uuid %>\n state_action: 'manage'\n target_state: 'manageable'\n error_states:\n # node going back to enroll designates power credentials failure\n - 'enroll'\n - 'error'\n\n set_status_failed_nodes_manageable:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_nodes_manageable).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.manage\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1.manage", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "node_uuids, 
queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "b0c3c7c1-f5dd-448e-919e-6c5a1afb4d49"}, {"definition": "provide:\n description: Take a list of nodes and move them to \"available\"\n\n input:\n - node_uuids\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_nodes_available:\n on-success: cell_v2_discover_hosts\n on-error: set_status_failed_nodes_available\n with-items: uuid in <% $.node_uuids %>\n workflow: tripleo.baremetal.v1.set_node_state\n input:\n node_uuid: <% $.uuid %>\n queue_name: <% $.queue_name %>\n state_action: 'provide'\n target_state: 'available'\n\n set_status_failed_nodes_available:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_nodes_available).result %>\n\n cell_v2_discover_hosts:\n on-success: try_power_off\n on-error: cell_v2_discover_hosts_failed\n workflow: tripleo.baremetal.v1.cellv2_discovery\n input:\n node_uuids: <% $.node_uuids %>\n queue_name: <% $.queue_name %>\n timeout: 900 #15 minutes\n retry:\n delay: 30\n count: 30\n\n cell_v2_discover_hosts_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(cell_v2_discover_hosts).result %>\n\n try_power_off:\n on-success: send_message\n on-error: power_off_failed\n with-items: uuid in <% $.node_uuids %>\n workflow: tripleo.baremetal.v1.set_power_state\n input:\n node_uuid: <% $.uuid %>\n queue_name: <% $.queue_name %>\n state_action: 'off'\n target_state: 'power off'\n publish:\n status: SUCCESS\n message: <% $.node_uuids.len() %> node(s) successfully moved to the \"available\" state.\n\n power_off_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(try_power_off).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.provide\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% 
execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1.provide", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "node_uuids, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "c42046d5-9088-4b75-8f77-272ee67697a3"}, {"definition": "introspect_manageable_nodes:\n description: Introspect all nodes in a 'manageable' state.\n\n input:\n - run_validations: False\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n get_manageable_nodes:\n action: ironic.node_list maintenance=False associated=False\n on-success: validate_nodes\n on-error: set_status_failed_get_manageable_nodes\n publish:\n managed_nodes: <% task().result.where($.provision_state = 'manageable').uuid %>\n\n set_status_failed_get_manageable_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_manageable_nodes).result %>\n\n validate_nodes:\n on-success:\n - introspect_manageable: <% $.managed_nodes.len() > 0 %>\n - set_status_failed_no_nodes: <% $.managed_nodes.len() = 0 %>\n\n set_status_failed_no_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: No manageable nodes to introspect. 
Check node states and maintenance.\n\n introspect_manageable:\n on-success: send_message\n on-error: set_status_introspect_manageable\n workflow: tripleo.baremetal.v1.introspect\n input:\n node_uuids: <% $.managed_nodes %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n introspected_nodes: <% task().result.introspected_nodes %>\n\n set_status_introspect_manageable:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(introspect_manageable).result %>\n introspected_nodes: []\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.introspect_manageable_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n introspected_nodes: <% $.get('introspected_nodes', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1.introspect_manageable_nodes", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "run_validations=False, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "c8d18988-0302-4fc1-839a-8c7e178433de"}, {"definition": "validate_nodes:\n description: Validate nodes JSON\n\n input:\n - nodes_json\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n validate_nodes:\n action: tripleo.baremetal.validate_nodes\n on-success: send_message\n on-error: validation_failed\n input:\n nodes_json: <% $.nodes_json %>\n\n validation_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(validate_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.validate_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n 
message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1.validate_nodes", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "nodes_json, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "d12e4f37-5a66-4414-bfba-ef5cd0efa062"}, {"definition": "_introspect:\n description: >\n An internal workflow. The tripleo.baremetal.v1.introspect workflow\n should be used for introspection.\n\n input:\n - node_uuid\n - timeout\n - queue_name\n\n output:\n result: <% task(start_introspection).result %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n start_introspection:\n action: baremetal_introspection.introspect uuid=<% $.node_uuid %>\n on-success: wait_for_introspection_to_finish\n on-error: set_status_failed_start_introspection\n\n set_status_failed_start_introspection:\n publish:\n status: FAILED\n message: <% task(start_introspection).result %>\n introspected_nodes: []\n on-success: send_message\n\n wait_for_introspection_to_finish:\n action: baremetal_introspection.wait_for_finish\n input:\n uuids: <% [$.node_uuid] %>\n # The interval is 10 seconds, so divide to make the overall timeout\n # in seconds correct.\n max_retries: <% $.timeout / 10 %>\n retry_interval: 10\n publish:\n introspected_node: <% task().result.values().first() %>\n status: <% bool(task().result.values().first().error) and \"FAILED\" or \"SUCCESS\" %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-success: wait_for_introspection_to_finish_success\n on-error: wait_for_introspection_to_finish_error\n\n wait_for_introspection_to_finish_success:\n publish:\n message: <% \"Introspection of node {0} completed. Status:{1}. 
Errors:{2}\".format($.introspected_node.uuid, $.status, $.introspected_node.error) %>\n on-success: send_message\n\n wait_for_introspection_to_finish_error:\n publish:\n message: <% \"Introspection of node {0} timed out.\".format($.node_uuid) %>\n on-success: send_message\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1._introspect\n payload:\n status: <% $.status %>\n message: <% $.message %>\n introspected_node: <% $.get('introspected_node') %>\n node_uuid: <% $.node_uuid %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1._introspect", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "node_uuid, timeout, queue_name", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "d150f298-54c6-4e1a-9931-8dcfdad79de8"}, {"definition": "configure:\n description: Take a list of manageable nodes and update their boot configuration.\n\n input:\n - node_uuids\n - queue_name: tripleo\n - kernel_name: bm-deploy-kernel\n - ramdisk_name: bm-deploy-ramdisk\n - instance_boot_option: null\n - root_device: null\n - root_device_minimum_size: 4\n - overwrite_root_device_hints: False\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n configure_boot:\n on-success: configure_root_device\n on-error: set_status_failed_configure_boot\n with-items: node_uuid in <% $.node_uuids %>\n action: tripleo.baremetal.configure_boot node_uuid=<% $.node_uuid %> kernel_name=<% $.kernel_name %> ramdisk_name=<% $.ramdisk_name %> instance_boot_option=<% $.instance_boot_option %>\n\n configure_root_device:\n on-success: send_message\n on-error: set_status_failed_configure_root_device\n with-items: node_uuid in <% $.node_uuids %>\n action: tripleo.baremetal.configure_root_device node_uuid=<% $.node_uuid %> root_device=<% $.root_device 
%> minimum_size=<% $.root_device_minimum_size %> overwrite=<% $.overwrite_root_device_hints %>\n publish:\n status: SUCCESS\n message: 'Successfully configured the nodes.'\n\n set_status_failed_configure_boot:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(configure_boot).result %>\n\n set_status_failed_configure_root_device:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(configure_root_device).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.configure\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1.configure", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "node_uuids, queue_name=tripleo, kernel_name=bm-deploy-kernel, ramdisk_name=bm-deploy-ramdisk, instance_boot_option=None, root_device=None, root_device_minimum_size=4, overwrite_root_device_hints=False", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "d31ef193-2ba7-440b-b63e-da41b4b57a21"}, {"definition": "manual_cleaning:\n input:\n - node_uuid\n - clean_steps\n - timeout: 7200 # 2 hours (cleaning can take really long)\n - retry_delay: 10\n - retry_count: 720\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_provision_state:\n on-success: wait_for_provision_state\n on-error: set_provision_state_failed\n action: ironic.node_set_provision_state node_uuid=<% $.node_uuid %> state='clean' cleansteps=<% $.clean_steps %>\n\n set_provision_state_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_provision_state).result %>\n\n wait_for_provision_state:\n on-success: send_message\n action: ironic.node_get node_id=<% 
$.node_uuid %>\n timeout: <% $.timeout %>\n retry:\n delay: <% $.retry_delay %>\n count: <% $.retry_count %>\n continue-on: <% task().result.provision_state != 'manageable' %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.manual_cleaning\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1.manual_cleaning", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "node_uuid, clean_steps, timeout=7200, retry_delay=10, retry_count=720, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "e2e4f9e6-f303-4262-8956-f5157c5869bc"}, {"definition": "introspect:\n description: >\n Take a list of nodes and move them through introspection.\n\n By default each node will attempt introspection up to 3 times (two\n retries plus the initial attempt) if it fails. This behaviour can be\n modified by changing the max_retry_attempts input.\n\n The workflow will assume the node has timed out after 20 minutes (1200\n seconds). 
This can be changed by passing the node_timeout input in\n seconds.\n\n input:\n - node_uuids\n - run_validations: False\n - queue_name: tripleo\n - concurrency: 20\n - max_retry_attempts: 2\n - node_timeout: 1200\n\n tags:\n - tripleo-common-managed\n\n task-defaults:\n on-error: unhandled_error\n\n tasks:\n initialize:\n publish:\n introspection_attempt: 1\n on-complete:\n - run_validations: <% $.run_validations %>\n - introspect_nodes: <% not $.run_validations %>\n\n run_validations:\n workflow: tripleo.validations.v1.run_groups\n input:\n group_names:\n - 'pre-introspection'\n queue_name: <% $.queue_name %>\n on-success: introspect_nodes\n on-error: set_validations_failed\n\n set_validations_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(run_validations).result %>\n\n introspect_nodes:\n with-items: uuid in <% $.node_uuids %>\n concurrency: <% $.concurrency %>\n workflow: _introspect\n input:\n node_uuid: <% $.uuid %>\n queue_name: <% $.queue_name %>\n timeout: <% $.node_timeout %>\n # on-error is triggered if one or more nodes failed introspection. We\n # still go to get_introspection_status as it will collect the result\n # for each node. Unless we hit the retry limit.\n on-error:\n - get_introspection_status: <% $.introspection_attempt <= $.max_retry_attempts %>\n - max_retry_attempts_reached: <% $.introspection_attempt > $.max_retry_attempts %>\n on-success: get_introspection_status\n\n get_introspection_status:\n with-items: uuid in <% $.node_uuids %>\n action: baremetal_introspection.get_status\n input:\n uuid: <% $.uuid %>\n publish:\n introspected_nodes: <% task().result.toDict($.uuid, $) %>\n # Currently there is no way for us to ignore user introspection\n # aborts. 
This means we will retry aborted nodes until the Ironic API\n # gives us more details (error code or a boolean to show aborts etc.)\n # If a node hasn't finished, we consider it to be failed.\n # TODO(d0ugal): When possible, don't retry introspection of nodes\n # that a user manually aborted.\n failed_introspection: <% task().result.where($.finished = true and $.error != null).select($.uuid) + task().result.where($.finished = false).select($.uuid) %>\n publish-on-error:\n # If a node fails to start introspection, getting the status can fail.\n # When that happens, the result is a string and the nodes need to be\n # filtered out.\n introspected_nodes: <% task().result.where(isDict($)).toDict($.uuid, $) %>\n # If there was an error, the exception string we get doesn't give us\n # the UUID. So we use a set difference to find the UUIDs missing in\n # the results. These are then added to the failed nodes.\n failed_introspection: <% ($.node_uuids.toSet() - task().result.where(isDict($)).select($.uuid).toSet()) + task().result.where(isDict($)).where($.finished = true and $.error != null).toSet() + task().result.where(isDict($)).where($.finished = false).toSet() %>\n on-error: increase_attempt_counter\n on-success:\n - successful_introspection: <% $.failed_introspection.len() = 0 %>\n - increase_attempt_counter: <% $.failed_introspection.len() > 0 %>\n\n increase_attempt_counter:\n publish:\n introspection_attempt: <% $.introspection_attempt + 1 %>\n on-complete:\n retry_failed_nodes\n\n retry_failed_nodes:\n publish:\n status: RUNNING\n message: <% 'Retrying {0} nodes that failed introspection. 
Attempt {1} of {2} '.format($.failed_introspection.len(), $.introspection_attempt, $.max_retry_attempts + 1) %>\n # We are about to retry, update the tracking stats.\n node_uuids: <% $.failed_introspection %>\n on-success:\n - send_message\n - introspect_nodes\n\n max_retry_attempts_reached:\n publish:\n status: FAILED\n message: <% 'Retry limit reached with {0} nodes still failing introspection'.format($.failed_introspection.len()) %>\n on-complete: send_message\n\n successful_introspection:\n publish:\n status: SUCCESS\n message: Successfully introspected <% $.introspected_nodes.len() %> node(s).\n on-complete: send_message\n\n unhandled_error:\n publish:\n status: FAILED\n message: \"Unhandled workflow error\"\n on-complete: send_message\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.introspect\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n introspected_nodes: <% $.get('introspected_nodes', []) %>\n failed_introspection: <% $.get('failed_introspection', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1.introspect", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "node_uuids, run_validations=False, queue_name=tripleo, concurrency=20, max_retry_attempts=2, node_timeout=1200", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "e58213e1-54a5-4138-b254-6a62bbaebbb8"}, {"definition": "tag_nodes:\n description: Runs the tag_node workflow in a loop\n input:\n - tag_node_uuids\n - untag_node_uuids\n - role\n - plan: overcloud\n - queue_name: tripleo\n\n task-defaults:\n on-error: send_message\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n tag_nodes:\n with-items: node_uuid in <% $.tag_node_uuids %>\n workflow: 
tripleo.baremetal.v1.tag_node\n input:\n node_uuid: <% $.node_uuid %>\n queue_name: <% $.queue_name %>\n role: <% $.role %>\n concurrency: 1\n on-success: untag_nodes\n\n untag_nodes:\n with-items: node_uuid in <% $.untag_node_uuids %>\n workflow: tripleo.baremetal.v1.tag_node\n input:\n node_uuid: <% $.node_uuid %>\n queue_name: <% $.queue_name %>\n concurrency: 1\n on-success: update_role_parameters\n\n update_role_parameters:\n on-success: send_message\n action: tripleo.parameters.update_role role=<% $.role %> container=<% $.plan %>\n publish:\n message: <% task().result %>\n status: SUCCESS\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.tag_nodes\n payload:\n status: <% $.get('status', 'FAILED') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1.tag_nodes", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:44", "namespace": "", "updated_at": null, "scope": "private", "input": "tag_node_uuids, untag_node_uuids, role, plan=overcloud, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "e93c0e92-16c2-43e4-b4bb-53bbd589ede2"}, {"definition": "validate_networks_input:\n description: >\n Validate that required fields are present.\n\n input:\n - networks\n - queue_name: tripleo\n\n output:\n result: <% task(validate_network_names).result %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n validate_network_names:\n publish:\n network_name_present: <% $.networks.all($.containsKey('name')) %>\n on-success:\n - set_status_success: <% $.network_name_present = true %>\n - set_status_error: <% $.network_name_present = false %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: 
<% task(validate_network_names).result %>\n\n set_status_error:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: \"One or more entries did not contain the required field 'name'\"\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.networks.v1.validate_networks_input\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.networks.v1.validate_networks_input", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:45", "namespace": "", "updated_at": null, "scope": "private", "input": "networks, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "58019ccc-0aa0-4424-8596-0d90fa253367"}, {"definition": "rotate_fernet_keys:\n\n input:\n - container\n - queue_name: tripleo\n - ansible_extra_env_variables:\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n rotate_keys:\n action: tripleo.parameters.rotate_fernet_keys container=<% $.container %>\n on-success: deploy_ssh_key\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n deploy_ssh_key:\n workflow: tripleo.validations.v1.copy_ssh_key\n on-success: get_privkey\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_privkey:\n action: tripleo.validations.get_privkey\n on-success: deploy_keys\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n deploy_keys:\n action: tripleo.ansible-playbook\n input:\n hosts: keystone\n inventory: /usr/bin/tripleo-ansible-inventory\n ssh_private_key: <% task(get_privkey).result %>\n extra_env_variables: <% $.ansible_extra_env_variables + dict(TRIPLEO_PLAN_NAME=>$.container) %>\n verbosity: 0\n remote_user: heat-admin\n become: true\n extra_vars:\n fernet_keys: <% 
task(rotate_keys).result %>\n use_openstack_credentials: true\n playbook: /usr/share/tripleo-common/playbooks/rotate-keys.yaml\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.fernet_keys.v1.rotate_fernet_keys\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.fernet_keys.v1.rotate_fernet_keys", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:45", "namespace": "", "updated_at": null, "scope": "private", "input": "container, queue_name=tripleo, ansible_extra_env_variables={u'ANSIBLE_HOST_KEY_CHECKING': u'False'}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "77970bb0-6656-4647-9fa9-1e6f497e23c8"}, {"definition": "rebalance:\n tags:\n - tripleo-common-managed\n\n tasks:\n get_private_key:\n action: tripleo.validations.get_privkey\n on-success: deploy_rings\n\n deploy_rings:\n action: tripleo.ansible-playbook\n publish:\n output: <% task().result %>\n input:\n ssh_private_key: <% task(get_private_key).result %>\n ssh_common_args: '-o StrictHostKeyChecking=no'\n ssh_extra_args: '-o UserKnownHostsFile=/dev/null'\n verbosity: 1\n remote_user: heat-admin\n become: true\n become_user: root\n playbook: /usr/share/tripleo-common/playbooks/swift_ring_rebalance.yaml\n inventory: /usr/bin/tripleo-ansible-inventory\n use_openstack_credentials: true\n", "name": "tripleo.swift_ring.v1.rebalance", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:45", "namespace": "", "updated_at": null, "scope": "private", "input": "", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "e2e2469a-6d71-4bdd-a170-23c165454357"}, {"definition": "update_networks:\n 
description: >\n Takes data in networks parameter in json format, validates its contents,\n and persists them in network_data.yaml. After successful update,\n templates are regenerated.\n\n input:\n - container: overcloud\n - networks\n - network_data_file: 'network_data.yaml'\n - queue_name: tripleo\n\n output:\n network_data: <% $.network_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n validate_input:\n description: >\n validate the format of input (input includes required fields for\n each network)\n workflow: validate_networks_input\n input:\n networks: <% $.networks %>\n on-success: validate_network_files\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n validate_network_files:\n description: >\n validate that Network names exist in Swift container\n workflow: tripleo.plan_management.v1.validate_network_files\n input:\n container: <% $.container %>\n network_data: <% $.networks %>\n queue_name: <% $.queue_name %>\n publish:\n network_data: <% task().network_data %>\n on-success: get_available_networks\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n get_available_networks:\n workflow: tripleo.plan_management.v1.list_available_networks\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n publish:\n available_networks: <% task().result.available_networks %>\n on-success: get_current_networks\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n get_current_networks:\n workflow: tripleo.plan_management.v1.get_network_data\n input:\n container: <% $.container %>\n network_data_file: <% $.network_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n current_networks: <% task().result.network_data %>\n on-success: update_network_data\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n update_network_data:\n description: >\n Combine (or replace) the 
network data\n action: tripleo.plan.update_networks\n input:\n networks: <% $.available_networks %>\n current_networks: <% $.current_networks %>\n remove_all: false\n publish:\n new_network_data: <% task().result.network_data %>\n on-success: update_network_data_in_swift\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n update_network_data_in_swift:\n description: >\n update network_data.yaml object in Swift with data from workflow input\n action: swift.put_object\n input:\n container: <% $.container %>\n obj: <% $.network_data_file %>\n contents: <% yaml_dump($.new_network_data) %>\n on-success: regenerate_templates\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n regenerate_templates:\n action: tripleo.templates.process container=<% $.container %>\n on-success: get_networks\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n get_networks:\n description: >\n run GetNetworksAction to get updated contents of network_data.yaml and\n provide it as output\n workflow: tripleo.plan_management.v1.get_network_data\n input:\n container: <% $.container %>\n network_data_file: <% $.network_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n network_data: <% task().network_data %>\n on-success: set_status_success\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(get_networks).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.networks.v1.update_networks\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.networks.v1.update_networks", "tags": ["tripleo-common-managed"], "created_at": 
"2018-06-26 04:26:45", "namespace": "", "updated_at": null, "scope": "private", "input": "container=overcloud, networks, network_data_file=network_data.yaml, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "ee13ce45-9163-4455-a844-70c174d81b78"}, {"definition": "package_update_plan:\n description: Take a container and perform a package update with possible breakpoints\n\n input:\n - container\n - container_registry\n - ceph_ansible_playbook\n - timeout: 240\n - queue_name: tripleo\n - skip_deploy_identifier: False\n - config_dir: '/tmp/'\n\n tags:\n - tripleo-common-managed\n\n tasks:\n update:\n action: tripleo.package_update.update_stack\n input:\n timeout: <% $.timeout %>\n container: <% $.container %>\n container_registry: <% $.container_registry %>\n ceph_ansible_playbook: <% $.ceph_ansible_playbook %>\n on-success: clean_plan\n on-error: set_update_failed\n\n clean_plan:\n action: tripleo.plan.update_plan_environment\n input:\n container: <% $.container %>\n parameter: CephAnsiblePlaybook\n env_key: parameter_defaults\n delete: true\n on-success: send_message\n on-error: set_update_failed\n\n\n set_update_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(update).result %>\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.package_update.v1.package_update_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.package_update.v1.package_update_plan", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:46", "namespace": "", "updated_at": null, "scope": "private", "input": "container, container_registry, ceph_ansible_playbook, timeout=240, queue_name=tripleo, skip_deploy_identifier=False, config_dir=/tmp/", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": 
"0ed729aa-3117-47bd-ac1d-924388300563"}, {"definition": "skydive_install:\n # allows for additional extra_vars via workflow input\n input:\n - ansible_playbook_verbosity: 0\n - ansible_extra_env_variables:\n ANSIBLE_ROLES_PATH: /usr/share/skydive-ansible/roles/\n ANSIBLE_RETRY_FILES_ENABLED: 'False'\n ANSIBLE_LOG_PATH: /var/log/mistral/skydive-install-workflow.log\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n - skydive_ansible_extra_vars: {}\n - skydive_ansible_playbook: /usr/share/skydive-ansible/playbook.yml.sample\n tags:\n - tripleo-common-managed\n tasks:\n set_blacklisted_ips:\n publish:\n blacklisted_ips: <% env().get('blacklisted_ip_addresses', []) %>\n on-success: set_ip_lists\n set_ip_lists:\n publish:\n agent_ips: <% let(root => $) -> env().get('service_ips', {}).get('skydive_agent_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n analyzer_ips: <% let(root => $) -> env().get('service_ips', {}).get('skydive_analyzer_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n on-success: enable_ssh_admin\n enable_ssh_admin:\n workflow: tripleo.access.v1.enable_ssh_admin\n input:\n ssh_servers: <% ($.agent_ips + $.analyzer_ips).toSet() %>\n on-success: get_private_key\n get_private_key:\n action: tripleo.validations.get_privkey\n publish:\n private_key: <% task().result %>\n on-success: set_fork_count\n set_fork_count:\n publish: # unique list of all IPs: make each list a set, take unions and count\n fork_count: <% min($.agent_ips.toSet().union($.analyzer_ips.toSet()).count(), 100) %> # don't use >100 forks\n on-success: set_role_vars\n set_role_vars:\n publish:\n # NOTE(sbaubeau): collect role settings from all tht roles\n agent_vars: <% env().get('role_merged_configs', {}).values().select($.get('skydive_agent_ansible_vars', {})).aggregate($1 + $2) %>\n analyzer_vars: <% env().get('role_merged_configs', {}).values().select($.get('skydive_analyzer_ansible_vars', {})).aggregate($1 + $2) %>\n on-success: build_extra_vars\n 
build_extra_vars:\n publish:\n # NOTE(sbaubeau): merge vars from all ansible roles\n extra_vars: <% $.agent_vars + $.analyzer_vars + $.skydive_ansible_extra_vars %>\n on-success: skydive_install\n skydive_install:\n action: tripleo.ansible-playbook\n input:\n inventory:\n agents:\n hosts: <% $.agent_ips.toDict($, {}) %>\n analyzers:\n hosts: <% $.analyzer_ips.toDict($, {}) %>\n playbook: <% $.skydive_ansible_playbook %>\n remote_user: tripleo-admin\n become: true\n become_user: root\n verbosity: <% $.ansible_playbook_verbosity %>\n forks: <% $.fork_count %>\n ssh_private_key: <% $.private_key %>\n extra_env_variables: <% $.ansible_extra_env_variables %>\n extra_vars: <% $.extra_vars %>\n publish:\n output: <% task().result %>\n", "name": "tripleo.skydive_ansible.v1.skydive_install", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:46", "namespace": "", "updated_at": null, "scope": "private", "input": "ansible_playbook_verbosity=0, ansible_extra_env_variables={u'ANSIBLE_ROLES_PATH': u'/usr/share/skydive-ansible/roles/', u'ANSIBLE_HOST_KEY_CHECKING': u'False', u'ANSIBLE_RETRY_FILES_ENABLED': u'False', u'ANSIBLE_LOG_PATH': u'/var/log/mistral/skydive-install-workflow.log'}, skydive_ansible_extra_vars={}, skydive_ansible_playbook=/usr/share/skydive-ansible/playbook.yml.sample", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "2d9a6c9b-1e48-4237-aa3b-079217852621"}, {"definition": "get_config:\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_config:\n action: tripleo.config.get_overcloud_config container=<% $.container %>\n publish:\n status: SUCCESS\n message: <% task().result %>\n publish-on-error:\n status: FAILED\n message: Init Minor update failed\n on-complete: send_message\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.package_update.v1.package_update_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n 
message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.package_update.v1.get_config", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:46", "namespace": "", "updated_at": null, "scope": "private", "input": "container, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "4f714858-b79a-44ae-84cc-772de7239d48"}, {"definition": "ffwd_upgrade_converge_plan:\n description: ffwd-upgrade converge removes DeploymentSteps no-op from plan\n\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n remove_noop:\n action: tripleo.plan.remove_noop_deploystep\n input:\n container: <% $.container %>\n on-success: send_message\n on-error: set_update_failed\n\n set_update_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(remove_noop).result %>\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.package_update.v1.ffwd_upgrade_converge_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.package_update.v1.ffwd_upgrade_converge_plan", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:46", "namespace": "", "updated_at": null, "scope": "private", "input": "container, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "67d9bd7e-3030-4b32-98e6-5042edfc9da5"}, {"definition": "converge_upgrade_plan:\n description: Take a container and perform the converge step of a major upgrade\n\n input:\n - container\n - timeout: 240\n - queue_name: tripleo\n - skip_deploy_identifier: False\n\n tags:\n - tripleo-common-managed\n\n tasks:\n remove_noop:\n action: tripleo.plan.remove_noop_deploystep\n input:\n container: <% $.container 
%>\n on-success: send_message\n on-error: set_update_failed\n\n set_update_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(remove_noop).result %>\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.major_upgrade.v1.converge_upgrade_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.package_update.v1.converge_upgrade_plan", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:46", "namespace": "", "updated_at": null, "scope": "private", "input": "container, timeout=240, queue_name=tripleo, skip_deploy_identifier=False", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "699a2d82-6115-4381-9848-3756e7804454"}, {"definition": "update_converge_plan:\n description: Take a container and perform the converge for minor update\n\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n remove_noop:\n action: tripleo.plan.remove_noop_deploystep\n input:\n container: <% $.container %>\n on-success: send_message\n on-error: set_update_failed\n\n set_update_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(remove_noop).result %>\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.package_update.v1.update_converge_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.package_update.v1.update_converge_plan", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:46", "namespace": "", "updated_at": null, "scope": "private", "input": "container, queue_name=tripleo", "project_id":
"13835fbb8e0947a9b3fa174b9a22cdb9", "id": "7bdcfd7c-e772-43b4-bbba-e630d4c5e1ad"}, {"definition": "update_nodes:\n description: Take a container and perform an update nodes by nodes\n\n input:\n - node_user: heat-admin\n - nodes\n - playbook\n - inventory_file\n - ansible_queue_name: tripleo\n - module_path: /usr/share/ansible-modules\n - ansible_extra_env_variables:\n ANSIBLE_LOG_PATH: /var/log/mistral/package_update.log\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n - verbosity: 1\n - work_dir: /var/lib/mistral\n - skip_tags: ''\n\n tags:\n - tripleo-common-managed\n\n tasks:\n download_config:\n action: tripleo.config.download_config\n input:\n work_dir: <% $.work_dir %>/<% execution().id %>\n on-success: get_private_key\n on-error: node_update_failed\n\n get_private_key:\n action: tripleo.validations.get_privkey\n publish:\n private_key: <% task().result %>\n on-success: node_update\n\n node_update:\n action: tripleo.ansible-playbook\n input:\n inventory: <% $.inventory_file %>\n playbook: <% $.work_dir %>/<% execution().id %>/<% $.playbook %>\n remote_user: <% $.node_user %>\n become: true\n become_user: root\n verbosity: <% $.verbosity %>\n ssh_private_key: <% $.private_key %>\n extra_env_variables: <% $.ansible_extra_env_variables %>\n limit_hosts: <% $.nodes %>\n module_path: <% $.module_path %>\n queue_name: <% $.ansible_queue_name %>\n execution_id: <% execution().id %>\n skip_tags: <% $.skip_tags %>\n trash_output: true\n on-success:\n - node_update_passed: <% task().result.returncode = 0 %>\n - node_update_failed: <% task().result.returncode != 0 %>\n on-error: node_update_failed\n publish:\n output: <% task().result %>\n\n node_update_passed:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: Updated nodes - <% $.nodes %>\n\n node_update_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: Failed to update nodes - <% $.nodes %>, please see the logs.\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n 
input:\n queue_name: <% $.ansible_queue_name %>\n messages:\n body:\n type: tripleo.package_update.v1.update_nodes\n payload:\n status: <% $.status %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.package_update.v1.update_nodes", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:46", "namespace": "", "updated_at": null, "scope": "private", "input": "node_user=heat-admin, nodes, playbook, inventory_file, ansible_queue_name=tripleo, module_path=/usr/share/ansible-modules, ansible_extra_env_variables={u'ANSIBLE_HOST_KEY_CHECKING': u'False', u'ANSIBLE_LOG_PATH': u'/var/log/mistral/package_update.log'}, verbosity=1, work_dir=/var/lib/mistral, skip_tags=", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "8440ff57-7803-4545-9165-404a3f92a8a6"}, {"definition": "backup:\n description: This workflow will launch the Undercloud backup\n tags:\n - tripleo-common-managed\n input:\n - sources_path: '/home/stack/'\n - queue_name: tripleo\n tasks:\n # Action to know if there is enough available space\n # to run the Undercloud backup\n get_free_space:\n action: tripleo.undercloud.get_free_space\n publish:\n status: SUCCESS\n message: <% task().result %>\n free_space: <% task().result %>\n on-success: create_backup_dir\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # We create a temp directory to store the Undercloud\n # backup\n create_backup_dir:\n action: tripleo.undercloud.create_backup_dir\n publish:\n status: SUCCESS\n message: <% task().result %>\n backup_path: <% task().result %>\n on-success: get_database_credentials\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # The Undercloud database password for the root\n # user is stored in a Mistral environment, we\n # need the password in order to run the database dump\n get_database_credentials:\n action: mistral.environments_get 
name='tripleo.undercloud-config'\n publish:\n status: SUCCESS\n message: <% task().result %>\n undercloud_db_password: <% task(get_database_credentials).result.variables.undercloud_db_password %>\n on-success: create_database_backup\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # Run the DB dump of all the databases and store the result\n # in the temporary folder\n create_database_backup:\n input:\n path: <% $.backup_path.path %>\n dbuser: root\n dbpassword: <% $.undercloud_db_password %>\n action: tripleo.undercloud.create_database_backup\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: create_fs_backup\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # This action will run the fs backup\n create_fs_backup:\n input:\n sources_path: <% $.sources_path %>\n path: <% $.backup_path.path %>\n action: tripleo.undercloud.create_file_system_backup\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: upload_backup\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # This action will push the backup to swift\n upload_backup:\n input:\n backup_path: <% $.backup_path.path %>\n action: tripleo.undercloud.upload_backup_to_swift\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: cleanup_backup\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # This action will remove the backup temp folder\n cleanup_backup:\n input:\n path: <% $.backup_path.path %>\n action: tripleo.undercloud.remove_temp_dir\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: send_message\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # Sending a message to show that the backup finished\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% 
$.queue_name %>\n messages:\n body:\n type: tripleo.undercloud_backup.v1.launch\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n execution: <% execution() %>\n message: <% $.get('message', '') %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.undercloud_backup.v1.backup", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:46", "namespace": "", "updated_at": null, "scope": "private", "input": "sources_path=/home/stack/, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "f406d962-c292-4d1f-9c94-33892b6e5cc1"}, {"definition": "derive_parameters:\n description: The main workflow for deriving parameters from the introspected data\n\n input:\n - plan: overcloud\n - queue_name: tripleo\n - user_inputs: {}\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_flattened_parameters:\n action: tripleo.parameters.get_flatten container=<% $.plan %>\n publish:\n environment_parameters: <% task().result.environment_parameters %>\n heat_resource_tree: <% task().result.heat_resource_tree %>\n on-success:\n - get_roles: <% $.environment_parameters and $.heat_resource_tree %>\n - set_status_failed_get_flattened_parameters: <% (not $.environment_parameters) or (not $.heat_resource_tree) %>\n on-error: set_status_failed_get_flattened_parameters\n\n get_roles:\n action: tripleo.role.list container=<% $.plan %>\n publish:\n role_name_list: <% task().result %>\n on-success:\n - get_valid_roles: <% $.role_name_list %>\n - set_status_failed_get_roles: <% not $.role_name_list %>\n on-error: set_status_failed_on_error_get_roles\n\n # Obtain only the roles which has count > 0, by checking <RoleName>Count parameter, like ComputeCount\n get_valid_roles:\n publish:\n valid_role_name_list: <% let(hr => $.heat_resource_tree.parameters) -> $.role_name_list.where(int($hr.get(concat($, 'Count'), {}).get('default', 0)) > 0) %>\n on-success:\n - for_each_role: <% $.valid_role_name_list %>\n - 
set_status_failed_get_valid_roles: <% not $.valid_role_name_list %>\n\n # Execute the basic preparation workflow for each role to get introspection data\n for_each_role:\n with-items: role_name in <% $.valid_role_name_list %>\n concurrency: 1\n workflow: _derive_parameters_per_role\n input:\n plan: <% $.plan %>\n role_name: <% $.role_name %>\n environment_parameters: <% $.environment_parameters %>\n heat_resource_tree: <% $.heat_resource_tree %>\n user_inputs: <% $.user_inputs %>\n publish:\n # Gets all the roles derived parameters as dictionary\n result: <% task().result.select($.get('derived_parameters', {})).sum() %>\n on-success: reset_derive_parameters_in_plan\n on-error: set_status_failed_for_each_role\n\n reset_derive_parameters_in_plan:\n action: tripleo.parameters.reset\n input:\n container: <% $.plan %>\n key: 'derived_parameters'\n on-success:\n # Add the derived parameters to the deployment plan only when $.result\n # (the derived parameters) is non-empty. Otherwise, we're done.\n - update_derive_parameters_in_plan: <% $.result %>\n - send_message: <% not $.result %>\n on-error: set_status_failed_reset_derive_parameters_in_plan\n\n update_derive_parameters_in_plan:\n action: tripleo.parameters.update\n input:\n container: <% $.plan %>\n key: 'derived_parameters'\n parameters: <% $.get('result', {}) %>\n on-success: send_message\n on-error: set_status_failed_update_derive_parameters_in_plan\n\n set_status_failed_get_flattened_parameters:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_flattened_parameters).result %>\n\n set_status_failed_get_roles:\n on-success: send_message\n publish:\n status: FAILED\n message: \"Unable to determine the list of roles in the deployment plan\"\n\n set_status_failed_on_error_get_roles:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_roles).result %>\n\n set_status_failed_get_valid_roles:\n on-success: send_message\n publish:\n status: FAILED\n message: 'Unable to 
determine the list of valid roles in the deployment plan.'\n\n set_status_failed_for_each_role:\n on-success: update_message_format\n publish:\n status: FAILED\n # gets the status and message for all roles from task result.\n message: <% task(for_each_role).result.select(dict('role_name' => $.role_name, 'status' => $.get('status', 'SUCCESS'), 'message' => $.get('message', ''))) %>\n\n update_message_format:\n on-success: send_message\n publish:\n # updates the message format(Role 'role name': message) for each roles which are failed and joins the message list as string with ', ' separator.\n message: <% $.message.where($.get('status', 'SUCCESS') != 'SUCCESS').select(concat(\"Role '{}':\".format($.role_name), \" \", $.get('message', '(error unknown)'))).join(', ') %>\n\n set_status_failed_reset_derive_parameters_in_plan:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(reset_derive_parameters_in_plan).result %>\n\n set_status_failed_update_derive_parameters_in_plan:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(update_derive_parameters_in_plan).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.derive_params.v1.derive_parameters\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n result: <% $.get('result', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n", "name": "tripleo.derive_params.v1.derive_parameters", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:47", "namespace": "", "updated_at": null, "scope": "private", "input": "plan=overcloud, queue_name=tripleo, user_inputs={}", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "86375280-73b5-4d55-9cda-f7704cc70848"}, {"definition": "_derive_parameters_per_role:\n description: >\n Workflow which runs per role to get the introspection data on the 
first matching node assigned to role.\n Once introspection data is fetched, this workflow will trigger the actual derive parameters workflow\n input:\n - plan\n - role_name\n - environment_parameters\n - heat_resource_tree\n - user_inputs\n\n output:\n derived_parameters: <% $.get('derived_parameters', {}) %>\n # Need role_name in output parameter to display the status for all roles in main workflow when any role fails here.\n role_name: <% $.role_name %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_role_info:\n workflow: _get_role_info\n input:\n role_name: <% $.role_name %>\n heat_resource_tree: <% $.heat_resource_tree %>\n publish:\n role_features: <% task().result.get('role_features', []) %>\n role_services: <% task().result.get('role_services', []) %>\n on-success:\n # Continue only if there are features associated with this role. Otherwise, we're done.\n - get_flavor_name: <% $.role_features %>\n on-error: set_status_failed_get_role_info\n\n # Getting introspection data workflow, which will take care of\n # 1) profile and flavor based mapping\n # 2) Nova placement api based mapping\n # Currently we have implemented profile and flavor based mapping\n # TODO-Nova placement api based mapping is pending, we will enhance it later.\n get_flavor_name:\n publish:\n flavor_name: <% let(param_name => concat('Overcloud', $.role_name, 'Flavor').replace('OvercloudControllerFlavor', 'OvercloudControlFlavor')) -> $.heat_resource_tree.parameters.get($param_name, {}).get('default', '') %>\n on-success:\n - get_profile_name: <% $.flavor_name %>\n - set_status_failed_get_flavor_name: <% not $.flavor_name %>\n\n get_profile_name:\n action: tripleo.parameters.get_profile_of_flavor flavor_name=<% $.flavor_name %>\n publish:\n profile_name: <% task().result %>\n on-success: get_profile_node\n on-error: set_status_failed_get_profile_name\n\n get_profile_node:\n workflow: tripleo.baremetal.v1.nodes_with_profile\n input:\n profile: <% $.profile_name %>\n publish:\n
profile_node_uuid: <% task().result.matching_nodes.first('') %>\n on-success:\n - get_introspection_data: <% $.profile_node_uuid %>\n - set_status_failed_no_matching_node_get_profile_node: <% not $.profile_node_uuid %>\n on-error: set_status_failed_on_error_get_profile_node\n\n get_introspection_data:\n action: baremetal_introspection.get_data uuid=<% $.profile_node_uuid %>\n publish:\n hw_data: <% task().result %>\n # Establish an empty dictionary of derived_parameters prior to\n # invoking the individual \"feature\" algorithms\n derived_parameters: <% dict() %>\n on-success: handle_dpdk_feature\n on-error: set_status_failed_get_introspection_data\n\n handle_dpdk_feature:\n on-success:\n - get_dpdk_derive_params: <% $.role_features.contains('DPDK') %>\n - handle_sriov_feature: <% not $.role_features.contains('DPDK') %>\n\n get_dpdk_derive_params:\n workflow: tripleo.derive_params_formulas.v1.dpdk_derive_params\n input:\n plan: <% $.plan %>\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n user_inputs: <% $.user_inputs %>\n publish:\n derived_parameters: <% task().result.get('derived_parameters', {}) %>\n on-success: handle_sriov_feature\n on-error: set_status_failed_get_dpdk_derive_params\n\n handle_sriov_feature:\n on-success:\n - get_sriov_derive_params: <% $.role_features.contains('SRIOV') %>\n - handle_host_feature: <% not $.role_features.contains('SRIOV') %>\n\n get_sriov_derive_params:\n workflow: tripleo.derive_params_formulas.v1.sriov_derive_params\n input:\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n derived_parameters: <% $.derived_parameters %>\n publish:\n derived_parameters: <% task().result.get('derived_parameters', {}) %>\n on-success: handle_host_feature\n on-error: set_status_failed_get_sriov_derive_params\n\n handle_host_feature:\n on-success:\n - get_host_derive_params: <% $.role_features.contains('HOST') %>\n - handle_hci_feature: <% not $.role_features.contains('HOST') %>\n\n get_host_derive_params:\n workflow: 
tripleo.derive_params_formulas.v1.host_derive_params\n input:\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n user_inputs: <% $.user_inputs %>\n derived_parameters: <% $.derived_parameters %>\n publish:\n derived_parameters: <% task().result.get('derived_parameters', {}) %>\n on-success: handle_hci_feature\n on-error: set_status_failed_get_host_derive_params\n\n handle_hci_feature:\n on-success:\n - get_hci_derive_params: <% $.role_features.contains('HCI') %>\n\n get_hci_derive_params:\n workflow: tripleo.derive_params_formulas.v1.hci_derive_params\n input:\n role_name: <% $.role_name %>\n environment_parameters: <% $.environment_parameters %>\n heat_resource_tree: <% $.heat_resource_tree %>\n introspection_data: <% $.hw_data %>\n user_inputs: <% $.user_inputs %>\n derived_parameters: <% $.derived_parameters %>\n publish:\n derived_parameters: <% task().result.get('derived_parameters', {}) %>\n on-error: set_status_failed_get_hci_derive_params\n # Done (no more derived parameter features)\n\n set_status_failed_get_role_info:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_role_info).result.get('message', '') %>\n on-success: fail\n\n set_status_failed_get_flavor_name:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% \"Unable to determine flavor for role '{0}'\".format($.role_name) %>\n on-success: fail\n\n set_status_failed_get_profile_name:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_profile_name).result %>\n on-success: fail\n\n set_status_failed_no_matching_node_get_profile_node:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% \"Unable to determine matching node for profile '{0}'\".format($.profile_name) %>\n on-success: fail\n\n set_status_failed_on_error_get_profile_node:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_profile_node).result %>\n on-success: fail\n\n 
set_status_failed_get_introspection_data:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_introspection_data).result %>\n on-success: fail\n\n set_status_failed_get_dpdk_derive_params:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_dpdk_derive_params).result %>\n on-success: fail\n\n set_status_failed_get_sriov_derive_params:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_sriov_derive_params).result %>\n on-success: fail\n\n set_status_failed_get_host_derive_params:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_host_derive_params).result %>\n on-success: fail\n\n set_status_failed_get_hci_derive_params:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_hci_derive_params).result %>\n on-success: fail\n", "name": "tripleo.derive_params.v1._derive_parameters_per_role", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:47", "namespace": "", "updated_at": null, "scope": "private", "input": "plan, role_name, environment_parameters, heat_resource_tree, user_inputs", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "8797ad84-a92d-46d8-b990-2859c85a5779"}, {"definition": "create_swift_rings_backup_container_plan:\n description: >\n This plan ensures existence of container for Swift Rings backup.\n input:\n - container\n - queue_name: tripleo\n tags:\n - tripleo-common-managed\n tasks:\n\n swift_rings_container:\n publish:\n swift_rings_container: \"<% $.container %>-swift-rings\"\n swift_rings_tar: \"swift-rings.tar.gz\"\n on-complete: check_container\n\n check_container:\n action: swift.head_container container=<% $.swift_rings_container %>\n on-success: get_tempurl\n on-error: create_container\n\n create_container:\n action: swift.put_container container=<% $.swift_rings_container %>\n on-error: set_create_container_failed\n on-success: get_tempurl\n\n get_tempurl:\n action: 
tripleo.swift.tempurl\n on-success: set_get_tempurl\n input:\n container: <% $.swift_rings_container %>\n obj: <% $.swift_rings_tar %>\n\n set_get_tempurl:\n action: tripleo.parameters.update\n input:\n parameters:\n SwiftRingGetTempurl: <% task(get_tempurl).result %>\n container: <% $.container %>\n on-success: put_tempurl\n\n put_tempurl:\n action: tripleo.swift.tempurl\n on-success: set_put_tempurl\n input:\n container: <% $.swift_rings_container %>\n obj: <% $.swift_rings_tar %>\n method: \"PUT\"\n\n set_put_tempurl:\n action: tripleo.parameters.update\n input:\n parameters:\n SwiftRingPutTempurl: <% task(put_tempurl).result %>\n container: <% $.container %>\n on-success: set_status_success\n on-error: set_put_tempurl_failed\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(set_put_tempurl).result %>\n\n set_put_tempurl_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(set_put_tempurl).result %>\n\n set_create_container_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_container).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:47", "namespace": "", "updated_at": null, "scope": "private", "input": "container, queue_name=tripleo", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "9ed3ff96-d256-4401-927f-612421dc303b"}, {"definition": "_get_role_info:\n description: >\n Workflow that determines the list of derived parameter features (DPDK,\n HCI, etc.) 
for a role based on the services assigned to the role.\n\n input:\n - role_name\n - heat_resource_tree\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_resource_chains:\n publish:\n resource_chains: <% $.heat_resource_tree.resources.values().where($.get('type', '') = 'OS::Heat::ResourceChain') %>\n on-success:\n - get_role_chain: <% $.resource_chains %>\n - set_status_failed_get_resource_chains: <% not $.resource_chains %>\n\n get_role_chain:\n publish:\n role_chain: <% let(chain_name => concat($.role_name, 'ServiceChain'))-> $.heat_resource_tree.resources.values().where($.name = $chain_name).first({}) %>\n on-success:\n - get_service_chain: <% $.role_chain %>\n - set_status_failed_get_role_chain: <% not $.role_chain %>\n\n get_service_chain:\n publish:\n service_chain: <% let(resources => $.role_chain.resources)-> $.resource_chains.where($resources.contains($.id)).first('') %>\n on-success:\n - get_role_services: <% $.service_chain %>\n - set_status_failed_get_service_chain: <% not $.service_chain %>\n\n get_role_services:\n publish:\n role_services: <% let(resources => $.heat_resource_tree.resources)-> $.service_chain.resources.select($resources.get($)) %>\n on-success:\n - check_features: <% $.role_services %>\n - set_status_failed_get_role_services: <% not $.role_services %>\n\n check_features:\n on-success: build_feature_dict\n publish:\n # The role supports the DPDK feature if the NeutronDatapathType parameter is present\n dpdk: <% let(resources => $.heat_resource_tree.resources) -> $.role_services.any($.get('parameters', []).contains('NeutronDatapathType') or $.get('resources', []).select($resources.get($)).any($.get('parameters', []).contains('NeutronDatapathType'))) %>\n\n # The role supports the DPDK feature in ODL if the OvsEnableDpdk parameter value is true in role parameters.\n odl_dpdk: <% let(role => $.role_name) -> $.heat_resource_tree.parameters.get(concat($role, 'Parameters'), {}).get('default', {}).get('OvsEnableDpdk', false) %>\n\n # The 
role supports the SRIOV feature if it includes NeutronSriovAgent services.\n sriov: <% $.role_services.any($.get('type', '').endsWith('::NeutronSriovAgent')) %>\n\n # The role supports the HCI feature if it includes both NovaCompute and CephOSD services.\n hci: <% $.role_services.any($.get('type', '').endsWith('::NovaCompute')) and $.role_services.any($.get('type', '').endsWith('::CephOSD')) %>\n\n build_feature_dict:\n on-success: filter_features\n publish:\n feature_dict: <% dict(DPDK => ($.dpdk or $.odl_dpdk), SRIOV => $.sriov, HOST => ($.dpdk or $.odl_dpdk or $.sriov), HCI => $.hci) %>\n\n filter_features:\n publish:\n # The list of features that are enabled (i.e. are true in the feature_dict).\n role_features: <% let(feature_dict => $.feature_dict)-> $feature_dict.keys().where($feature_dict[$]) %>\n\n set_status_failed_get_resource_chains:\n publish:\n message: <% 'Unable to locate any resource chains in the heat resource tree' %>\n on-success: fail\n\n set_status_failed_get_role_chain:\n publish:\n message: <% \"Unable to determine the service chain resource for role '{0}'\".format($.role_name) %>\n on-success: fail\n\n set_status_failed_get_service_chain:\n publish:\n message: <% \"Unable to determine the service chain for role '{0}'\".format($.role_name) %>\n on-success: fail\n\n set_status_failed_get_role_services:\n publish:\n message: <% \"Unable to determine list of services for role '{0}'\".format($.role_name) %>\n on-success: fail\n", "name": "tripleo.derive_params.v1._get_role_info", "tags": ["tripleo-common-managed"], "created_at": "2018-06-26 04:26:47", "namespace": "", "updated_at": null, "scope": "private", "input": "role_name, heat_resource_tree", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "ca5c0bc8-9f2a-4cc5-b702-e43b9fc3ea4d"}, {"definition": "---\nversion: \"2.0\"\n\nstd.delete_instance:\n type: direct\n\n input:\n - instance_id\n\n description: Deletes VM.\n\n tasks:\n delete_vm:\n description: Destroy VM.\n action: 
nova.servers_delete server=<% $.instance_id %>\n wait-after: 10\n on-success:\n - find_given_vm\n\n find_given_vm:\n description: Checks that VM is already deleted.\n action: nova.servers_find id=<% $.instance_id %>\n on-error:\n - succeed\n\n", "name": "std.delete_instance", "tags": [], "created_at": "2018-06-26 05:43:59", "namespace": "", "updated_at": null, "scope": "public", "input": "instance_id", "project_id": "<default-project>", "id": "a542cf4e-5d6b-4e91-8426-4aa58f1cb210"}, {"definition": "---\nversion: '2.0'\n\nstd.create_instance:\n type: direct\n\n description: |\n Creates VM and waits till VM OS is up and running.\n\n input:\n - name\n - image_id\n - flavor_id\n - ssh_username: null\n - ssh_password: null\n\n # Name of previously created keypair to inject into the instance.\n # Either ssh credentials or keypair must be provided.\n - key_name: null\n\n # Security_groups: A list of security group names\n - security_groups: null\n\n # An ordered list of nics to be added to this server, with information about connected networks, fixed IPs, port etc.\n # Example: nics: [{\"net-id\": \"27aa8c1c-d6b8-4474-b7f7-6cdcf63ac856\"}]\n - nics: null\n\n task-defaults:\n on-error:\n - delete_vm\n\n output:\n ip: <% $.vm_ip %>\n id: <% $.vm_id %>\n name: <% $.name %>\n status: <% $.status %>\n\n tasks:\n create_vm:\n description: Initial request to create a VM.\n action: nova.servers_create name=<% $.name %> image=<% $.image_id %> flavor=<% $.flavor_id %>\n input:\n key_name: <% $.key_name %>\n security_groups: <% $.security_groups %>\n nics: <% $.nics %>\n publish:\n vm_id: <% task(create_vm).result.id %>\n on-success:\n - search_for_ip\n\n search_for_ip:\n description: Gets first free ip from Nova floating IPs.\n action: nova.floating_ips_findall instance_id=null\n publish:\n vm_ip: <% task(search_for_ip).result[0].ip %>\n on-success:\n - wait_vm_active\n\n wait_vm_active:\n description: Waits till VM is ACTIVE.\n action: nova.servers_find id=<% $.vm_id %> 
status=\"ACTIVE\"\n retry:\n count: 10\n delay: 10\n publish:\n status: <% task(wait_vm_active).result.status %>\n on-success:\n - associate_ip\n\n associate_ip:\n description: Associate server with one of floating IPs.\n action: nova.servers_add_floating_ip server=<% $.vm_id %> address=<% $.vm_ip %>\n wait-after: 5\n on-success:\n - wait_ssh\n\n wait_ssh:\n description: Wait till operating system on the VM is up (SSH command).\n action: std.wait_ssh username=<% $.ssh_username %> password=<% $.ssh_password %> host=<% $.vm_ip %>\n retry:\n count: 10\n delay: 10\n\n delete_vm:\n description: Destroy VM.\n workflow: std.delete_instance instance_id=<% $.vm_id %>\n on-complete:\n - fail\n", "name": "std.create_instance", "tags": [], "created_at": "2018-06-26 05:43:59", "namespace": "", "updated_at": null, "scope": "public", "input": "name, image_id, flavor_id, ssh_username=None, ssh_password=None, key_name=None, security_groups=None, nics=None", "project_id": "<default-project>", "id": "c96cb39c-e6e7-4aa3-bfae-f2a7c4d39224"}]} > >2018-06-26 11:15:09,206 DEBUG: HTTP GET http://192.0.3.1:8989/v2/workflows 200 >2018-06-26 11:15:09,209 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8989/v2/cron_triggers -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,223 DEBUG: http://192.0.3.1:8989 "GET /v2/cron_triggers HTTP/1.1" 200 21 >2018-06-26 11:15:09,223 DEBUG: RESP: [200] Content-Length: 21 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: {"cron_triggers": []} > >2018-06-26 11:15:09,223 DEBUG: HTTP GET http://192.0.3.1:8989/v2/cron_triggers 200 >2018-06-26 11:15:09,223 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.access.v1.create_admin_via_nova -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" 
>2018-06-26 11:15:09,241 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.access.v1.create_admin_via_nova HTTP/1.1" 204 0 >2018-06-26 11:15:09,241 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:09,242 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.access.v1.create_admin_via_nova 204 >2018-06-26 11:15:09,242 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.access.v1.enable_ssh_admin -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,256 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.access.v1.enable_ssh_admin HTTP/1.1" 204 0 >2018-06-26 11:15:09,257 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:09,257 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.access.v1.enable_ssh_admin 204 >2018-06-26 11:15:09,257 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.access.v1.create_admin_via_ssh -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,271 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.access.v1.create_admin_via_ssh HTTP/1.1" 204 0 >2018-06-26 11:15:09,272 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:09,272 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.access.v1.create_admin_via_ssh 204 >2018-06-26 11:15:09,272 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.stack.v1.wait_for_stack_does_not_exist -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,286 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.stack.v1.wait_for_stack_does_not_exist HTTP/1.1" 204 0 >2018-06-26 11:15:09,286 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:09,286 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.stack.v1.wait_for_stack_does_not_exist 204 >2018-06-26 11:15:09,286 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.stack.v1.delete_stack -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,300 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.stack.v1.delete_stack HTTP/1.1" 204 0 >2018-06-26 11:15:09,301 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:09,301 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.stack.v1.delete_stack 204 >2018-06-26 11:15:09,301 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.stack.v1.wait_for_stack_in_progress -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,315 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.stack.v1.wait_for_stack_in_progress HTTP/1.1" 204 0 >2018-06-26 11:15:09,316 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:09,316 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.stack.v1.wait_for_stack_in_progress 204 >2018-06-26 11:15:09,316 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.stack.v1.wait_for_stack_complete_or_failed -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,331 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.stack.v1.wait_for_stack_complete_or_failed HTTP/1.1" 204 0 >2018-06-26 11:15:09,331 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:09,332 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.stack.v1.wait_for_stack_complete_or_failed 204 >2018-06-26 11:15:09,332 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.run_validations -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,347 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.validations.v1.run_validations HTTP/1.1" 204 0 >2018-06-26 11:15:09,347 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:09,347 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.run_validations 204 >2018-06-26 11:15:09,348 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.check_pre_deployment_validations -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,363 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.validations.v1.check_pre_deployment_validations HTTP/1.1" 204 0 >2018-06-26 11:15:09,363 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:09,363 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.check_pre_deployment_validations 204 >2018-06-26 11:15:09,364 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.check_ironic_boot_configuration -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,379 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.validations.v1.check_ironic_boot_configuration HTTP/1.1" 204 0 >2018-06-26 11:15:09,379 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:09,379 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.check_ironic_boot_configuration 204 >2018-06-26 11:15:09,379 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.verify_profiles -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,394 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.validations.v1.verify_profiles HTTP/1.1" 204 0 >2018-06-26 11:15:09,394 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:09,394 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.verify_profiles 204 >2018-06-26 11:15:09,395 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.list_groups -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,409 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.validations.v1.list_groups HTTP/1.1" 204 0 >2018-06-26 11:15:09,410 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:09,410 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.list_groups 204 >2018-06-26 11:15:09,410 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.run_groups -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,425 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.validations.v1.run_groups HTTP/1.1" 204 0 >2018-06-26 11:15:09,425 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:09,425 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.run_groups 204 >2018-06-26 11:15:09,425 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.collect_flavors -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,441 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.validations.v1.collect_flavors HTTP/1.1" 204 0 >2018-06-26 11:15:09,442 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:09,442 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.collect_flavors 204 >2018-06-26 11:15:09,442 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.check_boot_images -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,456 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.validations.v1.check_boot_images HTTP/1.1" 204 0 >2018-06-26 11:15:09,456 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:09,457 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.check_boot_images 204 >2018-06-26 11:15:09,457 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.run_validation -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,471 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.validations.v1.run_validation HTTP/1.1" 204 0 >2018-06-26 11:15:09,471 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:09,471 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.run_validation 204 >2018-06-26 11:15:09,472 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.copy_ssh_key -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,485 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.validations.v1.copy_ssh_key HTTP/1.1" 204 0 >2018-06-26 11:15:09,486 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:09,486 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.copy_ssh_key 204 >2018-06-26 11:15:09,486 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.add_validation_ssh_key_parameter -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,500 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.validations.v1.add_validation_ssh_key_parameter HTTP/1.1" 204 0 >2018-06-26 11:15:09,500 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:09,500 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.add_validation_ssh_key_parameter 204 >2018-06-26 11:15:09,501 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.list -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,514 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.validations.v1.list HTTP/1.1" 204 0 >2018-06-26 11:15:09,515 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:09,515 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.list 204 >2018-06-26 11:15:09,515 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.check_default_nodes_count -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,529 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.validations.v1.check_default_nodes_count HTTP/1.1" 204 0 >2018-06-26 11:15:09,530 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:09,530 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.validations.v1.check_default_nodes_count 204 >2018-06-26 11:15:09,530 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.derive_params_formulas.v1.get_host_cpus -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,544 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.derive_params_formulas.v1.get_host_cpus HTTP/1.1" 204 0 >2018-06-26 11:15:09,544 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:09,544 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.derive_params_formulas.v1.get_host_cpus 204 >2018-06-26 11:15:09,544 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.derive_params_formulas.v1.dpdk_derive_params -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,559 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.derive_params_formulas.v1.dpdk_derive_params HTTP/1.1" 204 0 >2018-06-26 11:15:09,559 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:09,559 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.derive_params_formulas.v1.dpdk_derive_params 204 >2018-06-26 11:15:09,559 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.derive_params_formulas.v1.hci_derive_params -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,573 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.derive_params_formulas.v1.hci_derive_params HTTP/1.1" 204 0 >2018-06-26 11:15:09,574 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:09,574 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.derive_params_formulas.v1.hci_derive_params 204 >2018-06-26 11:15:09,574 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.derive_params_formulas.v1.sriov_derive_params -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,588 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.derive_params_formulas.v1.sriov_derive_params HTTP/1.1" 204 0 >2018-06-26 11:15:09,589 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:09,589 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.derive_params_formulas.v1.sriov_derive_params 204 >2018-06-26 11:15:09,589 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.derive_params_formulas.v1.host_derive_params -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:09,603 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.derive_params_formulas.v1.host_derive_params HTTP/1.1" 204 0 >2018-06-26 11:15:09,603 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
>
>2018-06-26 11:15:09,604 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.derive_params_formulas.v1.host_derive_params 204
>2018-06-26 11:15:09,604 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.validate_networks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,619 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1.validate_networks HTTP/1.1" 204 0
>2018-06-26 11:15:09,619 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,619 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.validate_networks 204
>2018-06-26 11:15:09,619 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.update_deployment_plan -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,633 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1.update_deployment_plan HTTP/1.1" 204 0
>2018-06-26 11:15:09,634 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,634 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.update_deployment_plan 204
>2018-06-26 11:15:09,634 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1._validate_networks_from_roles -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,647 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1._validate_networks_from_roles HTTP/1.1" 204 0
>2018-06-26 11:15:09,648 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,648 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1._validate_networks_from_roles 204
>2018-06-26 11:15:09,648 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.list_available_roles -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,662 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1.list_available_roles HTTP/1.1" 204 0
>2018-06-26 11:15:09,662 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,662 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.list_available_roles 204
>2018-06-26 11:15:09,662 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.list_roles -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,676 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1.list_roles HTTP/1.1" 204 0
>2018-06-26 11:15:09,677 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,677 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.list_roles 204
>2018-06-26 11:15:09,677 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.get_deprecated_parameters -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,691 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1.get_deprecated_parameters HTTP/1.1" 204 0
>2018-06-26 11:15:09,691 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,691 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.get_deprecated_parameters 204
>2018-06-26 11:15:09,691 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.validate_roles -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,705 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1.validate_roles HTTP/1.1" 204 0
>2018-06-26 11:15:09,706 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,706 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.validate_roles 204
>2018-06-26 11:15:09,706 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.download_logs -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,720 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1.download_logs HTTP/1.1" 204 0
>2018-06-26 11:15:09,720 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,720 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.download_logs 204
>2018-06-26 11:15:09,721 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.update_roles -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,735 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1.update_roles HTTP/1.1" 204 0
>2018-06-26 11:15:09,735 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,735 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.update_roles 204
>2018-06-26 11:15:09,736 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.delete_deployment_plan -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,749 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1.delete_deployment_plan HTTP/1.1" 204 0
>2018-06-26 11:15:09,750 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,750 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.delete_deployment_plan 204
>2018-06-26 11:15:09,750 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.validate_network_files -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,764 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1.validate_network_files HTTP/1.1" 204 0
>2018-06-26 11:15:09,764 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,764 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.validate_network_files 204
>2018-06-26 11:15:09,765 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.list_networks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,779 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1.list_networks HTTP/1.1" 204 0
>2018-06-26 11:15:09,779 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,779 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.list_networks 204
>2018-06-26 11:15:09,779 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.list_available_networks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,794 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1.list_available_networks HTTP/1.1" 204 0
>2018-06-26 11:15:09,794 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,794 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.list_available_networks 204
>2018-06-26 11:15:09,795 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.export_deployment_plan -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,809 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1.export_deployment_plan HTTP/1.1" 204 0
>2018-06-26 11:15:09,809 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,809 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.export_deployment_plan 204
>2018-06-26 11:15:09,809 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.validate_roles_and_networks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,824 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1.validate_roles_and_networks HTTP/1.1" 204 0
>2018-06-26 11:15:09,824 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,824 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.validate_roles_and_networks 204
>2018-06-26 11:15:09,824 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.get_passwords -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,840 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1.get_passwords HTTP/1.1" 204 0
>2018-06-26 11:15:09,840 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,840 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.get_passwords 204
>2018-06-26 11:15:09,841 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.select_roles -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,856 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1.select_roles HTTP/1.1" 204 0
>2018-06-26 11:15:09,857 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,857 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.select_roles 204
>2018-06-26 11:15:09,857 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.create_deployment_plan -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,873 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1.create_deployment_plan HTTP/1.1" 204 0
>2018-06-26 11:15:09,873 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,873 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.create_deployment_plan 204
>2018-06-26 11:15:09,873 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.create_default_deployment_plan -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,888 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1.create_default_deployment_plan HTTP/1.1" 204 0
>2018-06-26 11:15:09,889 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,889 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.create_default_deployment_plan 204
>2018-06-26 11:15:09,889 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.publish_ui_logs_to_swift -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,903 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.plan_management.v1.publish_ui_logs_to_swift HTTP/1.1" 204 0
>2018-06-26 11:15:09,904 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,904 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.plan_management.v1.publish_ui_logs_to_swift 204
>2018-06-26 11:15:09,904 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.support.v1.upload_logs -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,918 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.support.v1.upload_logs HTTP/1.1" 204 0
>2018-06-26 11:15:09,919 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,919 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.support.v1.upload_logs 204
>2018-06-26 11:15:09,919 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.support.v1.fetch_logs -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,933 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.support.v1.fetch_logs HTTP/1.1" 204 0
>2018-06-26 11:15:09,934 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,934 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.support.v1.fetch_logs 204
>2018-06-26 11:15:09,934 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.support.v1.collect_logs -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,948 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.support.v1.collect_logs HTTP/1.1" 204 0
>2018-06-26 11:15:09,948 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,948 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.support.v1.collect_logs 204
>2018-06-26 11:15:09,949 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.support.v1.create_container -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,963 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.support.v1.create_container HTTP/1.1" 204 0
>2018-06-26 11:15:09,963 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,963 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.support.v1.create_container 204
>2018-06-26 11:15:09,964 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.support.v1.delete_container -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,979 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.support.v1.delete_container HTTP/1.1" 204 0
>2018-06-26 11:15:09,979 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,979 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.support.v1.delete_container 204
>2018-06-26 11:15:09,980 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.deployment.v1.config_download_deploy -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:09,994 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.deployment.v1.config_download_deploy HTTP/1.1" 204 0
>2018-06-26 11:15:09,994 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:09 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:09,994 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.deployment.v1.config_download_deploy 204
>2018-06-26 11:15:09,994 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.deployment.v1.get_horizon_url -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,008 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.deployment.v1.get_horizon_url HTTP/1.1" 204 0
>2018-06-26 11:15:10,009 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,009 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.deployment.v1.get_horizon_url 204
>2018-06-26 11:15:10,009 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.deployment.v1.deploy_on_server -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,023 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.deployment.v1.deploy_on_server HTTP/1.1" 204 0
>2018-06-26 11:15:10,024 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,024 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.deployment.v1.deploy_on_server 204
>2018-06-26 11:15:10,024 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.deployment.v1.deploy_on_servers -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,038 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.deployment.v1.deploy_on_servers HTTP/1.1" 204 0
>2018-06-26 11:15:10,038 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,039 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.deployment.v1.deploy_on_servers 204
>2018-06-26 11:15:10,039 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.deployment.v1.deploy_plan -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,053 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.deployment.v1.deploy_plan HTTP/1.1" 204 0
>2018-06-26 11:15:10,053 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,053 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.deployment.v1.deploy_plan 204
>2018-06-26 11:15:10,054 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.set_node_state -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,068 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1.set_node_state HTTP/1.1" 204 0
>2018-06-26 11:15:10,069 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,069 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.set_node_state 204
>2018-06-26 11:15:10,069 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.discover_and_enroll_nodes -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,102 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1.discover_and_enroll_nodes HTTP/1.1" 204 0
>2018-06-26 11:15:10,103 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,103 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.discover_and_enroll_nodes 204
>2018-06-26 11:15:10,103 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.tag_node -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,131 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1.tag_node HTTP/1.1" 204 0
>2018-06-26 11:15:10,132 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,132 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.tag_node 204
>2018-06-26 11:15:10,132 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.set_power_state -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,147 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1.set_power_state HTTP/1.1" 204 0
>2018-06-26 11:15:10,147 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,147 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.set_power_state 204
>2018-06-26 11:15:10,148 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.nodes_with_profile -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,163 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1.nodes_with_profile HTTP/1.1" 204 0
>2018-06-26 11:15:10,163 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,163 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.nodes_with_profile 204
>2018-06-26 11:15:10,163 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.octavia_post.v1.octavia_post_deploy -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,179 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.octavia_post.v1.octavia_post_deploy HTTP/1.1" 204 0
>2018-06-26 11:15:10,179 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,179 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.octavia_post.v1.octavia_post_deploy 204
>2018-06-26 11:15:10,180 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.storage.v1.ceph-install -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,197 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.storage.v1.ceph-install HTTP/1.1" 204 0
>2018-06-26 11:15:10,197 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,197 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.storage.v1.ceph-install 204
>2018-06-26 11:15:10,197 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.provide_manageable_nodes -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,211 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1.provide_manageable_nodes HTTP/1.1" 204 0
>2018-06-26 11:15:10,212 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,212 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.provide_manageable_nodes 204
>2018-06-26 11:15:10,212 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.scale.v1.delete_node -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,226 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.scale.v1.delete_node HTTP/1.1" 204 0
>2018-06-26 11:15:10,227 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,227 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.scale.v1.delete_node 204
>2018-06-26 11:15:10,227 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.create_raid_configuration -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,241 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1.create_raid_configuration HTTP/1.1" 204 0
>2018-06-26 11:15:10,241 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,241 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.create_raid_configuration 204
>2018-06-26 11:15:10,241 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.discover_nodes -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,256 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1.discover_nodes HTTP/1.1" 204 0
>2018-06-26 11:15:10,256 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,256 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.discover_nodes 204
>2018-06-26 11:15:10,256 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.register_or_update -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,271 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1.register_or_update HTTP/1.1" 204 0
>2018-06-26 11:15:10,271 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,271 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.register_or_update 204
>2018-06-26 11:15:10,271 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.cellv2_discovery -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,285 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1.cellv2_discovery HTTP/1.1" 204 0
>2018-06-26 11:15:10,286 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,286 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.cellv2_discovery 204
>2018-06-26 11:15:10,286 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.configure_manageable_nodes -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,300 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1.configure_manageable_nodes HTTP/1.1" 204 0
>2018-06-26 11:15:10,300 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,300 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.configure_manageable_nodes 204
>2018-06-26 11:15:10,301 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.manage -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,315 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1.manage HTTP/1.1" 204 0
>2018-06-26 11:15:10,315 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,315 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.manage 204
>2018-06-26 11:15:10,315 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.provide -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,330 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1.provide HTTP/1.1" 204 0
>2018-06-26 11:15:10,331 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,331 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.provide 204
>2018-06-26 11:15:10,331 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.introspect_manageable_nodes -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,346 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1.introspect_manageable_nodes HTTP/1.1" 204 0
>2018-06-26 11:15:10,347 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,347 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.introspect_manageable_nodes 204
>2018-06-26 11:15:10,347 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.validate_nodes -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,362 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1.validate_nodes HTTP/1.1" 204 0
>2018-06-26 11:15:10,362 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,362 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.validate_nodes 204
>2018-06-26 11:15:10,363 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1._introspect -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,378 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1._introspect HTTP/1.1" 204 0
>2018-06-26 11:15:10,379 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
>
>2018-06-26 11:15:10,379 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1._introspect 204
>2018-06-26 11:15:10,379 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.configure -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8"
>2018-06-26 11:15:10,394 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1.configure HTTP/1.1" 204 0
>2018-06-26 11:15:10,394 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive
>RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged.
> >2018-06-26 11:15:10,394 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.configure 204 >2018-06-26 11:15:10,395 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.manual_cleaning -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:10,408 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1.manual_cleaning HTTP/1.1" 204 0 >2018-06-26 11:15:10,409 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:10,409 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.manual_cleaning 204 >2018-06-26 11:15:10,409 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.introspect -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:10,423 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1.introspect HTTP/1.1" 204 0 >2018-06-26 11:15:10,424 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:10,424 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.introspect 204 >2018-06-26 11:15:10,424 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.tag_nodes -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:10,438 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.baremetal.v1.tag_nodes HTTP/1.1" 204 0 >2018-06-26 11:15:10,438 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:10,438 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.baremetal.v1.tag_nodes 204 >2018-06-26 11:15:10,439 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.networks.v1.validate_networks_input -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:10,452 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.networks.v1.validate_networks_input HTTP/1.1" 204 0 >2018-06-26 11:15:10,453 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:10,453 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.networks.v1.validate_networks_input 204 >2018-06-26 11:15:10,453 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.fernet_keys.v1.rotate_fernet_keys -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:10,467 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.fernet_keys.v1.rotate_fernet_keys HTTP/1.1" 204 0 >2018-06-26 11:15:10,467 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:10,467 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.fernet_keys.v1.rotate_fernet_keys 204 >2018-06-26 11:15:10,467 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.swift_ring.v1.rebalance -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:10,481 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.swift_ring.v1.rebalance HTTP/1.1" 204 0 >2018-06-26 11:15:10,481 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:10,481 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.swift_ring.v1.rebalance 204 >2018-06-26 11:15:10,482 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.networks.v1.update_networks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:10,495 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.networks.v1.update_networks HTTP/1.1" 204 0 >2018-06-26 11:15:10,496 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:10,496 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.networks.v1.update_networks 204 >2018-06-26 11:15:10,496 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.package_update.v1.package_update_plan -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:10,510 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.package_update.v1.package_update_plan HTTP/1.1" 204 0 >2018-06-26 11:15:10,510 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:10,510 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.package_update.v1.package_update_plan 204 >2018-06-26 11:15:10,511 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.skydive_ansible.v1.skydive_install -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:10,525 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.skydive_ansible.v1.skydive_install HTTP/1.1" 204 0 >2018-06-26 11:15:10,525 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:10,525 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.skydive_ansible.v1.skydive_install 204 >2018-06-26 11:15:10,525 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.package_update.v1.get_config -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:10,539 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.package_update.v1.get_config HTTP/1.1" 204 0 >2018-06-26 11:15:10,540 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:10,540 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.package_update.v1.get_config 204 >2018-06-26 11:15:10,540 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.package_update.v1.ffwd_upgrade_converge_plan -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:10,555 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.package_update.v1.ffwd_upgrade_converge_plan HTTP/1.1" 204 0 >2018-06-26 11:15:10,555 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:10,555 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.package_update.v1.ffwd_upgrade_converge_plan 204 >2018-06-26 11:15:10,555 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.package_update.v1.converge_upgrade_plan -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:10,569 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.package_update.v1.converge_upgrade_plan HTTP/1.1" 204 0 >2018-06-26 11:15:10,570 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:10,570 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.package_update.v1.converge_upgrade_plan 204 >2018-06-26 11:15:10,570 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.package_update.v1.update_converge_plan -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:10,584 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.package_update.v1.update_converge_plan HTTP/1.1" 204 0 >2018-06-26 11:15:10,584 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:10,584 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.package_update.v1.update_converge_plan 204 >2018-06-26 11:15:10,585 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.package_update.v1.update_nodes -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:10,598 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.package_update.v1.update_nodes HTTP/1.1" 204 0 >2018-06-26 11:15:10,599 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:10,599 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.package_update.v1.update_nodes 204 >2018-06-26 11:15:10,599 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.undercloud_backup.v1.backup -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:10,613 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.undercloud_backup.v1.backup HTTP/1.1" 204 0 >2018-06-26 11:15:10,614 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:10,614 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.undercloud_backup.v1.backup 204 >2018-06-26 11:15:10,614 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.derive_params.v1.derive_parameters -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:10,628 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.derive_params.v1.derive_parameters HTTP/1.1" 204 0 >2018-06-26 11:15:10,628 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:10,628 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.derive_params.v1.derive_parameters 204 >2018-06-26 11:15:10,629 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.derive_params.v1._derive_parameters_per_role -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:10,643 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.derive_params.v1._derive_parameters_per_role HTTP/1.1" 204 0 >2018-06-26 11:15:10,643 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:10,643 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.derive_params.v1._derive_parameters_per_role 204 >2018-06-26 11:15:10,644 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:10,657 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan HTTP/1.1" 204 0 >2018-06-26 11:15:10,658 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. 
> >2018-06-26 11:15:10,658 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan 204 >2018-06-26 11:15:10,658 DEBUG: REQ: curl -g -i -X DELETE http://192.0.3.1:8989/v2/workflows/tripleo.derive_params.v1._get_role_info -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:10,672 DEBUG: http://192.0.3.1:8989 "DELETE /v2/workflows/tripleo.derive_params.v1._get_role_info HTTP/1.1" 204 0 >2018-06-26 11:15:10,672 DEBUG: RESP: [204] Content-Length: 0 Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: Omitted, Content-Type is set to None. Only application/json responses have their bodies logged. > >2018-06-26 11:15:10,672 DEBUG: HTTP DELETE http://192.0.3.1:8989/v2/workflows/tripleo.derive_params.v1._get_role_info 204 >2018-06-26 11:15:10,677 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '--- >version: '2.0' >name: tripleo.access.v1 >description: TripleO administration access workflows > >workflows: > > enable_ssh_admin: > description: >- > This workflow creates an admin user on the overcloud nodes, > which can then be used for connecting for automated > administrative or deployment tasks, e.g. via Ansible. The > workflow can be used both for Nova-managed and split-stack > deployments, assuming the correct input values are passed > in. The workflow defaults to Nova-managed approach, for which no > additional parameters need to be supplied. In case of > split-stack, temporary ssh connection details (user, key, list > of servers) need to be provided -- these are only used > temporarily to create the actual ssh admin user for use by > Mistral. 
> tags: > - tripleo-common-managed > input: > - ssh_private_key: null > - ssh_user: null > - ssh_servers: [] > - overcloud_admin: tripleo-admin > - queue_name: tripleo > tasks: > get_pubkey: > action: tripleo.validations.get_pubkey > on-success: generate_playbook > publish: > pubkey: <% task().result %> > > generate_playbook: > on-success: > - create_admin_via_nova: <% $.ssh_private_key = null %> > - create_admin_via_ssh: <% $.ssh_private_key != null %> > publish: > create_admin_tasks: > - name: create user <% $.overcloud_admin %> > user: > name: '<% $.overcloud_admin %>' > - name: grant admin rights to user <% $.overcloud_admin %> > copy: > dest: /etc/sudoers.d/<% $.overcloud_admin %> > content: | > <% $.overcloud_admin %> ALL=(ALL) NOPASSWD:ALL > mode: 0440 > - name: ensure .ssh dir exists for user <% $.overcloud_admin %> > file: > path: /home/<% $.overcloud_admin %>/.ssh > state: directory > owner: <% $.overcloud_admin %> > group: <% $.overcloud_admin %> > mode: 0700 > - name: ensure authorized_keys file exists for user <% $.overcloud_admin %> > file: > path: /home/<% $.overcloud_admin %>/.ssh/authorized_keys > state: touch > owner: <% $.overcloud_admin %> > group: <% $.overcloud_admin %> > mode: 0700 > - name: authorize TripleO Mistral key for user <% $.overcloud_admin %> > lineinfile: > path: /home/<% $.overcloud_admin %>/.ssh/authorized_keys > line: <% $.pubkey %> > regexp: "Generated by TripleO" > > # Nova variant > create_admin_via_nova: > workflow: tripleo.access.v1.create_admin_via_nova > input: > queue_name: <% $.queue_name %> > ssh_servers: <% $.ssh_servers %> > tasks: <% $.create_admin_tasks %> > overcloud_admin: <% $.overcloud_admin %> > > # SSH variant > create_admin_via_ssh: > workflow: tripleo.access.v1.create_admin_via_ssh > input: > ssh_private_key: <% $.ssh_private_key %> > ssh_user: <% $.ssh_user %> > ssh_servers: <% $.ssh_servers %> > tasks: <% $.create_admin_tasks %> > > create_admin_via_nova: > input: > - tasks > - queue_name: tripleo > - 
ssh_servers: [] > - overcloud_admin: tripleo-admin > - ansible_extra_env_variables: > ANSIBLE_HOST_KEY_CHECKING: 'False' > tags: > - tripleo-common-managed > tasks: > get_servers: > action: nova.servers_list > on-success: create_admin > publish: > servers: <% let(root => $) -> task().result._info.where($.addresses.ctlplane.addr.any($ in $root.ssh_servers)) %> > > create_admin: > workflow: tripleo.deployment.v1.deploy_on_server > on-success: get_privkey > with-items: server in <% $.servers %> > input: > server_name: <% $.server.name %> > server_uuid: <% $.server.id %> > queue_name: <% $.queue_name %> > config_name: create_admin > group: ansible > config: | > - hosts: localhost > connection: local > tasks: <% json_pp($.tasks) %> > > get_privkey: > action: tripleo.validations.get_privkey > on-success: wait_for_occ > publish: > privkey: <% task().result %> > > wait_for_occ: > action: tripleo.ansible-playbook > input: > inventory: > overcloud: > hosts: <% $.ssh_servers.toDict($, {}) %> > remote_user: <% $.overcloud_admin %> > ssh_private_key: <% $.privkey %> > extra_env_variables: <% $.ansible_extra_env_variables %> > playbook: > - hosts: overcloud > gather_facts: no > tasks: > - name: wait for connection > wait_for_connection: > sleep: 5 > timeout: 300 > > create_admin_via_ssh: > input: > - tasks > - ssh_private_key > - ssh_user > - ssh_servers > - ansible_extra_env_variables: > ANSIBLE_HOST_KEY_CHECKING: 'False' > > tags: > - tripleo-common-managed > tasks: > write_tmp_playbook: > action: tripleo.ansible-playbook > input: > inventory: > overcloud: > hosts: <% $.ssh_servers.toDict($, {}) %> > remote_user: <% $.ssh_user %> > ssh_private_key: <% $.ssh_private_key %> > extra_env_variables: <% $.ansible_extra_env_variables %> > become: true > become_user: root > playbook: > - hosts: overcloud > tasks: <% $.tasks %> >' >2018-06-26 11:15:10,886 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 6130 >2018-06-26 11:15:10,887 DEBUG: RESP: [201] Content-Length: 6130 
Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:10 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.access.v1\ndescription: TripleO administration access workflows\n\nworkflows:\n\n enable_ssh_admin:\n description: >-\n This workflow creates an admin user on the overcloud nodes,\n which can then be used for connecting for automated\n administrative or deployment tasks, e.g. via Ansible. The\n workflow can be used both for Nova-managed and split-stack\n deployments, assuming the correct input values are passed\n in. The workflow defaults to Nova-managed approach, for which no\n additional parameters need to be supplied. In case of\n split-stack, temporary ssh connection details (user, key, list\n of servers) need to be provided -- these are only used\n temporarily to create the actual ssh admin user for use by\n Mistral.\n tags:\n - tripleo-common-managed\n input:\n - ssh_private_key: null\n - ssh_user: null\n - ssh_servers: []\n - overcloud_admin: tripleo-admin\n - queue_name: tripleo\n tasks:\n get_pubkey:\n action: tripleo.validations.get_pubkey\n on-success: generate_playbook\n publish:\n pubkey: <% task().result %>\n\n generate_playbook:\n on-success:\n - create_admin_via_nova: <% $.ssh_private_key = null %>\n - create_admin_via_ssh: <% $.ssh_private_key != null %>\n publish:\n create_admin_tasks:\n - name: create user <% $.overcloud_admin %>\n user:\n name: '<% $.overcloud_admin %>'\n - name: grant admin rights to user <% $.overcloud_admin %>\n copy:\n dest: /etc/sudoers.d/<% $.overcloud_admin %>\n content: |\n <% $.overcloud_admin %> ALL=(ALL) NOPASSWD:ALL\n mode: 0440\n - name: ensure .ssh dir exists for user <% $.overcloud_admin %>\n file:\n path: /home/<% $.overcloud_admin %>/.ssh\n state: directory\n owner: <% $.overcloud_admin %>\n group: <% $.overcloud_admin %>\n mode: 0700\n - name: ensure authorized_keys file exists for user <% $.overcloud_admin %>\n file:\n path: /home/<% $.overcloud_admin 
%>/.ssh/authorized_keys\n state: touch\n owner: <% $.overcloud_admin %>\n group: <% $.overcloud_admin %>\n mode: 0700\n - name: authorize TripleO Mistral key for user <% $.overcloud_admin %>\n lineinfile:\n path: /home/<% $.overcloud_admin %>/.ssh/authorized_keys\n line: <% $.pubkey %>\n regexp: \"Generated by TripleO\"\n\n # Nova variant\n create_admin_via_nova:\n workflow: tripleo.access.v1.create_admin_via_nova\n input:\n queue_name: <% $.queue_name %>\n ssh_servers: <% $.ssh_servers %>\n tasks: <% $.create_admin_tasks %>\n overcloud_admin: <% $.overcloud_admin %>\n\n # SSH variant\n create_admin_via_ssh:\n workflow: tripleo.access.v1.create_admin_via_ssh\n input:\n ssh_private_key: <% $.ssh_private_key %>\n ssh_user: <% $.ssh_user %>\n ssh_servers: <% $.ssh_servers %>\n tasks: <% $.create_admin_tasks %>\n\n create_admin_via_nova:\n input:\n - tasks\n - queue_name: tripleo\n - ssh_servers: []\n - overcloud_admin: tripleo-admin\n - ansible_extra_env_variables:\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n tags:\n - tripleo-common-managed\n tasks:\n get_servers:\n action: nova.servers_list\n on-success: create_admin\n publish:\n servers: <% let(root => $) -> task().result._info.where($.addresses.ctlplane.addr.any($ in $root.ssh_servers)) %>\n\n create_admin:\n workflow: tripleo.deployment.v1.deploy_on_server\n on-success: get_privkey\n with-items: server in <% $.servers %>\n input:\n server_name: <% $.server.name %>\n server_uuid: <% $.server.id %>\n queue_name: <% $.queue_name %>\n config_name: create_admin\n group: ansible\n config: |\n - hosts: localhost\n connection: local\n tasks: <% json_pp($.tasks) %>\n\n get_privkey:\n action: tripleo.validations.get_privkey\n on-success: wait_for_occ\n publish:\n privkey: <% task().result %>\n\n wait_for_occ:\n action: tripleo.ansible-playbook\n input:\n inventory:\n overcloud:\n hosts: <% $.ssh_servers.toDict($, {}) %>\n remote_user: <% $.overcloud_admin %>\n ssh_private_key: <% $.privkey %>\n extra_env_variables: <% 
$.ansible_extra_env_variables %>\n playbook:\n - hosts: overcloud\n gather_facts: no\n tasks:\n - name: wait for connection\n wait_for_connection:\n sleep: 5\n timeout: 300\n\n create_admin_via_ssh:\n input:\n - tasks\n - ssh_private_key\n - ssh_user\n - ssh_servers\n - ansible_extra_env_variables:\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n\n tags:\n - tripleo-common-managed\n tasks:\n write_tmp_playbook:\n action: tripleo.ansible-playbook\n input:\n inventory:\n overcloud:\n hosts: <% $.ssh_servers.toDict($, {}) %>\n remote_user: <% $.ssh_user %>\n ssh_private_key: <% $.ssh_private_key %>\n extra_env_variables: <% $.ansible_extra_env_variables %>\n become: true\n become_user: root\n playbook:\n - hosts: overcloud\n tasks: <% $.tasks %>\n", "name": "tripleo.access.v1", "tags": [], "created_at": "2018-06-26 05:45:10", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "86ed3424-fcb7-40a6-af10-ceec8b50495a"} > >2018-06-26 11:15:10,887 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 11:15:10,888 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '--- >version: '2.0' >name: tripleo.stack.v1 >description: TripleO Stack Workflows > >workflows: > > wait_for_stack_complete_or_failed: > input: > - stack > - timeout: 14400 # 4 hours. Default timeout of stack deployment > > tags: > - tripleo-common-managed > > tasks: > > wait_for_stack_status: > action: heat.stacks_get stack_id=<% $.stack %> > timeout: <% $.timeout %> > retry: > delay: 15 > count: <% $.timeout / 15 %> > continue-on: <% task().result.stack_status in ['CREATE_IN_PROGRESS', 'UPDATE_IN_PROGRESS', 'DELETE_IN_PROGRESS'] %> > > wait_for_stack_in_progress: > input: > - stack > - timeout: 600 # 10 minutes. 
Should not take much longer for a stack to transition to IN_PROGRESS > > tags: > - tripleo-common-managed > > tasks: > > wait_for_stack_status: > action: heat.stacks_get stack_id=<% $.stack %> > timeout: <% $.timeout %> > retry: > delay: 15 > count: <% $.timeout / 15 %> > continue-on: <% task().result.stack_status in ['CREATE_COMPLETE', 'CREATE_FAILED', 'UPDATE_COMPLETE', 'UPDATE_FAILED', 'DELETE_FAILED'] %> > > wait_for_stack_does_not_exist: > input: > - stack > - timeout: 3600 > > tags: > - tripleo-common-managed > > tasks: > wait_for_stack_does_not_exist: > action: heat.stacks_list > timeout: <% $.timeout %> > retry: > delay: 15 > count: <% $.timeout / 15 %> > continue-on: <% $.stack in task(wait_for_stack_does_not_exist).result.select([$.stack_name, $.id]).flatten() %> > > delete_stack: > input: > - stack > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > delete_the_stack: > action: heat.stacks_delete stack_id=<% $.stack %> > on-success: wait_for_stack_does_not_exist > on-error: delete_the_stack_failed > > delete_the_stack_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(delete_the_stack).result %> > > wait_for_stack_does_not_exist: > workflow: tripleo.stack.v1.wait_for_stack_does_not_exist stack=<% $.stack %> > on-success: send_message > on-error: wait_for_stack_does_not_exist_failed > > wait_for_stack_does_not_exist_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(wait_for_stack_does_not_exist).result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.scale.v1.delete_stack > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> >' >2018-06-26 11:15:11,098 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 3236 >2018-06-26 11:15:11,099 
DEBUG: RESP: [201] Content-Length: 3236 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:11 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.stack.v1\ndescription: TripleO Stack Workflows\n\nworkflows:\n\n wait_for_stack_complete_or_failed:\n input:\n - stack\n - timeout: 14400 # 4 hours. Default timeout of stack deployment\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n wait_for_stack_status:\n action: heat.stacks_get stack_id=<% $.stack %>\n timeout: <% $.timeout %>\n retry:\n delay: 15\n count: <% $.timeout / 15 %>\n continue-on: <% task().result.stack_status in ['CREATE_IN_PROGRESS', 'UPDATE_IN_PROGRESS', 'DELETE_IN_PROGRESS'] %>\n\n wait_for_stack_in_progress:\n input:\n - stack\n - timeout: 600 # 10 minutes. Should not take much longer for a stack to transition to IN_PROGRESS\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n wait_for_stack_status:\n action: heat.stacks_get stack_id=<% $.stack %>\n timeout: <% $.timeout %>\n retry:\n delay: 15\n count: <% $.timeout / 15 %>\n continue-on: <% task().result.stack_status in ['CREATE_COMPLETE', 'CREATE_FAILED', 'UPDATE_COMPLETE', 'UPDATE_FAILED', 'DELETE_FAILED'] %>\n\n wait_for_stack_does_not_exist:\n input:\n - stack\n - timeout: 3600\n\n tags:\n - tripleo-common-managed\n\n tasks:\n wait_for_stack_does_not_exist:\n action: heat.stacks_list\n timeout: <% $.timeout %>\n retry:\n delay: 15\n count: <% $.timeout / 15 %>\n continue-on: <% $.stack in task(wait_for_stack_does_not_exist).result.select([$.stack_name, $.id]).flatten() %>\n\n delete_stack:\n input:\n - stack\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n delete_the_stack:\n action: heat.stacks_delete stack_id=<% $.stack %>\n on-success: wait_for_stack_does_not_exist\n on-error: delete_the_stack_failed\n\n delete_the_stack_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(delete_the_stack).result %>\n\n wait_for_stack_does_not_exist:\n 
workflow: tripleo.stack.v1.wait_for_stack_does_not_exist stack=<% $.stack %>\n on-success: send_message\n on-error: wait_for_stack_does_not_exist_failed\n\n wait_for_stack_does_not_exist_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(wait_for_stack_does_not_exist).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.scale.v1.delete_stack\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.stack.v1", "tags": [], "created_at": "2018-06-26 05:45:11", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "5dbd4422-24c7-462c-b547-1e9eb365bcd5"} > >2018-06-26 11:15:11,099 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 11:15:11,100 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '--- >version: '2.0' >name: tripleo.validations.v1 >description: TripleO Validations Workflows v1 > >workflows: > > run_validation: > input: > - validation_name > - plan: overcloud > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > notify_running: > on-complete: run_validation > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.run_validation > payload: > validation_name: <% $.validation_name %> > plan: <% $.plan %> > status: RUNNING > execution: <% execution() %> > > run_validation: > on-success: send_message > on-error: set_status_failed > action: tripleo.validations.run_validation validation=<% $.validation_name %> plan=<% $.plan %> > publish: > status: SUCCESS 
> stdout: <% task().result.stdout %> > stderr: <% task().result.stderr %> > > set_status_failed: > on-complete: send_message > publish: > status: FAILED > stdout: <% task(run_validation).result.stdout %> > stderr: <% task(run_validation).result.stderr %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.run_validation > payload: > validation_name: <% $.validation_name %> > plan: <% $.plan %> > status: <% $.get('status', 'SUCCESS') %> > stdout: <% $.stdout %> > stderr: <% $.stderr %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > run_validations: > input: > - validation_names: [] > - plan: overcloud > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > notify_running: > on-complete: run_validations > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.run_validations > payload: > validation_names: <% $.validation_names %> > plan: <% $.plan %> > status: RUNNING > execution: <% execution() %> > > run_validations: > on-success: send_message > on-error: set_status_failed > workflow: tripleo.validations.v1.run_validation validation_name=<% $.validation %> plan=<% $.plan %> queue_name=<% $.queue_name %> > with-items: validation in <% $.validation_names %> > publish: > status: SUCCESS > > set_status_failed: > on-complete: send_message > publish: > status: FAILED > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.run_validations > payload: > validation_names: <% $.validation_names %> > plan: <% $.plan %> > status: <% $.get('status', 'SUCCESS') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > run_groups: > input: > - group_names: [] > - plan: overcloud > 
- queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > find_validations: > on-success: notify_running > action: tripleo.validations.list_validations groups=<% $.group_names %> > publish: > validations: <% task().result %> > > notify_running: > on-complete: run_validation_group > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.run_validations > payload: > group_names: <% $.group_names %> > validation_names: <% $.validations.id %> > plan: <% $.plan %> > status: RUNNING > execution: <% execution() %> > > run_validation_group: > on-success: send_message > on-error: set_status_failed > workflow: tripleo.validations.v1.run_validation validation_name=<% $.validation %> plan=<% $.plan %> queue_name=<% $.queue_name %> > with-items: validation in <% $.validations.id %> > publish: > status: SUCCESS > > set_status_failed: > on-complete: send_message > publish: > status: FAILED > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.run_groups > payload: > group_names: <% $.group_names %> > validation_names: <% $.validations.id %> > plan: <% $.plan %> > status: <% $.get('status', 'SUCCESS') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > list: > input: > - group_names: [] > tags: > - tripleo-common-managed > tasks: > find_validations: > action: tripleo.validations.list_validations groups=<% $.group_names %> > > list_groups: > tags: > - tripleo-common-managed > tasks: > find_groups: > action: tripleo.validations.list_groups > > add_validation_ssh_key_parameter: > input: > - container > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > test_validations_enabled: > action: tripleo.validations.enabled > on-success: get_pubkey > on-error: unset_validation_key_parameter > > get_pubkey: > action: 
tripleo.validations.get_pubkey > on-success: set_validation_key_parameter > publish: > pubkey: <% task().result %> > > set_validation_key_parameter: > action: tripleo.parameters.update > input: > parameters: > node_admin_extra_ssh_keys: <% $.pubkey %> > container: <% $.container %> > > # NOTE(shadower): We need to clear keys from a previous deployment > unset_validation_key_parameter: > action: tripleo.parameters.update > input: > parameters: > node_admin_extra_ssh_keys: "" > container: <% $.container %> > > copy_ssh_key: > input: > # FIXME: we should stop using heat-admin as e.g. split-stack > # environments (where Nova didn't create overcloud nodes) don't > # have it present > - overcloud_admin: heat-admin > - queue_name: tripleo > tags: > - tripleo-common-managed > tasks: > get_servers: > action: nova.servers_list > on-success: get_pubkey > publish: > servers: <% task().result._info %> > > get_pubkey: > action: tripleo.validations.get_pubkey > on-success: deploy_ssh_key > publish: > pubkey: <% task().result %> > > deploy_ssh_key: > workflow: tripleo.deployment.v1.deploy_on_server > with-items: server in <% $.servers %> > input: > server_name: <% $.server.name %> > server_uuid: <% $.server.id %> > config: | > #!/bin/bash > if ! 
grep "<% $.pubkey %>" /home/<% $.overcloud_admin %>/.ssh/authorized_keys; then > echo "<% $.pubkey %>" >> /home/<% $.overcloud_admin %>/.ssh/authorized_keys > fi > config_name: copy_ssh_key > group: script > queue_name: <% $.queue_name %> > > check_boot_images: > input: > - deploy_kernel_name: 'bm-deploy-kernel' > - deploy_ramdisk_name: 'bm-deploy-ramdisk' > - run_validations: true > - queue_name: tripleo > output: > errors: <% $.errors %> > warnings: <% $.warnings %> > kernel_id: <% $.kernel_id %> > ramdisk_id: <% $.ramdisk_id %> > tags: > - tripleo-common-managed > tasks: > check_run_validations: > on-complete: > - get_images: <% $.run_validations %> > - send_message: <% not $.run_validations %> > > get_images: > action: glance.images_list > on-success: check_images > publish: > images: <% task().result %> > > check_images: > action: tripleo.validations.check_boot_images > input: > images: <% $.images %> > deploy_kernel_name: <% $.deploy_kernel_name %> > deploy_ramdisk_name: <% $.deploy_ramdisk_name %> > on-success: send_message > publish: > kernel_id: <% task().result.kernel_id %> > ramdisk_id: <% task().result.ramdisk_id %> > warnings: <% task().result.warnings %> > errors: <% task().result.errors %> > on-error: send_message > publish-on-error: > kernel_id: <% task().result.kernel_id %> > ramdisk_id: <% task().result.ramdisk_id %> > warnings: <% task().result.warnings %> > errors: <% task().result.errors %> > status: FAILED > message: <% task().result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.check_boot_images > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > kernel_id: <% $.kernel_id %> > ramdisk_id: <% $.ramdisk_id %> > errors: <% $.errors %> > warnings: <% $.warnings %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > collect_flavors: > input: > - 
roles_info: {} > - run_validations: true > - queue_name: tripleo > output: > errors: <% $.errors %> > warnings: <% $.warnings %> > flavors: <% $.flavors %> > > tags: > - tripleo-common-managed > > tasks: > check_run_validations: > on-complete: > - check_flavors: <% $.run_validations %> > - send_message: <% not $.run_validations %> > > check_flavors: > action: tripleo.validations.check_flavors > input: > roles_info: <% $.roles_info %> > on-success: send_message > publish: > flavors: <% task().result.flavors %> > errors: <% task().result.errors %> > warnings: <% task().result.warnings %> > on-error: send_message > publish-on-error: > flavors: {} > errors: <% task().result.errors %> > warnings: <% task().result.warnings %> > status: FAILED > message: <% task().result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.collect_flavors > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > flavors: <% $.flavors %> > errors: <% $.errors %> > warnings: <% $.warnings %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > check_ironic_boot_configuration: > input: > - kernel_id: null > - ramdisk_id: null > - run_validations: true > - queue_name: tripleo > output: > errors: <% $.errors %> > warnings: <% $.warnings %> > > tags: > - tripleo-common-managed > > tasks: > check_run_validations: > on-complete: > - get_ironic_nodes: <% $.run_validations %> > - send_message: <% not $.run_validations %> > > get_ironic_nodes: > action: ironic.node_list > input: > provision_state: available > maintenance: false > detail: true > on-success: check_node_boot_configuration > publish: > nodes: <% task().result %> > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > check_node_boot_configuration: > action: tripleo.validations.check_node_boot_configuration > 
input: > node: <% $.node %> > kernel_id: <% $.kernel_id %> > ramdisk_id: <% $.ramdisk_id %> > with-items: node in <% $.nodes %> > on-success: send_message > publish: > errors: <% task().result.errors.flatten() %> > warnings: <% task().result.warnings.flatten() %> > on-error: send_message > publish-on-error: > errors: <% task().result.errors.flatten() %> > warnings: <% task().result.warnings.flatten() %> > status: FAILED > message: <% task().result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.check_ironic_boot_configuration > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > errors: <% $.errors %> > warnings: <% $.warnings %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > verify_profiles: > input: > - flavors: [] > - run_validations: true > - queue_name: tripleo > output: > errors: <% $.errors %> > warnings: <% $.warnings %> > > tags: > - tripleo-common-managed > > tasks: > check_run_validations: > on-complete: > - get_ironic_nodes: <% $.run_validations %> > - send_message: <% not $.run_validations %> > > get_ironic_nodes: > action: ironic.node_list > input: > maintenance: false > detail: true > on-success: verify_profiles > publish: > nodes: <% task().result %> > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > verify_profiles: > action: tripleo.validations.verify_profiles > input: > nodes: <% $.nodes %> > flavors: <% $.flavors %> > on-success: send_message > publish: > errors: <% task().result.errors %> > warnings: <% task().result.warnings %> > on-error: send_message > publish-on-error: > errors: <% task().result.errors %> > warnings: <% task().result.warnings %> > status: FAILED > message: <% task().result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% 
$.queue_name %> > messages: > body: > type: tripleo.validations.v1.verify_profiles > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > errors: <% $.errors %> > warnings: <% $.warnings %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > check_default_nodes_count: > input: > - stack_id: overcloud > - parameters: {} > - default_role_counts: {} > - run_validations: true > - queue_name: tripleo > output: > statistics: <% $.statistics %> > errors: <% $.errors %> > warnings: <% $.warnings %> > > tags: > - tripleo-common-managed > > tasks: > check_run_validations: > on-complete: > - get_hypervisor_statistics: <% $.run_validations %> > - send_message: <% not $.run_validations %> > > get_hypervisor_statistics: > action: nova.hypervisors_statistics > on-success: get_stack > publish: > statistics: <% task().result %> > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > errors: [] > warnings: [] > statistics: null > > get_stack: > action: heat.stacks_get > input: > stack_id: <% $.stack_id %> > on-success: get_associated_nodes > publish: > stack: <% task().result %> > on-error: get_associated_nodes > publish-on-error: > stack: null > > get_associated_nodes: > action: ironic.node_list > input: > associated: true > on-success: get_available_nodes > publish: > associated_nodes: <% task().result %> > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > errors: [] > warnings: [] > > get_available_nodes: > action: ironic.node_list > input: > provision_state: available > associated: false > maintenance: false > on-success: check_nodes_count > publish: > available_nodes: <% task().result %> > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > errors: [] > warnings: [] > > check_nodes_count: > action: tripleo.validations.check_nodes_count > input: > statistics: <% $.statistics %> > 
stack: <% $.stack %> > associated_nodes: <% $.associated_nodes %> > available_nodes: <% $.available_nodes %> > parameters: <% $.parameters %> > default_role_counts: <% $.default_role_counts %> > on-success: send_message > publish: > errors: <% task().result.errors %> > warnings: <% task().result.warnings %> > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > statistics: null > errors: <% task().result.errors %> > warnings: <% task().result.warnings %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.check_hypervisor_stats > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > statistics: <% $.statistics %> > errors: <% $.errors %> > warnings: <% $.warnings %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > check_pre_deployment_validations: > input: > - deploy_kernel_name: 'bm-deploy-kernel' > - deploy_ramdisk_name: 'bm-deploy-ramdisk' > - roles_info: {} > - stack_id: overcloud > - parameters: {} > - default_role_counts: {} > - run_validations: true > - queue_name: tripleo > > output: > errors: <% $.errors %> > warnings: <% $.warnings %> > kernel_id: <% $.kernel_id %> > ramdisk_id: <% $.ramdisk_id %> > flavors: <% $.flavors %> > statistics: <% $.statistics %> > tags: > - tripleo-common-managed > tasks: > init_messages: > on-success: check_boot_images > publish: > errors: [] > warnings: [] > > check_boot_images: > workflow: check_boot_images > input: > deploy_kernel_name: <% $.deploy_kernel_name %> > deploy_ramdisk_name: <% $.deploy_ramdisk_name %> > run_validations: <% $.run_validations %> > queue_name: <% $.queue_name %> > publish: > errors: <% $.errors + task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > kernel_id: <% task().result.get('kernel_id') %> > ramdisk_id: <% 
task().result.get('ramdisk_id') %> > publish-on-error: > errors: <% $.errors + task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > kernel_id: <% task().result.get('kernel_id') %> > ramdisk_id: <% task().result.get('ramdisk_id') %> > status: FAILED > on-success: collect_flavors > on-error: collect_flavors > > collect_flavors: > workflow: collect_flavors > input: > roles_info: <% $.roles_info %> > run_validations: <% $.run_validations %> > queue_name: <% $.queue_name %> > publish: > errors: <% $.errors + task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > flavors: <% task().result.get('flavors') %> > publish-on-error: > errors: <% $.errors + task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > flavors: <% task().result.get('flavors') %> > status: FAILED > on-success: check_ironic_boot_configuration > on-error: check_ironic_boot_configuration > > check_ironic_boot_configuration: > workflow: check_ironic_boot_configuration > input: > kernel_id: <% $.kernel_id %> > ramdisk_id: <% $.ramdisk_id %> > run_validations: <% $.run_validations %> > queue_name: <% $.queue_name %> > publish: > errors: <% $.errors + task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > publish-on-error: > errors: <% $.errors + task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > status: FAILED > on-success: check_default_nodes_count > on-error: check_default_nodes_count > > check_default_nodes_count: > workflow: check_default_nodes_count > # ironic-nova sync happens once in two minutes > retry: count=12 delay=10 > input: > stack_id: <% $.stack_id %> > parameters: <% $.parameters %> > default_role_counts: <% $.default_role_counts %> > run_validations: <% $.run_validations %> > queue_name: <% $.queue_name %> > publish: > errors: <% $.errors + 
task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > statistics: <% task().result.get('statistics') %> > publish-on-error: > errors: <% $.errors + task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > statistics: <% task().result.get('statistics') %> > status: FAILED > on-success: verify_profiles > # Do not confuse user with info about profiles if the nodes > # count is off in the first place. Skip directly to > # send_message. (bug 1703942) > on-error: send_message > > verify_profiles: > workflow: verify_profiles > input: > flavors: <% $.flavors %> > run_validations: <% $.run_validations %> > queue_name: <% $.queue_name %> > publish: > errors: <% $.errors + task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > publish-on-error: > errors: <% $.errors + task().result.get('errors', []) %> > warnings: <% $.warnings + task().result.get('warnings', []) %> > status: FAILED > on-success: send_message > on-error: send_message > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.validations.v1.check_hypervisor_stats > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > kernel_id: <% $.kernel_id %> > ramdisk_id: <% $.ramdisk_id %> > flavors: <% $.flavors %> > statistics: <% $.statistics %> > errors: <% $.errors %> > warnings: <% $.warnings %> > on-success: > - fail: <% $.get('status') = "FAILED" %> >' >2018-06-26 11:15:12,410 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 25434 >2018-06-26 11:15:12,450 DEBUG: RESP: [201] Content-Length: 25434 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:12 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.validations.v1\ndescription: TripleO Validations Workflows 
v1\n\nworkflows:\n\n run_validation:\n input:\n - validation_name\n - plan: overcloud\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n notify_running:\n on-complete: run_validation\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validation\n payload:\n validation_name: <% $.validation_name %>\n plan: <% $.plan %>\n status: RUNNING\n execution: <% execution() %>\n\n run_validation:\n on-success: send_message\n on-error: set_status_failed\n action: tripleo.validations.run_validation validation=<% $.validation_name %> plan=<% $.plan %>\n publish:\n status: SUCCESS\n stdout: <% task().result.stdout %>\n stderr: <% task().result.stderr %>\n\n set_status_failed:\n on-complete: send_message\n publish:\n status: FAILED\n stdout: <% task(run_validation).result.stdout %>\n stderr: <% task(run_validation).result.stderr %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validation\n payload:\n validation_name: <% $.validation_name %>\n plan: <% $.plan %>\n status: <% $.get('status', 'SUCCESS') %>\n stdout: <% $.stdout %>\n stderr: <% $.stderr %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n run_validations:\n input:\n - validation_names: []\n - plan: overcloud\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n notify_running:\n on-complete: run_validations\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validations\n payload:\n validation_names: <% $.validation_names %>\n plan: <% $.plan %>\n status: RUNNING\n execution: <% execution() %>\n\n run_validations:\n on-success: send_message\n on-error: set_status_failed\n workflow: tripleo.validations.v1.run_validation 
validation_name=<% $.validation %> plan=<% $.plan %> queue_name=<% $.queue_name %>\n with-items: validation in <% $.validation_names %>\n publish:\n status: SUCCESS\n\n set_status_failed:\n on-complete: send_message\n publish:\n status: FAILED\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validations\n payload:\n validation_names: <% $.validation_names %>\n plan: <% $.plan %>\n status: <% $.get('status', 'SUCCESS') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n run_groups:\n input:\n - group_names: []\n - plan: overcloud\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n find_validations:\n on-success: notify_running\n action: tripleo.validations.list_validations groups=<% $.group_names %>\n publish:\n validations: <% task().result %>\n\n notify_running:\n on-complete: run_validation_group\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_validations\n payload:\n group_names: <% $.group_names %>\n validation_names: <% $.validations.id %>\n plan: <% $.plan %>\n status: RUNNING\n execution: <% execution() %>\n\n run_validation_group:\n on-success: send_message\n on-error: set_status_failed\n workflow: tripleo.validations.v1.run_validation validation_name=<% $.validation %> plan=<% $.plan %> queue_name=<% $.queue_name %>\n with-items: validation in <% $.validations.id %>\n publish:\n status: SUCCESS\n\n set_status_failed:\n on-complete: send_message\n publish:\n status: FAILED\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.run_groups\n payload:\n group_names: <% $.group_names %>\n validation_names: <% $.validations.id %>\n plan: <% $.plan %>\n status: <% 
$.get('status', 'SUCCESS') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n list:\n input:\n - group_names: []\n tags:\n - tripleo-common-managed\n tasks:\n find_validations:\n action: tripleo.validations.list_validations groups=<% $.group_names %>\n\n list_groups:\n tags:\n - tripleo-common-managed\n tasks:\n find_groups:\n action: tripleo.validations.list_groups\n\n add_validation_ssh_key_parameter:\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n test_validations_enabled:\n action: tripleo.validations.enabled\n on-success: get_pubkey\n on-error: unset_validation_key_parameter\n\n get_pubkey:\n action: tripleo.validations.get_pubkey\n on-success: set_validation_key_parameter\n publish:\n pubkey: <% task().result %>\n\n set_validation_key_parameter:\n action: tripleo.parameters.update\n input:\n parameters:\n node_admin_extra_ssh_keys: <% $.pubkey %>\n container: <% $.container %>\n\n # NOTE(shadower): We need to clear keys from a previous deployment\n unset_validation_key_parameter:\n action: tripleo.parameters.update\n input:\n parameters:\n node_admin_extra_ssh_keys: \"\"\n container: <% $.container %>\n\n copy_ssh_key:\n input:\n # FIXME: we should stop using heat-admin as e.g. split-stack\n # environments (where Nova didn't create overcloud nodes) don't\n # have it present\n - overcloud_admin: heat-admin\n - queue_name: tripleo\n tags:\n - tripleo-common-managed\n tasks:\n get_servers:\n action: nova.servers_list\n on-success: get_pubkey\n publish:\n servers: <% task().result._info %>\n\n get_pubkey:\n action: tripleo.validations.get_pubkey\n on-success: deploy_ssh_key\n publish:\n pubkey: <% task().result %>\n\n deploy_ssh_key:\n workflow: tripleo.deployment.v1.deploy_on_server\n with-items: server in <% $.servers %>\n input:\n server_name: <% $.server.name %>\n server_uuid: <% $.server.id %>\n config: |\n #!/bin/bash\n if ! 
grep \"<% $.pubkey %>\" /home/<% $.overcloud_admin %>/.ssh/authorized_keys; then\n echo \"<% $.pubkey %>\" >> /home/<% $.overcloud_admin %>/.ssh/authorized_keys\n fi\n config_name: copy_ssh_key\n group: script\n queue_name: <% $.queue_name %>\n\n check_boot_images:\n input:\n - deploy_kernel_name: 'bm-deploy-kernel'\n - deploy_ramdisk_name: 'bm-deploy-ramdisk'\n - run_validations: true\n - queue_name: tripleo\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n tags:\n - tripleo-common-managed\n tasks:\n check_run_validations:\n on-complete:\n - get_images: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n get_images:\n action: glance.images_list\n on-success: check_images\n publish:\n images: <% task().result %>\n\n check_images:\n action: tripleo.validations.check_boot_images\n input:\n images: <% $.images %>\n deploy_kernel_name: <% $.deploy_kernel_name %>\n deploy_ramdisk_name: <% $.deploy_ramdisk_name %>\n on-success: send_message\n publish:\n kernel_id: <% task().result.kernel_id %>\n ramdisk_id: <% task().result.ramdisk_id %>\n warnings: <% task().result.warnings %>\n errors: <% task().result.errors %>\n on-error: send_message\n publish-on-error:\n kernel_id: <% task().result.kernel_id %>\n ramdisk_id: <% task().result.ramdisk_id %>\n warnings: <% task().result.warnings %>\n errors: <% task().result.errors %>\n status: FAILED\n message: <% task().result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.check_boot_images\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n collect_flavors:\n input:\n 
- roles_info: {}\n - run_validations: true\n - queue_name: tripleo\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n flavors: <% $.flavors %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_run_validations:\n on-complete:\n - check_flavors: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n check_flavors:\n action: tripleo.validations.check_flavors\n input:\n roles_info: <% $.roles_info %>\n on-success: send_message\n publish:\n flavors: <% task().result.flavors %>\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n on-error: send_message\n publish-on-error:\n flavors: {}\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n status: FAILED\n message: <% task().result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.collect_flavors\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n flavors: <% $.flavors %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n check_ironic_boot_configuration:\n input:\n - kernel_id: null\n - ramdisk_id: null\n - run_validations: true\n - queue_name: tripleo\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_run_validations:\n on-complete:\n - get_ironic_nodes: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n get_ironic_nodes:\n action: ironic.node_list\n input:\n provision_state: available\n maintenance: false\n detail: true\n on-success: check_node_boot_configuration\n publish:\n nodes: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n check_node_boot_configuration:\n action: tripleo.validations.check_node_boot_configuration\n 
input:\n node: <% $.node %>\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n with-items: node in <% $.nodes %>\n on-success: send_message\n publish:\n errors: <% task().result.errors.flatten() %>\n warnings: <% task().result.warnings.flatten() %>\n on-error: send_message\n publish-on-error:\n errors: <% task().result.errors.flatten() %>\n warnings: <% task().result.warnings.flatten() %>\n status: FAILED\n message: <% task().result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.check_ironic_boot_configuration\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n verify_profiles:\n input:\n - flavors: []\n - run_validations: true\n - queue_name: tripleo\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_run_validations:\n on-complete:\n - get_ironic_nodes: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n get_ironic_nodes:\n action: ironic.node_list\n input:\n maintenance: false\n detail: true\n on-success: verify_profiles\n publish:\n nodes: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n verify_profiles:\n action: tripleo.validations.verify_profiles\n input:\n nodes: <% $.nodes %>\n flavors: <% $.flavors %>\n on-success: send_message\n publish:\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n on-error: send_message\n publish-on-error:\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n status: FAILED\n message: <% task().result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% 
$.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.verify_profiles\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n check_default_nodes_count:\n input:\n - stack_id: overcloud\n - parameters: {}\n - default_role_counts: {}\n - run_validations: true\n - queue_name: tripleo\n output:\n statistics: <% $.statistics %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_run_validations:\n on-complete:\n - get_hypervisor_statistics: <% $.run_validations %>\n - send_message: <% not $.run_validations %>\n\n get_hypervisor_statistics:\n action: nova.hypervisors_statistics\n on-success: get_stack\n publish:\n statistics: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n errors: []\n warnings: []\n statistics: null\n\n get_stack:\n action: heat.stacks_get\n input:\n stack_id: <% $.stack_id %>\n on-success: get_associated_nodes\n publish:\n stack: <% task().result %>\n on-error: get_associated_nodes\n publish-on-error:\n stack: null\n\n get_associated_nodes:\n action: ironic.node_list\n input:\n associated: true\n on-success: get_available_nodes\n publish:\n associated_nodes: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n errors: []\n warnings: []\n\n get_available_nodes:\n action: ironic.node_list\n input:\n provision_state: available\n associated: false\n maintenance: false\n on-success: check_nodes_count\n publish:\n available_nodes: <% task().result %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n errors: []\n warnings: []\n\n check_nodes_count:\n action: tripleo.validations.check_nodes_count\n input:\n statistics: <% $.statistics 
%>\n stack: <% $.stack %>\n associated_nodes: <% $.associated_nodes %>\n available_nodes: <% $.available_nodes %>\n parameters: <% $.parameters %>\n default_role_counts: <% $.default_role_counts %>\n on-success: send_message\n publish:\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n statistics: null\n errors: <% task().result.errors %>\n warnings: <% task().result.warnings %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.check_hypervisor_stats\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n statistics: <% $.statistics %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n check_pre_deployment_validations:\n input:\n - deploy_kernel_name: 'bm-deploy-kernel'\n - deploy_ramdisk_name: 'bm-deploy-ramdisk'\n - roles_info: {}\n - stack_id: overcloud\n - parameters: {}\n - default_role_counts: {}\n - run_validations: true\n - queue_name: tripleo\n\n output:\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n flavors: <% $.flavors %>\n statistics: <% $.statistics %>\n tags:\n - tripleo-common-managed\n tasks:\n init_messages:\n on-success: check_boot_images\n publish:\n errors: []\n warnings: []\n\n check_boot_images:\n workflow: check_boot_images\n input:\n deploy_kernel_name: <% $.deploy_kernel_name %>\n deploy_ramdisk_name: <% $.deploy_ramdisk_name %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n kernel_id: <% task().result.get('kernel_id') %>\n ramdisk_id: <% 
task().result.get('ramdisk_id') %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n kernel_id: <% task().result.get('kernel_id') %>\n ramdisk_id: <% task().result.get('ramdisk_id') %>\n status: FAILED\n on-success: collect_flavors\n on-error: collect_flavors\n\n collect_flavors:\n workflow: collect_flavors\n input:\n roles_info: <% $.roles_info %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n flavors: <% task().result.get('flavors') %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n flavors: <% task().result.get('flavors') %>\n status: FAILED\n on-success: check_ironic_boot_configuration\n on-error: check_ironic_boot_configuration\n\n check_ironic_boot_configuration:\n workflow: check_ironic_boot_configuration\n input:\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n status: FAILED\n on-success: check_default_nodes_count\n on-error: check_default_nodes_count\n\n check_default_nodes_count:\n workflow: check_default_nodes_count\n # ironic-nova sync happens once in two minutes\n retry: count=12 delay=10\n input:\n stack_id: <% $.stack_id %>\n parameters: <% $.parameters %>\n default_role_counts: <% $.default_role_counts %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% $.errors + 
task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n statistics: <% task().result.get('statistics') %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n statistics: <% task().result.get('statistics') %>\n status: FAILED\n on-success: verify_profiles\n # Do not confuse user with info about profiles if the nodes\n # count is off in the first place. Skip directly to\n # send_message. (bug 1703942)\n on-error: send_message\n\n verify_profiles:\n workflow: verify_profiles\n input:\n flavors: <% $.flavors %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n publish-on-error:\n errors: <% $.errors + task().result.get('errors', []) %>\n warnings: <% $.warnings + task().result.get('warnings', []) %>\n status: FAILED\n on-success: send_message\n on-error: send_message\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.validations.v1.check_hypervisor_stats\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n kernel_id: <% $.kernel_id %>\n ramdisk_id: <% $.ramdisk_id %>\n flavors: <% $.flavors %>\n statistics: <% $.statistics %>\n errors: <% $.errors %>\n warnings: <% $.warnings %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.validations.v1", "tags": [], "created_at": "2018-06-26 05:45:12", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "20586fbd-2eac-4515-9164-4f08baf9a951"} > >2018-06-26 11:15:12,450 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 11:15:12,451 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks 
-H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '--- >version: '2.0' >name: tripleo.derive_params_formulas.v1 >description: TripleO Workflows to derive deployment parameters from the introspected data > >workflows: > > > dpdk_derive_params: > description: > > Workflow to derive parameters for DPDK service. > input: > - plan > - role_name > - hw_data # introspection data > - user_inputs > - derived_parameters: {} > > output: > derived_parameters: <% $.derived_parameters.mergeWith($.get('dpdk_parameters', {})) %> > > tags: > - tripleo-common-managed > > tasks: > get_network_config: > action: tripleo.parameters.get_network_config > input: > container: <% $.plan %> > role_name: <% $.role_name %> > publish: > network_configs: <% task().result.get('network_config', []) %> > on-success: get_dpdk_nics_numa_info > on-error: set_status_failed_get_network_config > > get_dpdk_nics_numa_info: > action: tripleo.derive_params.get_dpdk_nics_numa_info > input: > network_configs: <% $.network_configs %> > inspect_data: <% $.hw_data %> > publish: > dpdk_nics_numa_info: <% task().result %> > on-success: > # TODO: Need to remove condtions here > # adding condition and throw error in action for empty check > - get_dpdk_nics_numa_nodes: <% $.dpdk_nics_numa_info %> > - set_status_failed_get_dpdk_nics_numa_info: <% not $.dpdk_nics_numa_info %> > on-error: set_status_failed_on_error_get_dpdk_nics_numa_info > > get_dpdk_nics_numa_nodes: > publish: > dpdk_nics_numa_nodes: <% $.dpdk_nics_numa_info.groupBy($.numa_node).select($[0]).orderBy($) %> > on-success: > - get_numa_nodes: <% $.dpdk_nics_numa_nodes %> > - set_status_failed_get_dpdk_nics_numa_nodes: <% not $.dpdk_nics_numa_nodes %> > > get_numa_nodes: > publish: > numa_nodes: <% $.hw_data.numa_topology.ram.select($.numa_node).orderBy($) %> > on-success: > - get_num_phy_cores_per_numa_for_pmd: <% $.numa_nodes %> > 
- set_status_failed_get_numa_nodes: <% not $.numa_nodes %> > > get_num_phy_cores_per_numa_for_pmd: > publish: > num_phy_cores_per_numa_node_for_pmd: <% $.user_inputs.get('num_phy_cores_per_numa_node_for_pmd', 0) %> > on-success: > - get_num_cores_per_numa_nodes: <% isInteger($.num_phy_cores_per_numa_node_for_pmd) and $.num_phy_cores_per_numa_node_for_pmd > 0 %> > - set_status_failed_get_num_phy_cores_per_numa_for_pmd_invalid: <% not isInteger($.num_phy_cores_per_numa_node_for_pmd) %> > - set_status_failed_get_num_phy_cores_per_numa_for_pmd_not_provided: <% $.num_phy_cores_per_numa_node_for_pmd = 0 %> > > # For NUMA node with DPDK nic, number of cores should be used from user input > # For NUMA node without DPDK nic, number of cores should be 1 > get_num_cores_per_numa_nodes: > publish: > num_cores_per_numa_nodes: <% let(dpdk_nics_nodes => $.dpdk_nics_numa_nodes, cores => $.num_phy_cores_per_numa_node_for_pmd) -> $.numa_nodes.select(switch($ in $dpdk_nics_nodes => $cores, not $ in $dpdk_nics_nodes => 1)) %> > on-success: get_pmd_cpus > > get_pmd_cpus: > action: tripleo.derive_params.get_dpdk_core_list > input: > inspect_data: <% $.hw_data %> > numa_nodes_cores_count: <% $.num_cores_per_numa_nodes %> > publish: > pmd_cpus: <% task().result %> > on-success: > - get_pmd_cpus_range_list: <% $.pmd_cpus %> > - set_status_failed_get_pmd_cpus: <% not $.pmd_cpus %> > on-error: set_status_failed_on_error_get_pmd_cpus > > get_pmd_cpus_range_list: > action: tripleo.derive_params.convert_number_to_range_list > input: > num_list: <% $.pmd_cpus %> > publish: > pmd_cpus: <% task().result %> > on-success: get_host_cpus > on-error: set_status_failed_get_pmd_cpus_range_list > > get_host_cpus: > workflow: tripleo.derive_params_formulas.v1.get_host_cpus > input: > role_name: <% $.role_name %> > hw_data: <% $.hw_data %> > publish: > host_cpus: <% task().result.get('host_cpus', '') %> > on-success: get_sock_mem > on-error: set_status_failed_get_host_cpus > > get_sock_mem: > action: 
tripleo.derive_params.get_dpdk_socket_memory > input: > dpdk_nics_numa_info: <% $.dpdk_nics_numa_info %> > numa_nodes: <% $.numa_nodes %> > overhead: <% $.user_inputs.get('overhead', 800) %> > packet_size_in_buffer: <% 4096*64 %> > publish: > sock_mem: <% task().result %> > on-success: > - get_dpdk_parameters: <% $.sock_mem %> > - set_status_failed_get_sock_mem: <% not $.sock_mem %> > on-error: set_status_failed_on_error_get_sock_mem > > get_dpdk_parameters: > publish: > dpdk_parameters: <% dict(concat($.role_name, 'Parameters') => dict('OvsPmdCoreList' => $.get('pmd_cpus', ''), 'OvsDpdkCoreList' => $.get('host_cpus', ''), 'OvsDpdkSocketMemory' => $.get('sock_mem', ''))) %> > > set_status_failed_get_network_config: > publish: > status: FAILED > message: <% task(get_network_config).result %> > on-success: fail > > set_status_failed_get_dpdk_nics_numa_info: > publish: > status: FAILED > message: "Unable to determine DPDK NIC's NUMA information" > on-success: fail > > set_status_failed_on_error_get_dpdk_nics_numa_info: > publish: > status: FAILED > message: <% task(get_dpdk_nics_numa_info).result %> > on-success: fail > > set_status_failed_get_dpdk_nics_numa_nodes: > publish: > status: FAILED > message: "Unable to determine DPDK NIC's numa nodes" > on-success: fail > > set_status_failed_get_numa_nodes: > publish: > status: FAILED > message: 'Unable to determine available NUMA nodes' > on-success: fail > > set_status_failed_get_num_phy_cores_per_numa_for_pmd_invalid: > publish: > status: FAILED > message: <% "num_phy_cores_per_numa_node_for_pmd user input '{0}' is invalid".format($.num_phy_cores_per_numa_node_for_pmd) %> > on-success: fail > > set_status_failed_get_num_phy_cores_per_numa_for_pmd_not_provided: > publish: > status: FAILED > message: 'num_phy_cores_per_numa_node_for_pmd user input is not provided' > on-success: fail > > set_status_failed_get_pmd_cpus: > publish: > status: FAILED > message: 'Unable to determine OvsPmdCoreList parameter' > on-success: fail 
> > set_status_failed_on_error_get_pmd_cpus: > publish: > status: FAILED > message: <% task(get_pmd_cpus).result %> > on-success: fail > > set_status_failed_get_pmd_cpus_range_list: > publish: > status: FAILED > message: <% task(get_pmd_cpus_range_list).result %> > on-success: fail > > set_status_failed_get_host_cpus: > publish: > status: FAILED > message: <% task(get_host_cpus).result.get('message', '') %> > on-success: fail > > set_status_failed_get_sock_mem: > publish: > status: FAILED > message: 'Unable to determine OvsDpdkSocketMemory parameter' > on-success: fail > > set_status_failed_on_error_get_sock_mem: > publish: > status: FAILED > message: <% task(get_sock_mem).result %> > on-success: fail > > > sriov_derive_params: > description: > > This workflow derives parameters for the SRIOV feature. > > input: > - role_name > - hw_data # introspection data > - derived_parameters: {} > > output: > derived_parameters: <% $.derived_parameters.mergeWith($.get('sriov_parameters', {})) %> > > tags: > - tripleo-common-managed > > tasks: > get_host_cpus: > workflow: tripleo.derive_params_formulas.v1.get_host_cpus > input: > role_name: <% $.role_name %> > hw_data: <% $.hw_data %> > publish: > host_cpus: <% task().result.get('host_cpus', '') %> > on-success: get_sriov_parameters > on-error: set_status_failed_get_host_cpus > > get_sriov_parameters: > publish: > # SriovHostCpusList parameter is added temporarily and it's removed later from derived parameters result. > sriov_parameters: <% dict(concat($.role_name, 'Parameters') => dict('SriovHostCpusList' => $.get('host_cpus', ''))) %> > > set_status_failed_get_host_cpus: > publish: > status: FAILED > message: <% task(get_host_cpus).result.get('message', '') %> > on-success: fail > > > get_host_cpus: > description: > > Fetching the host CPU list from the introspection data, and then converting the raw list into a range list. 
> > input: > - hw_data # introspection data > > output: > host_cpus: <% $.get('host_cpus', '') %> > > tags: > - tripleo-common-managed > > tasks: > get_host_cpus: > action: tripleo.derive_params.get_host_cpus_list inspect_data=<% $.hw_data %> > publish: > host_cpus: <% task().result %> > on-success: > - get_host_cpus_range_list: <% $.host_cpus %> > - set_status_failed_get_host_cpus: <% not $.host_cpus %> > on-error: set_status_failed_on_error_get_host_cpus > > get_host_cpus_range_list: > action: tripleo.derive_params.convert_number_to_range_list > input: > num_list: <% $.host_cpus %> > publish: > host_cpus: <% task().result %> > on-error: set_status_failed_get_host_cpus_range_list > > set_status_failed_get_host_cpus: > publish: > status: FAILED > message: 'Unable to determine host cpus' > on-success: fail > > set_status_failed_on_error_get_host_cpus: > publish: > status: FAILED > message: <% task(get_host_cpus).result %> > on-success: fail > > set_status_failed_get_host_cpus_range_list: > publish: > status: FAILED > message: <% task(get_host_cpus_range_list).result %> > on-success: fail > > > host_derive_params: > description: > > This workflow derives parameters for the Host process, and is mainly associated with CPU pinning and huge memory pages. > This workflow can be a dependency of any feature, or it can be invoked individually. > > input: > - role_name > - hw_data # introspection data > - user_inputs > - derived_parameters: {} > > output: > derived_parameters: <% $.derived_parameters.mergeWith($.get('host_parameters', {})) %> > > tags: > - tripleo-common-managed > > tasks: > get_cpus: > publish: > cpus: <% $.hw_data.numa_topology.cpus %> > on-success: > - get_role_derive_params: <% $.cpus %> > - set_status_failed_get_cpus: <% not $.cpus %> > > get_role_derive_params: > publish: > role_derive_params: <% $.derived_parameters.get(concat($.role_name, 'Parameters'), {}) %> > # removing the role parameters (e.g. 
ComputeParameters) in the derived_parameters dictionary since they are already copied in role_derive_params. > derived_parameters: <% $.derived_parameters.delete(concat($.role_name, 'Parameters')) %> > on-success: get_host_cpus > > get_host_cpus: > publish: > host_cpus: <% $.role_derive_params.get('OvsDpdkCoreList', '') or $.role_derive_params.get('SriovHostCpusList', '') %> > # SriovHostCpusList parameter is added temporarily for host_cpus and not needed in derived_parameters result. > # SriovHostCpusList parameter is deleted and the updated role parameters > # are added back into derived_parameters. > derived_parameters: <% $.derived_parameters + dict(concat($.role_name, 'Parameters') => $.role_derive_params.delete('SriovHostCpusList')) %> > on-success: get_host_dpdk_combined_cpus > > get_host_dpdk_combined_cpus: > publish: > host_dpdk_combined_cpus: <% let(pmd_cpus => $.role_derive_params.get('OvsPmdCoreList', '')) -> switch($pmd_cpus => concat($pmd_cpus, ',', $.host_cpus), not $pmd_cpus => $.host_cpus) %> > reserved_cpus: [] > on-success: > - get_host_dpdk_combined_cpus_num_list: <% $.host_dpdk_combined_cpus %> > - set_status_failed_get_host_dpdk_combined_cpus: <% not $.host_dpdk_combined_cpus %> > > get_host_dpdk_combined_cpus_num_list: > action: tripleo.derive_params.convert_range_to_number_list > input: > range_list: <% $.host_dpdk_combined_cpus %> > publish: > host_dpdk_combined_cpus: <% task().result %> > reserved_cpus: <% task().result.split(',') %> > on-success: get_nova_cpus > on-error: set_status_failed_get_host_dpdk_combined_cpus_num_list > > get_nova_cpus: > publish: > nova_cpus: <% let(reserved_cpus => $.reserved_cpus) -> $.cpus.select($.thread_siblings).flatten().where(not (str($) in $reserved_cpus)).join(',') %> > on-success: > - get_isol_cpus: <% $.nova_cpus %> > - set_status_failed_get_nova_cpus: <% not $.nova_cpus %> > > # concatenates OvsPmdCoreList range format and NovaVcpuPinSet in range format.
It may not be in perfect range format. > # example: concatenates '12-15,19' and '16-18' as '12-15,19,16-18' > get_isol_cpus: > publish: > isol_cpus: <% let(pmd_cpus => $.role_derive_params.get('OvsPmdCoreList','')) -> switch($pmd_cpus => concat($pmd_cpus, ',', $.nova_cpus), not $pmd_cpus => $.nova_cpus) %> > on-success: get_isol_cpus_num_list > > # Gets the isol_cpus as a number list > # example: '12-15,19,16-18' into '12,13,14,15,16,17,18,19' > get_isol_cpus_num_list: > action: tripleo.derive_params.convert_range_to_number_list > input: > range_list: <% $.isol_cpus %> > publish: > isol_cpus: <% task().result %> > on-success: get_nova_cpus_range_list > on-error: set_status_failed_get_isol_cpus_num_list > > get_nova_cpus_range_list: > action: tripleo.derive_params.convert_number_to_range_list > input: > num_list: <% $.nova_cpus %> > publish: > nova_cpus: <% task().result %> > on-success: get_isol_cpus_range_list > on-error: set_status_failed_get_nova_cpus_range_list > > # converts number format isol_cpus into range format > # example: '12,13,14,15,16,17,18,19' into '12-19' > get_isol_cpus_range_list: > action: tripleo.derive_params.convert_number_to_range_list > input: > num_list: <% $.isol_cpus %> > publish: > isol_cpus: <% task().result %> > on-success: get_host_mem > on-error: set_status_failed_get_isol_cpus_range_list > > get_host_mem: > publish: > host_mem: <% $.user_inputs.get('host_mem_default', 4096) %> > on-success: check_default_hugepage_supported > > check_default_hugepage_supported: > publish: > default_hugepage_supported: <% $.hw_data.get('inventory', {}).get('cpu', {}).get('flags', []).contains('pdpe1gb') %> > on-success: > - get_total_memory: <% $.default_hugepage_supported %> > - set_status_failed_check_default_hugepage_supported: <% not $.default_hugepage_supported %> > > get_total_memory: > publish: > total_memory: <% $.hw_data.get('inventory', {}).get('memory', {}).get('physical_mb', 0) %> > on-success: > - get_hugepage_allocation_percentage: 
<% $.total_memory %> > - set_status_failed_get_total_memory: <% not $.total_memory %> > > get_hugepage_allocation_percentage: > publish: > huge_page_allocation_percentage: <% $.user_inputs.get('huge_page_allocation_percentage', 0) %> > on-success: > - get_hugepages: <% isInteger($.huge_page_allocation_percentage) and $.huge_page_allocation_percentage > 0 %> > - set_status_failed_get_hugepage_allocation_percentage_invalid: <% not isInteger($.huge_page_allocation_percentage) %> > - set_status_failed_get_hugepage_allocation_percentage_not_provided: <% $.huge_page_allocation_percentage = 0 %> > > get_hugepages: > publish: > hugepages: <% let(huge_page_perc => float($.huge_page_allocation_percentage)/100)-> int((($.total_memory/1024)-4) * $huge_page_perc) %> > on-success: > - get_cpu_model: <% $.hugepages %> > - set_status_failed_get_hugepages: <% not $.hugepages %> > > get_cpu_model: > publish: > intel_cpu_model: <% $.hw_data.get('inventory', {}).get('cpu', {}).get('model_name', '').startsWith('Intel') %> > on-success: get_iommu_info > > get_iommu_info: > publish: > iommu_info: <% switch($.intel_cpu_model => 'intel_iommu=on iommu=pt', not $.intel_cpu_model => '') %> > on-success: get_kernel_args > > get_kernel_args: > publish: > kernel_args: <% concat('default_hugepagesz=1GB hugepagesz=1G ', 'hugepages=', str($.hugepages), ' ', $.iommu_info, ' isolcpus=', $.isol_cpus) %> > on-success: get_host_parameters > > get_host_parameters: > publish: > host_parameters: <% dict(concat($.role_name, 'Parameters') => dict('NovaVcpuPinSet' => $.get('nova_cpus', ''), 'NovaReservedHostMemory' => $.get('host_mem', ''), 'KernelArgs' => $.get('kernel_args', ''), 'IsolCpusList' => $.get('isol_cpus', ''))) %> > > set_status_failed_get_cpus: > publish: > status: FAILED > message: "Unable to determine CPU's on NUMA nodes" > on-success: fail > > set_status_failed_get_host_dpdk_combined_cpus: > publish: > status: FAILED > message: 'Unable to combine host and dpdk cpus list' > on-success: fail > 
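[Editorial note, not part of the captured log] The convert_range_to_number_list and convert_number_to_range_list actions used throughout this workbook translate between range strings like '12-15,19' and expanded number lists, as the comments above illustrate. A minimal Python sketch of that conversion (these helper names are hypothetical, not the tripleo-common implementation):

```python
# Hypothetical equivalents of the range/number-list conversion actions.
def range_to_numbers(range_list):
    """Expand a range string: '12-15,19' -> '12,13,14,15,19'."""
    nums = []
    for part in range_list.split(','):
        if '-' in part:
            lo, hi = part.split('-')
            nums.extend(range(int(lo), int(hi) + 1))
        else:
            nums.append(int(part))
    # Sort and de-duplicate, since concatenated inputs may overlap.
    return ','.join(str(n) for n in sorted(set(nums)))

def numbers_to_ranges(num_list):
    """Collapse a number list: '12,13,14,15,19' -> '12-15,19'."""
    nums = sorted(set(int(n) for n in num_list.split(',')))
    out, start, prev = [], nums[0], nums[0]
    for n in nums[1:]:
        if n != prev + 1:  # gap found: close the current run
            out.append(str(start) if start == prev else '%d-%d' % (start, prev))
            start = n
        prev = n
    out.append(str(start) if start == prev else '%d-%d' % (start, prev))
    return ','.join(out)
```

For example, the imperfect concatenated range '12-15,19,16-18' mentioned in the comments expands to '12,13,14,15,16,17,18,19', which collapses back to the clean range '12-19'.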
> set_status_failed_get_host_dpdk_combined_cpus_num_list: > publish: > status: FAILED > message: <% task(get_host_dpdk_combined_cpus_num_list).result %> > on-success: fail > > set_status_failed_get_nova_cpus: > publish: > status: FAILED > message: 'Unable to determine nova vcpu pin set' > on-success: fail > > set_status_failed_get_nova_cpus_range_list: > publish: > status: FAILED > message: <% task(get_nova_cpus_range_list).result %> > on-success: fail > > set_status_failed_get_isol_cpus_num_list: > publish: > status: FAILED > message: <% task(get_isol_cpus_num_list).result %> > on-success: fail > > set_status_failed_get_isol_cpus_range_list: > publish: > status: FAILED > message: <% task(get_isol_cpus_range_list).result %> > on-success: fail > > set_status_failed_check_default_hugepage_supported: > publish: > status: FAILED > message: 'default huge page size 1GB is not supported' > on-success: fail > > set_status_failed_get_total_memory: > publish: > status: FAILED > message: 'Unable to determine total memory' > on-success: fail > > set_status_failed_get_hugepage_allocation_percentage_invalid: > publish: > status: FAILED > message: <% "huge_page_allocation_percentage user input '{0}' is invalid".format($.huge_page_allocation_percentage) %> > on-success: fail > > set_status_failed_get_hugepage_allocation_percentage_not_provided: > publish: > status: FAILED > message: 'huge_page_allocation_percentage user input is not provided' > on-success: fail > > set_status_failed_get_hugepages: > publish: > status: FAILED > message: 'Unable to determine huge pages' > on-success: fail > > > hci_derive_params: > description: Derive the deployment parameters for HCI > input: > - role_name > - environment_parameters > - heat_resource_tree > - introspection_data > - user_inputs > - derived_parameters: {} > > output: > derived_parameters: <% $.derived_parameters.mergeWith($.get('hci_parameters', {})) %> > > tags: > - tripleo-common-managed > > tasks: > get_hci_inputs: > publish: > 
hci_profile: <% $.user_inputs.get('hci_profile', '') %> > hci_profile_config: <% $.user_inputs.get('hci_profile_config', {}) %> > MB_PER_GB: 1024 > on-success: > - get_average_guest_memory_size_in_mb: <% $.hci_profile and $.hci_profile_config.get($.hci_profile, {}) %> > - set_failed_invalid_hci_profile: <% $.hci_profile and not $.hci_profile_config.get($.hci_profile, {}) %> > # When no hci_profile is specified, the workflow terminates without deriving any HCI parameters. > > get_average_guest_memory_size_in_mb: > publish: > average_guest_memory_size_in_mb: <% $.hci_profile_config.get($.hci_profile, {}).get('average_guest_memory_size_in_mb', 0) %> > on-success: > - get_average_guest_cpu_utilization_percentage: <% isInteger($.average_guest_memory_size_in_mb) %> > - set_failed_invalid_average_guest_memory_size_in_mb: <% not isInteger($.average_guest_memory_size_in_mb) %> > > get_average_guest_cpu_utilization_percentage: > publish: > average_guest_cpu_utilization_percentage: <% $.hci_profile_config.get($.hci_profile, {}).get('average_guest_cpu_utilization_percentage', 0) %> > on-success: > - get_gb_overhead_per_guest: <% isInteger($.average_guest_cpu_utilization_percentage) %> > - set_failed_invalid_average_guest_cpu_utilization_percentage: <% not isInteger($.average_guest_cpu_utilization_percentage) %> > > get_gb_overhead_per_guest: > publish: > gb_overhead_per_guest: <% $.user_inputs.get('gb_overhead_per_guest', 0.5) %> > on-success: > - get_gb_per_osd: <% isNumber($.gb_overhead_per_guest) %> > - set_failed_invalid_gb_overhead_per_guest: <% not isNumber($.gb_overhead_per_guest) %> > > get_gb_per_osd: > publish: > gb_per_osd: <% $.user_inputs.get('gb_per_osd', 3) %> > on-success: > - get_cores_per_osd: <% isNumber($.gb_per_osd) %> > - set_failed_invalid_gb_per_osd: <% not isNumber($.gb_per_osd) %> > > get_cores_per_osd: > publish: > cores_per_osd: <% $.user_inputs.get('cores_per_osd', 1.0) %> > on-success: > - get_extra_configs: <% isNumber($.cores_per_osd) %> > - 
set_failed_invalid_cores_per_osd: <% not isNumber($.cores_per_osd) %> > > get_extra_configs: > publish: > extra_config: <% $.environment_parameters.get('ExtraConfig', {}) %> > role_extra_config: <% $.environment_parameters.get(concat($.role_name, 'ExtraConfig'), {}) %> > role_env_params: <% $.environment_parameters.get(concat($.role_name, 'Parameters'), {}) %> > role_derive_params: <% $.derived_parameters.get(concat($.role_name, 'Parameters'), {}) %> > on-success: get_num_osds > > get_num_osds: > publish: > num_osds: <% $.heat_resource_tree.parameters.get('CephAnsibleDisksConfig', {}).get('default', {}).get('devices', []).count() %> > on-success: > - get_memory_mb: <% $.num_osds %> > # If there's no CephAnsibleDisksConfig then look for OSD configuration in hiera data > - get_num_osds_from_hiera: <% not $.num_osds %> > > get_num_osds_from_hiera: > publish: > num_osds: <% $.role_extra_config.get('ceph::profile::params::osds', $.extra_config.get('ceph::profile::params::osds', {})).keys().count() %> > on-success: > - get_memory_mb: <% $.num_osds %> > - set_failed_no_osds: <% not $.num_osds %> > > get_memory_mb: > publish: > memory_mb: <% $.introspection_data.get('memory_mb', 0) %> > on-success: > - get_nova_vcpu_pin_set: <% $.memory_mb %> > - set_failed_get_memory_mb: <% not $.memory_mb %> > > # Determine the number of CPU cores available to Nova and Ceph. If > # NovaVcpuPinSet is defined then use the number of vCPUs in the set, > # otherwise use all of the cores identified in the introspection data. 
> > get_nova_vcpu_pin_set: > publish: > # NovaVcpuPinSet can be defined in multiple locations, and it's > # important to select the value in order of precedence: > # > # 1) User specified value for this role > # 2) User specified default value for all roles > # 3) Value derived by another derived parameters workflow > nova_vcpu_pin_set: <% $.role_env_params.get('NovaVcpuPinSet', $.environment_parameters.get('NovaVcpuPinSet', $.role_derive_params.get('NovaVcpuPinSet', ''))) %> > on-success: > - get_nova_vcpu_count: <% $.nova_vcpu_pin_set %> > - get_num_cores: <% not $.nova_vcpu_pin_set %> > > get_nova_vcpu_count: > action: tripleo.derive_params.convert_range_to_number_list > input: > range_list: <% $.nova_vcpu_pin_set %> > publish: > num_cores: <% task().result.split(',').count() %> > on-success: calculate_nova_parameters > on-error: set_failed_get_nova_vcpu_count > > get_num_cores: > publish: > num_cores: <% $.introspection_data.get('cpus', 0) %> > on-success: > - calculate_nova_parameters: <% $.num_cores %> > - set_failed_get_num_cores: <% not $.num_cores %> > > # HCI calculations are broken into multiple steps. This is necessary > # because variables published by a Mistral task are not available > # for use by that same task. Variables computed and published in a task > # are only available in subsequent tasks. > # > # The HCI calculations compute two Nova parameters: > # - reserved_host_memory > # - cpu_allocation_ratio > # > # The reserved_host_memory calculation computes the amount of memory > # that needs to be reserved for Ceph and the total amount of "guest > # overhead" memory that is based on the anticipated number of guests. 
> # Pseudo-code for the calculation (disregarding MB and GB units) is > # as follows: > # > # ceph_memory = mem_per_osd * num_osds > # nova_memory = total_memory - ceph_memory > # num_guests = nova_memory / > # (average_guest_memory_size + overhead_per_guest) > # reserved_memory = ceph_memory + (num_guests * overhead_per_guest) > # > # The cpu_allocation_ratio calculation is similar in that it takes into > # account the number of cores that must be reserved for Ceph. > # > # ceph_cores = cores_per_osd * num_osds > # guest_cores = num_cores - ceph_cores > # guest_vcpus = guest_cores / average_guest_utilization > # cpu_allocation_ratio = guest_vcpus / num_cores > > calculate_nova_parameters: > publish: > avg_guest_util: <% $.average_guest_cpu_utilization_percentage / 100.0 %> > avg_guest_size_gb: <% $.average_guest_memory_size_in_mb / float($.MB_PER_GB) %> > memory_gb: <% $.memory_mb / float($.MB_PER_GB) %> > ceph_mem_gb: <% $.gb_per_osd * $.num_osds %> > nonceph_cores: <% $.num_cores - int($.cores_per_osd * $.num_osds) %> > on-success: calc_step_2 > > calc_step_2: > publish: > num_guests: <% int(($.memory_gb - $.ceph_mem_gb) / ($.avg_guest_size_gb + $.gb_overhead_per_guest)) %> > guest_vcpus: <% $.nonceph_cores / $.avg_guest_util %> > on-success: calc_step_3 > > calc_step_3: > publish: > reserved_host_memory: <% $.MB_PER_GB * int($.ceph_mem_gb + ($.num_guests * $.gb_overhead_per_guest)) %> > cpu_allocation_ratio: <% $.guest_vcpus / $.num_cores %> > on-success: validate_results > > validate_results: > publish: > # Verify whether HCI is viable: > # - No more than 80% of the memory may be reserved for Ceph and guest overhead > # - At least half of the CPU cores must be available to Nova > mem_ok: <% $.reserved_host_memory <= ($.memory_mb * 0.8) %> > cpu_ok: <% $.cpu_allocation_ratio >= 0.5 %> > on-success: > - set_failed_insufficient_mem: <% not $.mem_ok %> > - set_failed_insufficient_cpu: <% not $.cpu_ok %> > - publish_hci_parameters: <% $.mem_ok and $.cpu_ok %> > > 
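[Editorial note, not part of the captured log] The HCI pseudo-code in the comments above can be checked numerically. A minimal Python sketch of the same derivation (function name and example inputs are hypothetical; the defaults mirror the user-input defaults seen elsewhere in this workbook):

```python
MB_PER_GB = 1024

def derive_hci_params(memory_mb, num_cores, num_osds,
                      gb_per_osd=3, cores_per_osd=1.0,
                      avg_guest_mem_mb=2048, avg_guest_util_pct=50,
                      gb_overhead_per_guest=0.5):
    """Derive NovaReservedHostMemory (MB) and cpu_allocation_ratio."""
    # Memory side: reserve Ceph memory plus per-guest overhead.
    memory_gb = memory_mb / float(MB_PER_GB)
    ceph_mem_gb = gb_per_osd * num_osds
    avg_guest_size_gb = avg_guest_mem_mb / float(MB_PER_GB)
    num_guests = int((memory_gb - ceph_mem_gb) /
                     (avg_guest_size_gb + gb_overhead_per_guest))
    reserved_host_memory = MB_PER_GB * int(
        ceph_mem_gb + num_guests * gb_overhead_per_guest)
    # CPU side: subtract the cores reserved for Ceph OSDs.
    nonceph_cores = num_cores - int(cores_per_osd * num_osds)
    guest_vcpus = nonceph_cores / (avg_guest_util_pct / 100.0)
    cpu_allocation_ratio = guest_vcpus / num_cores
    # Viability checks mirroring validate_results: reserve at most 80% of
    # memory, and keep the allocation ratio at or above 0.5.
    mem_ok = reserved_host_memory <= memory_mb * 0.8
    cpu_ok = cpu_allocation_ratio >= 0.5
    return reserved_host_memory, cpu_allocation_ratio, mem_ok and cpu_ok
```

With the hypothetical inputs of a 256 GB, 56-core node running 12 OSDs, this sketch yields NovaReservedHostMemory = 81920 MB and cpu_allocation_ratio ≈ 1.57, both passing the viability checks.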
publish_hci_parameters: > publish: > # TODO(abishop): Update this when the cpu_allocation_ratio can be set > # via a THT parameter (no such parameter currently exists). Until a > # THT parameter exists, use hiera data to set the cpu_allocation_ratio. > hci_parameters: <% dict(concat($.role_name, 'Parameters') => dict('NovaReservedHostMemory' => $.reserved_host_memory)) + dict(concat($.role_name, 'ExtraConfig') => dict('nova::cpu_allocation_ratio' => $.cpu_allocation_ratio)) %> > > set_failed_invalid_hci_profile: > publish: > message: "'<% $.hci_profile %>' is not a valid HCI profile." > on-success: fail > > set_failed_invalid_average_guest_memory_size_in_mb: > publish: > message: "'<% $.average_guest_memory_size_in_mb %>' is not a valid average_guest_memory_size_in_mb value." > on-success: fail > > set_failed_invalid_gb_overhead_per_guest: > publish: > message: "'<% $.gb_overhead_per_guest %>' is not a valid gb_overhead_per_guest value." > on-success: fail > > set_failed_invalid_gb_per_osd: > publish: > message: "'<% $.gb_per_osd %>' is not a valid gb_per_osd value." > on-success: fail > > set_failed_invalid_cores_per_osd: > publish: > message: "'<% $.cores_per_osd %>' is not a valid cores_per_osd value." > on-success: fail > > set_failed_invalid_average_guest_cpu_utilization_percentage: > publish: > message: "'<% $.average_guest_cpu_utilization_percentage %>' is not a valid average_guest_cpu_utilization_percentage value." > on-success: fail > > set_failed_no_osds: > publish: > message: "No Ceph OSDs found in the overcloud definition ('ceph::profile::params::osds')." > on-success: fail > > set_failed_get_memory_mb: > publish: > message: "Unable to determine the amount of physical memory (no 'memory_mb' found in introspection_data)." 
> on-success: fail > > set_failed_get_nova_vcpu_count: > publish: > message: <% task(get_nova_vcpu_count).result %> > on-success: fail > > set_failed_get_num_cores: > publish: > message: "Unable to determine the number of CPU cores (no 'cpus' found in introspection_data)." > on-success: fail > > set_failed_insufficient_mem: > publish: > message: "<% $.memory_mb %> MB is not enough memory to run hyperconverged." > on-success: fail > > set_failed_insufficient_cpu: > publish: > message: "<% $.num_cores %> CPU cores are not enough to run hyperconverged." > on-success: fail >' >2018-06-26 11:15:14,148 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 32010 >2018-06-26 11:15:14,189 DEBUG: RESP: [201] Content-Length: 32010 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:14 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.derive_params_formulas.v1\ndescription: TripleO Workflows to derive deployment parameters from the introspected data\n\nworkflows:\n\n\n dpdk_derive_params:\n description: >\n Workflow to derive parameters for DPDK service.\n input:\n - plan\n - role_name\n - hw_data # introspection data\n - user_inputs\n - derived_parameters: {}\n\n output:\n derived_parameters: <% $.derived_parameters.mergeWith($.get('dpdk_parameters', {})) %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_network_config:\n action: tripleo.parameters.get_network_config\n input:\n container: <% $.plan %>\n role_name: <% $.role_name %>\n publish:\n network_configs: <% task().result.get('network_config', []) %>\n on-success: get_dpdk_nics_numa_info\n on-error: set_status_failed_get_network_config\n\n get_dpdk_nics_numa_info:\n action: tripleo.derive_params.get_dpdk_nics_numa_info\n input:\n network_configs: <% $.network_configs %>\n inspect_data: <% $.hw_data %>\n publish:\n dpdk_nics_numa_info: <% task().result %>\n on-success:\n # TODO: Need to remove condtions here\n # adding condition and throw error in action for 
empty check\n - get_dpdk_nics_numa_nodes: <% $.dpdk_nics_numa_info %>\n - set_status_failed_get_dpdk_nics_numa_info: <% not $.dpdk_nics_numa_info %>\n on-error: set_status_failed_on_error_get_dpdk_nics_numa_info\n\n get_dpdk_nics_numa_nodes:\n publish:\n dpdk_nics_numa_nodes: <% $.dpdk_nics_numa_info.groupBy($.numa_node).select($[0]).orderBy($) %>\n on-success:\n - get_numa_nodes: <% $.dpdk_nics_numa_nodes %>\n - set_status_failed_get_dpdk_nics_numa_nodes: <% not $.dpdk_nics_numa_nodes %>\n\n get_numa_nodes:\n publish:\n numa_nodes: <% $.hw_data.numa_topology.ram.select($.numa_node).orderBy($) %>\n on-success:\n - get_num_phy_cores_per_numa_for_pmd: <% $.numa_nodes %>\n - set_status_failed_get_numa_nodes: <% not $.numa_nodes %>\n\n get_num_phy_cores_per_numa_for_pmd:\n publish:\n num_phy_cores_per_numa_node_for_pmd: <% $.user_inputs.get('num_phy_cores_per_numa_node_for_pmd', 0) %>\n on-success:\n - get_num_cores_per_numa_nodes: <% isInteger($.num_phy_cores_per_numa_node_for_pmd) and $.num_phy_cores_per_numa_node_for_pmd > 0 %>\n - set_status_failed_get_num_phy_cores_per_numa_for_pmd_invalid: <% not isInteger($.num_phy_cores_per_numa_node_for_pmd) %>\n - set_status_failed_get_num_phy_cores_per_numa_for_pmd_not_provided: <% $.num_phy_cores_per_numa_node_for_pmd = 0 %>\n\n # For NUMA node with DPDK nic, number of cores should be used from user input\n # For NUMA node without DPDK nic, number of cores should be 1\n get_num_cores_per_numa_nodes:\n publish:\n num_cores_per_numa_nodes: <% let(dpdk_nics_nodes => $.dpdk_nics_numa_nodes, cores => $.num_phy_cores_per_numa_node_for_pmd) -> $.numa_nodes.select(switch($ in $dpdk_nics_nodes => $cores, not $ in $dpdk_nics_nodes => 1)) %>\n on-success: get_pmd_cpus\n\n get_pmd_cpus:\n action: tripleo.derive_params.get_dpdk_core_list\n input:\n inspect_data: <% $.hw_data %>\n numa_nodes_cores_count: <% $.num_cores_per_numa_nodes %>\n publish:\n pmd_cpus: <% task().result %>\n on-success:\n - get_pmd_cpus_range_list: <% $.pmd_cpus 
%>\n - set_status_failed_get_pmd_cpus: <% not $.pmd_cpus %>\n on-error: set_status_failed_on_error_get_pmd_cpus\n\n get_pmd_cpus_range_list:\n action: tripleo.derive_params.convert_number_to_range_list\n input:\n num_list: <% $.pmd_cpus %>\n publish:\n pmd_cpus: <% task().result %>\n on-success: get_host_cpus\n on-error: set_status_failed_get_pmd_cpus_range_list\n\n get_host_cpus:\n workflow: tripleo.derive_params_formulas.v1.get_host_cpus\n input:\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n publish:\n host_cpus: <% task().result.get('host_cpus', '') %>\n on-success: get_sock_mem\n on-error: set_status_failed_get_host_cpus\n\n get_sock_mem:\n action: tripleo.derive_params.get_dpdk_socket_memory\n input:\n dpdk_nics_numa_info: <% $.dpdk_nics_numa_info %>\n numa_nodes: <% $.numa_nodes %>\n overhead: <% $.user_inputs.get('overhead', 800) %>\n packet_size_in_buffer: <% 4096*64 %>\n publish:\n sock_mem: <% task().result %>\n on-success:\n - get_dpdk_parameters: <% $.sock_mem %>\n - set_status_failed_get_sock_mem: <% not $.sock_mem %>\n on-error: set_status_failed_on_error_get_sock_mem\n\n get_dpdk_parameters:\n publish:\n dpdk_parameters: <% dict(concat($.role_name, 'Parameters') => dict('OvsPmdCoreList' => $.get('pmd_cpus', ''), 'OvsDpdkCoreList' => $.get('host_cpus', ''), 'OvsDpdkSocketMemory' => $.get('sock_mem', ''))) %>\n\n set_status_failed_get_network_config:\n publish:\n status: FAILED\n message: <% task(get_network_config).result %>\n on-success: fail\n\n set_status_failed_get_dpdk_nics_numa_info:\n publish:\n status: FAILED\n message: \"Unable to determine DPDK NIC's NUMA information\"\n on-success: fail\n\n set_status_failed_on_error_get_dpdk_nics_numa_info:\n publish:\n status: FAILED\n message: <% task(get_dpdk_nics_numa_info).result %>\n on-success: fail\n\n set_status_failed_get_dpdk_nics_numa_nodes:\n publish:\n status: FAILED\n message: \"Unable to determine DPDK NIC's numa nodes\"\n on-success: fail\n\n 
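[Editor's note] The workflows above lean on two `tripleo.derive_params` actions, `convert_range_to_number_list` and `convert_number_to_range_list` (e.g. '12-15,19,16-18' becomes '12,13,14,15,16,17,18,19' and back to '12-19'). Rough standalone equivalents of their assumed behavior, for tracing the log; the real actions live in tripleo-common and may differ in edge-case handling:

```python
def convert_range_to_number_list(range_list):
    """'12-15,19,16-18' -> '12,13,14,15,16,17,18,19' (sorted, deduplicated)."""
    nums = set()
    for part in str(range_list).split(','):
        if '-' in part:
            start, end = part.split('-')
            nums.update(range(int(start), int(end) + 1))
        else:
            nums.add(int(part))
    return ','.join(str(n) for n in sorted(nums))

def convert_number_to_range_list(num_list):
    """'12,13,14,15,19' -> '12-15,19' (collapse consecutive runs)."""
    nums = sorted(int(n) for n in str(num_list).split(','))
    ranges, start, prev = [], nums[0], nums[0]
    for n in nums[1:]:
        if n != prev + 1:          # run ended; close the current range
            ranges.append((start, prev))
            start = n
        prev = n
    ranges.append((start, prev))
    return ','.join(str(a) if a == b else '%d-%d' % (a, b)
                    for a, b in ranges)
```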
set_status_failed_get_numa_nodes:\n publish:\n status: FAILED\n message: 'Unable to determine available NUMA nodes'\n on-success: fail\n\n set_status_failed_get_num_phy_cores_per_numa_for_pmd_invalid:\n publish:\n status: FAILED\n message: <% \"num_phy_cores_per_numa_node_for_pmd user input '{0}' is invalid\".format($.num_phy_cores_per_numa_node_for_pmd) %>\n on-success: fail\n\n set_status_failed_get_num_phy_cores_per_numa_for_pmd_not_provided:\n publish:\n status: FAILED\n message: 'num_phy_cores_per_numa_node_for_pmd user input is not provided'\n on-success: fail\n\n set_status_failed_get_pmd_cpus:\n publish:\n status: FAILED\n message: 'Unable to determine OvsPmdCoreList parameter'\n on-success: fail\n\n set_status_failed_on_error_get_pmd_cpus:\n publish:\n status: FAILED\n message: <% task(get_pmd_cpus).result %>\n on-success: fail\n\n set_status_failed_get_pmd_cpus_range_list:\n publish:\n status: FAILED\n message: <% task(get_pmd_cpus_range_list).result %>\n on-success: fail\n\n set_status_failed_get_host_cpus:\n publish:\n status: FAILED\n message: <% task(get_host_cpus).result.get('message', '') %>\n on-success: fail\n\n set_status_failed_get_sock_mem:\n publish:\n status: FAILED\n message: 'Unable to determine OvsDpdkSocketMemory parameter'\n on-success: fail\n\n set_status_failed_on_error_get_sock_mem:\n publish:\n status: FAILED\n message: <% task(get_sock_mem).result %>\n on-success: fail\n\n\n sriov_derive_params:\n description: >\n This workflow derives parameters for the SRIOV feature.\n\n input:\n - role_name\n - hw_data # introspection data\n - derived_parameters: {}\n\n output:\n derived_parameters: <% $.derived_parameters.mergeWith($.get('sriov_parameters', {})) %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_host_cpus:\n workflow: tripleo.derive_params_formulas.v1.get_host_cpus\n input:\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n publish:\n host_cpus: <% task().result.get('host_cpus', '') %>\n on-success: 
get_sriov_parameters\n on-error: set_status_failed_get_host_cpus\n\n get_sriov_parameters:\n publish:\n # SriovHostCpusList parameter is added temporarily and it's removed later from derived parameters result.\n sriov_parameters: <% dict(concat($.role_name, 'Parameters') => dict('SriovHostCpusList' => $.get('host_cpus', ''))) %>\n\n set_status_failed_get_host_cpus:\n publish:\n status: FAILED\n message: <% task(get_host_cpus).result.get('message', '') %>\n on-success: fail\n\n\n get_host_cpus:\n description: >\n Fetching the host CPU list from the introspection data, and then converting the raw list into a range list.\n\n input:\n - hw_data # introspection data\n\n output:\n host_cpus: <% $.get('host_cpus', '') %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_host_cpus:\n action: tripleo.derive_params.get_host_cpus_list inspect_data=<% $.hw_data %>\n publish:\n host_cpus: <% task().result %>\n on-success:\n - get_host_cpus_range_list: <% $.host_cpus %>\n - set_status_failed_get_host_cpus: <% not $.host_cpus %>\n on-error: set_status_failed_on_error_get_host_cpus\n\n get_host_cpus_range_list:\n action: tripleo.derive_params.convert_number_to_range_list\n input:\n num_list: <% $.host_cpus %>\n publish:\n host_cpus: <% task().result %>\n on-error: set_status_failed_get_host_cpus_range_list\n\n set_status_failed_get_host_cpus:\n publish:\n status: FAILED\n message: 'Unable to determine host cpus'\n on-success: fail\n\n set_status_failed_on_error_get_host_cpus:\n publish:\n status: FAILED\n message: <% task(get_host_cpus).result %>\n on-success: fail\n\n set_status_failed_get_host_cpus_range_list:\n publish:\n status: FAILED\n message: <% task(get_host_cpus_range_list).result %>\n on-success: fail\n\n\n host_derive_params:\n description: >\n This workflow derives parameters for the Host process, and is mainly associated with CPU pinning and huge memory pages.\n This workflow can be dependent on any feature or also can be invoked individually as well.\n\n 
input:\n - role_name\n - hw_data # introspection data\n - user_inputs\n - derived_parameters: {}\n\n output:\n derived_parameters: <% $.derived_parameters.mergeWith($.get('host_parameters', {})) %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_cpus:\n publish:\n cpus: <% $.hw_data.numa_topology.cpus %>\n on-success:\n - get_role_derive_params: <% $.cpus %>\n - set_status_failed_get_cpus: <% not $.cpus %>\n\n get_role_derive_params:\n publish:\n role_derive_params: <% $.derived_parameters.get(concat($.role_name, 'Parameters'), {}) %>\n # removing the role parameters (eg. ComputeParameters) in derived_parameters dictionary since already copied in role_derive_params.\n derived_parameters: <% $.derived_parameters.delete(concat($.role_name, 'Parameters')) %>\n on-success: get_host_cpus\n\n get_host_cpus:\n publish:\n host_cpus: <% $.role_derive_params.get('OvsDpdkCoreList', '') or $.role_derive_params.get('SriovHostCpusList', '') %>\n # SriovHostCpusList parameter is added temporarily for host_cpus and not needed in derived_parameters result.\n # SriovHostCpusList parameter is deleted in derived_parameters list and adding the updated role parameters\n # back in the derived_parameters.\n derived_parameters: <% $.derived_parameters + dict(concat($.role_name, 'Parameters') => $.role_derive_params.delete('SriovHostCpusList')) %>\n on-success: get_host_dpdk_combined_cpus\n\n get_host_dpdk_combined_cpus:\n publish:\n host_dpdk_combined_cpus: <% let(pmd_cpus => $.role_derive_params.get('OvsPmdCoreList', '')) -> switch($pmd_cpus => concat($pmd_cpus, ',', $.host_cpus), not $pmd_cpus => $.host_cpus) %>\n reserved_cpus: []\n on-success:\n - get_host_dpdk_combined_cpus_num_list: <% $.host_dpdk_combined_cpus %>\n - set_status_failed_get_host_dpdk_combined_cpus: <% not $.host_dpdk_combined_cpus %>\n\n get_host_dpdk_combined_cpus_num_list:\n action: tripleo.derive_params.convert_range_to_number_list\n input:\n range_list: <% $.host_dpdk_combined_cpus %>\n publish:\n 
host_dpdk_combined_cpus: <% task().result %>\n reserved_cpus: <% task().result.split(',') %>\n on-success: get_nova_cpus\n on-error: set_status_failed_get_host_dpdk_combined_cpus_num_list\n\n get_nova_cpus:\n publish:\n nova_cpus: <% let(reserved_cpus => $.reserved_cpus) -> $.cpus.select($.thread_siblings).flatten().where(not (str($) in $reserved_cpus)).join(',') %>\n on-success:\n - get_isol_cpus: <% $.nova_cpus %>\n - set_status_failed_get_nova_cpus: <% not $.nova_cpus %>\n\n # concatinates OvsPmdCoreList range format and NovaVcpuPinSet in range format. it may not be in perfect range format.\n # example: concatinates '12-15,19' and 16-18' ranges '12-15,19,16-18'\n get_isol_cpus:\n publish:\n isol_cpus: <% let(pmd_cpus => $.role_derive_params.get('OvsPmdCoreList','')) -> switch($pmd_cpus => concat($pmd_cpus, ',', $.nova_cpus), not $pmd_cpus => $.nova_cpus) %>\n on-success: get_isol_cpus_num_list\n\n # Gets the isol_cpus in the number list\n # example: '12-15,19,16-18' into '12,13,14,15,16,17,18,19'\n get_isol_cpus_num_list:\n action: tripleo.derive_params.convert_range_to_number_list\n input:\n range_list: <% $.isol_cpus %>\n publish:\n isol_cpus: <% task().result %>\n on-success: get_nova_cpus_range_list\n on-error: set_status_failed_get_isol_cpus_num_list\n\n get_nova_cpus_range_list:\n action: tripleo.derive_params.convert_number_to_range_list\n input:\n num_list: <% $.nova_cpus %>\n publish:\n nova_cpus: <% task().result %>\n on-success: get_isol_cpus_range_list\n on-error: set_status_failed_get_nova_cpus_range_list\n\n # converts number format isol_cpus into range format\n # example: '12,13,14,15,16,17,18,19' into '12-19'\n get_isol_cpus_range_list:\n action: tripleo.derive_params.convert_number_to_range_list\n input:\n num_list: <% $.isol_cpus %>\n publish:\n isol_cpus: <% task().result %>\n on-success: get_host_mem\n on-error: set_status_failed_get_isol_cpus_range_list\n\n get_host_mem:\n publish:\n host_mem: <% $.user_inputs.get('host_mem_default', 4096) 
%>\n on-success: check_default_hugepage_supported\n\n check_default_hugepage_supported:\n publish:\n default_hugepage_supported: <% $.hw_data.get('inventory', {}).get('cpu', {}).get('flags', []).contains('pdpe1gb') %>\n on-success:\n - get_total_memory: <% $.default_hugepage_supported %>\n - set_status_failed_check_default_hugepage_supported: <% not $.default_hugepage_supported %>\n\n get_total_memory:\n publish:\n total_memory: <% $.hw_data.get('inventory', {}).get('memory', {}).get('physical_mb', 0) %>\n on-success:\n - get_hugepage_allocation_percentage: <% $.total_memory %>\n - set_status_failed_get_total_memory: <% not $.total_memory %>\n\n get_hugepage_allocation_percentage:\n publish:\n huge_page_allocation_percentage: <% $.user_inputs.get('huge_page_allocation_percentage', 0) %>\n on-success:\n - get_hugepages: <% isInteger($.huge_page_allocation_percentage) and $.huge_page_allocation_percentage > 0 %>\n - set_status_failed_get_hugepage_allocation_percentage_invalid: <% not isInteger($.huge_page_allocation_percentage) %>\n - set_status_failed_get_hugepage_allocation_percentage_not_provided: <% $.huge_page_allocation_percentage = 0 %>\n\n get_hugepages:\n publish:\n hugepages: <% let(huge_page_perc => float($.huge_page_allocation_percentage)/100)-> int((($.total_memory/1024)-4) * $huge_page_perc) %>\n on-success:\n - get_cpu_model: <% $.hugepages %>\n - set_status_failed_get_hugepages: <% not $.hugepages %>\n\n get_cpu_model:\n publish:\n intel_cpu_model: <% $.hw_data.get('inventory', {}).get('cpu', {}).get('model_name', '').startsWith('Intel') %>\n on-success: get_iommu_info\n\n get_iommu_info:\n publish:\n iommu_info: <% switch($.intel_cpu_model => 'intel_iommu=on iommu=pt', not $.intel_cpu_model => '') %>\n on-success: get_kernel_args\n\n get_kernel_args:\n publish:\n kernel_args: <% concat('default_hugepagesz=1GB hugepagesz=1G ', 'hugepages=', str($.hugepages), ' ', $.iommu_info, ' isolcpus=', $.isol_cpus) %>\n on-success: get_host_parameters\n\n 
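[Editor's note] The `get_hugepages` -> `get_iommu_info` -> `get_kernel_args` chain above can be condensed into one Python sketch (sample values assumed; the workflow reads `total_memory` and the CPU model from introspection and the percentage from `user_inputs`, and this helper name is ours):

```python
def derive_kernel_args(total_memory_mb, huge_page_allocation_percentage,
                       cpu_model_name, isol_cpus):
    """Rebuild KernelArgs the way the host_derive_params tasks do."""
    # get_hugepages: reserve (total - 4) GB * percentage as 1 GB huge pages
    perc = float(huge_page_allocation_percentage) / 100
    hugepages = int(((total_memory_mb // 1024) - 4) * perc)
    # get_iommu_info: IOMMU flags are only emitted for Intel CPUs
    iommu_info = ('intel_iommu=on iommu=pt'
                  if cpu_model_name.startswith('Intel') else '')
    # get_kernel_args: same concat() as the YAQL expression above
    return ('default_hugepagesz=1GB hugepagesz=1G ' +
            'hugepages=' + str(hugepages) + ' ' + iommu_info +
            ' isolcpus=' + isol_cpus)
```

For example, 256 GB of RAM with a 50% allocation yields `hugepages=126`, matching `int(((262144/1024)-4) * 0.5)`.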
get_host_parameters:\n publish:\n host_parameters: <% dict(concat($.role_name, 'Parameters') => dict('NovaVcpuPinSet' => $.get('nova_cpus', ''), 'NovaReservedHostMemory' => $.get('host_mem', ''), 'KernelArgs' => $.get('kernel_args', ''), 'IsolCpusList' => $.get('isol_cpus', ''))) %>\n\n set_status_failed_get_cpus:\n publish:\n status: FAILED\n message: \"Unable to determine CPU's on NUMA nodes\"\n on-success: fail\n\n set_status_failed_get_host_dpdk_combined_cpus:\n publish:\n status: FAILED\n message: 'Unable to combine host and dpdk cpus list'\n on-success: fail\n\n set_status_failed_get_host_dpdk_combined_cpus_num_list:\n publish:\n status: FAILED\n message: <% task(get_host_dpdk_combined_cpus_num_list).result %>\n on-success: fail\n\n set_status_failed_get_nova_cpus:\n publish:\n status: FAILED\n message: 'Unable to determine nova vcpu pin set'\n on-success: fail\n\n set_status_failed_get_nova_cpus_range_list:\n publish:\n status: FAILED\n message: <% task(get_nova_cpus_range_list).result %>\n on-success: fail\n\n set_status_failed_get_isol_cpus_num_list:\n publish:\n status: FAILED\n message: <% task(get_isol_cpus_num_list).result %>\n on-success: fail\n\n set_status_failed_get_isol_cpus_range_list:\n publish:\n status: FAILED\n message: <% task(get_isol_cpus_range_list).result %>\n on-success: fail\n\n set_status_failed_check_default_hugepage_supported:\n publish:\n status: FAILED\n message: 'default huge page size 1GB is not supported'\n on-success: fail\n\n set_status_failed_get_total_memory:\n publish:\n status: FAILED\n message: 'Unable to determine total memory'\n on-success: fail\n\n set_status_failed_get_hugepage_allocation_percentage_invalid:\n publish:\n status: FAILED\n message: <% \"huge_page_allocation_percentage user input '{0}' is invalid\".format($.huge_page_allocation_percentage) %>\n on-success: fail\n\n set_status_failed_get_hugepage_allocation_percentage_not_provided:\n publish:\n status: FAILED\n message: 'huge_page_allocation_percentage 
user input is not provided'\n on-success: fail\n\n set_status_failed_get_hugepages:\n publish:\n status: FAILED\n message: 'Unable to determine huge pages'\n on-success: fail\n\n\n hci_derive_params:\n description: Derive the deployment parameters for HCI\n input:\n - role_name\n - environment_parameters\n - heat_resource_tree\n - introspection_data\n - user_inputs\n - derived_parameters: {}\n\n output:\n derived_parameters: <% $.derived_parameters.mergeWith($.get('hci_parameters', {})) %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_hci_inputs:\n publish:\n hci_profile: <% $.user_inputs.get('hci_profile', '') %>\n hci_profile_config: <% $.user_inputs.get('hci_profile_config', {}) %>\n MB_PER_GB: 1024\n on-success:\n - get_average_guest_memory_size_in_mb: <% $.hci_profile and $.hci_profile_config.get($.hci_profile, {}) %>\n - set_failed_invalid_hci_profile: <% $.hci_profile and not $.hci_profile_config.get($.hci_profile, {}) %>\n # When no hci_profile is specified, the workflow terminates without deriving any HCI parameters.\n\n get_average_guest_memory_size_in_mb:\n publish:\n average_guest_memory_size_in_mb: <% $.hci_profile_config.get($.hci_profile, {}).get('average_guest_memory_size_in_mb', 0) %>\n on-success:\n - get_average_guest_cpu_utilization_percentage: <% isInteger($.average_guest_memory_size_in_mb) %>\n - set_failed_invalid_average_guest_memory_size_in_mb: <% not isInteger($.average_guest_memory_size_in_mb) %>\n\n get_average_guest_cpu_utilization_percentage:\n publish:\n average_guest_cpu_utilization_percentage: <% $.hci_profile_config.get($.hci_profile, {}).get('average_guest_cpu_utilization_percentage', 0) %>\n on-success:\n - get_gb_overhead_per_guest: <% isInteger($.average_guest_cpu_utilization_percentage) %>\n - set_failed_invalid_average_guest_cpu_utilization_percentage: <% not isInteger($.average_guest_cpu_utilization_percentage) %>\n\n get_gb_overhead_per_guest:\n publish:\n gb_overhead_per_guest: <% 
$.user_inputs.get('gb_overhead_per_guest', 0.5) %>\n on-success:\n - get_gb_per_osd: <% isNumber($.gb_overhead_per_guest) %>\n - set_failed_invalid_gb_overhead_per_guest: <% not isNumber($.gb_overhead_per_guest) %>\n\n get_gb_per_osd:\n publish:\n gb_per_osd: <% $.user_inputs.get('gb_per_osd', 3) %>\n on-success:\n - get_cores_per_osd: <% isNumber($.gb_per_osd) %>\n - set_failed_invalid_gb_per_osd: <% not isNumber($.gb_per_osd) %>\n\n get_cores_per_osd:\n publish:\n cores_per_osd: <% $.user_inputs.get('cores_per_osd', 1.0) %>\n on-success:\n - get_extra_configs: <% isNumber($.cores_per_osd) %>\n - set_failed_invalid_cores_per_osd: <% not isNumber($.cores_per_osd) %>\n\n get_extra_configs:\n publish:\n extra_config: <% $.environment_parameters.get('ExtraConfig', {}) %>\n role_extra_config: <% $.environment_parameters.get(concat($.role_name, 'ExtraConfig'), {}) %>\n role_env_params: <% $.environment_parameters.get(concat($.role_name, 'Parameters'), {}) %>\n role_derive_params: <% $.derived_parameters.get(concat($.role_name, 'Parameters'), {}) %>\n on-success: get_num_osds\n\n get_num_osds:\n publish:\n num_osds: <% $.heat_resource_tree.parameters.get('CephAnsibleDisksConfig', {}).get('default', {}).get('devices', []).count() %>\n on-success:\n - get_memory_mb: <% $.num_osds %>\n # If there's no CephAnsibleDisksConfig then look for OSD configuration in hiera data\n - get_num_osds_from_hiera: <% not $.num_osds %>\n\n get_num_osds_from_hiera:\n publish:\n num_osds: <% $.role_extra_config.get('ceph::profile::params::osds', $.extra_config.get('ceph::profile::params::osds', {})).keys().count() %>\n on-success:\n - get_memory_mb: <% $.num_osds %>\n - set_failed_no_osds: <% not $.num_osds %>\n\n get_memory_mb:\n publish:\n memory_mb: <% $.introspection_data.get('memory_mb', 0) %>\n on-success:\n - get_nova_vcpu_pin_set: <% $.memory_mb %>\n - set_failed_get_memory_mb: <% not $.memory_mb %>\n\n # Determine the number of CPU cores available to Nova and Ceph. 
If\n # NovaVcpuPinSet is defined then use the number of vCPUs in the set,\n # otherwise use all of the cores identified in the introspection data.\n\n get_nova_vcpu_pin_set:\n publish:\n # NovaVcpuPinSet can be defined in multiple locations, and it's\n # important to select the value in order of precedence:\n #\n # 1) User specified value for this role\n # 2) User specified default value for all roles\n # 3) Value derived by another derived parameters workflow\n nova_vcpu_pin_set: <% $.role_env_params.get('NovaVcpuPinSet', $.environment_parameters.get('NovaVcpuPinSet', $.role_derive_params.get('NovaVcpuPinSet', ''))) %>\n on-success:\n - get_nova_vcpu_count: <% $.nova_vcpu_pin_set %>\n - get_num_cores: <% not $.nova_vcpu_pin_set %>\n\n get_nova_vcpu_count:\n action: tripleo.derive_params.convert_range_to_number_list\n input:\n range_list: <% $.nova_vcpu_pin_set %>\n publish:\n num_cores: <% task().result.split(',').count() %>\n on-success: calculate_nova_parameters\n on-error: set_failed_get_nova_vcpu_count\n\n get_num_cores:\n publish:\n num_cores: <% $.introspection_data.get('cpus', 0) %>\n on-success:\n - calculate_nova_parameters: <% $.num_cores %>\n - set_failed_get_num_cores: <% not $.num_cores %>\n\n # HCI calculations are broken into multiple steps. This is necessary\n # because variables published by a Mistral task are not available\n # for use by that same task. 
Variables computed and published in a task\n # are only available in subsequent tasks.\n #\n # The HCI calculations compute two Nova parameters:\n # - reserved_host_memory\n # - cpu_allocation_ratio\n #\n # The reserved_host_memory calculation computes the amount of memory\n # that needs to be reserved for Ceph and the total amount of \"guest\n # overhead\" memory that is based on the anticipated number of guests.\n # Psuedo-code for the calculation (disregarding MB and GB units) is\n # as follows:\n #\n # ceph_memory = mem_per_osd * num_osds\n # nova_memory = total_memory - ceph_memory\n # num_guests = nova_memory /\n # (average_guest_memory_size + overhead_per_guest)\n # reserved_memory = ceph_memory + (num_guests * overhead_per_guest)\n #\n # The cpu_allocation_ratio calculation is similar in that it takes into\n # account the number of cores that must be reserved for Ceph.\n #\n # ceph_cores = cores_per_osd * num_osds\n # guest_cores = num_cores - ceph_cores\n # guest_vcpus = guest_cores / average_guest_utilization\n # cpu_allocation_ratio = guest_vcpus / num_cores\n\n calculate_nova_parameters:\n publish:\n avg_guest_util: <% $.average_guest_cpu_utilization_percentage / 100.0 %>\n avg_guest_size_gb: <% $.average_guest_memory_size_in_mb / float($.MB_PER_GB) %>\n memory_gb: <% $.memory_mb / float($.MB_PER_GB) %>\n ceph_mem_gb: <% $.gb_per_osd * $.num_osds %>\n nonceph_cores: <% $.num_cores - int($.cores_per_osd * $.num_osds) %>\n on-success: calc_step_2\n\n calc_step_2:\n publish:\n num_guests: <% int(($.memory_gb - $.ceph_mem_gb) / ($.avg_guest_size_gb + $.gb_overhead_per_guest)) %>\n guest_vcpus: <% $.nonceph_cores / $.avg_guest_util %>\n on-success: calc_step_3\n\n calc_step_3:\n publish:\n reserved_host_memory: <% $.MB_PER_GB * int($.ceph_mem_gb + ($.num_guests * $.gb_overhead_per_guest)) %>\n cpu_allocation_ratio: <% $.guest_vcpus / $.num_cores %>\n on-success: validate_results\n\n validate_results:\n publish:\n # Verify whether HCI is viable:\n # - At 
least 80% of the memory is reserved for Ceph and guest overhead\n # - At least half of the CPU cores must be available to Nova\n mem_ok: <% $.reserved_host_memory <= ($.memory_mb * 0.8) %>\n cpu_ok: <% $.cpu_allocation_ratio >= 0.5 %>\n on-success:\n - set_failed_insufficient_mem: <% not $.mem_ok %>\n - set_failed_insufficient_cpu: <% not $.cpu_ok %>\n - publish_hci_parameters: <% $.mem_ok and $.cpu_ok %>\n\n publish_hci_parameters:\n publish:\n # TODO(abishop): Update this when the cpu_allocation_ratio can be set\n # via a THT parameter (no such parameter currently exists). Until a\n # THT parameter exists, use hiera data to set the cpu_allocation_ratio.\n hci_parameters: <% dict(concat($.role_name, 'Parameters') => dict('NovaReservedHostMemory' => $.reserved_host_memory)) + dict(concat($.role_name, 'ExtraConfig') => dict('nova::cpu_allocation_ratio' => $.cpu_allocation_ratio)) %>\n\n set_failed_invalid_hci_profile:\n publish:\n message: \"'<% $.hci_profile %>' is not a valid HCI profile.\"\n on-success: fail\n\n set_failed_invalid_average_guest_memory_size_in_mb:\n publish:\n message: \"'<% $.average_guest_memory_size_in_mb %>' is not a valid average_guest_memory_size_in_mb value.\"\n on-success: fail\n\n set_failed_invalid_gb_overhead_per_guest:\n publish:\n message: \"'<% $.gb_overhead_per_guest %>' is not a valid gb_overhead_per_guest value.\"\n on-success: fail\n\n set_failed_invalid_gb_per_osd:\n publish:\n message: \"'<% $.gb_per_osd %>' is not a valid gb_per_osd value.\"\n on-success: fail\n\n set_failed_invalid_cores_per_osd:\n publish:\n message: \"'<% $.cores_per_osd %>' is not a valid cores_per_osd value.\"\n on-success: fail\n\n set_failed_invalid_average_guest_cpu_utilization_percentage:\n publish:\n message: \"'<% $.average_guest_cpu_utilization_percentage %>' is not a valid average_guest_cpu_utilization_percentage value.\"\n on-success: fail\n\n set_failed_no_osds:\n publish:\n message: \"No Ceph OSDs found in the overcloud definition 
('ceph::profile::params::osds').\"\n on-success: fail\n\n set_failed_get_memory_mb:\n publish:\n message: \"Unable to determine the amount of physical memory (no 'memory_mb' found in introspection_data).\"\n on-success: fail\n\n set_failed_get_nova_vcpu_count:\n publish:\n message: <% task(get_nova_vcpu_count).result %>\n on-success: fail\n\n set_failed_get_num_cores:\n publish:\n message: \"Unable to determine the number of CPU cores (no 'cpus' found in introspection_data).\"\n on-success: fail\n\n set_failed_insufficient_mem:\n publish:\n message: \"<% $.memory_mb %> MB is not enough memory to run hyperconverged.\"\n on-success: fail\n\n set_failed_insufficient_cpu:\n publish:\n message: \"<% $.num_cores %> CPU cores are not enough to run hyperconverged.\"\n on-success: fail\n", "name": "tripleo.derive_params_formulas.v1", "tags": [], "created_at": "2018-06-26 05:45:14", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "eb6af00e-4866-4ccc-82d4-a865ff50fed7"} > >2018-06-26 11:15:14,189 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 11:15:14,191 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '--- >version: '2.0' >name: tripleo.plan_management.v1 >description: TripleO Overcloud Deployment Workflows v1 > >workflows: > > create_default_deployment_plan: > description: > > This workflow exists to maintain backwards compatibility in pike. This > workflow will likely be removed in queens in favor of create_deployment_plan. 
> input: > - container > - queue_name: tripleo > - generate_passwords: true > tags: > - tripleo-common-managed > tasks: > call_create_deployment_plan: > workflow: tripleo.plan_management.v1.create_deployment_plan > on-success: set_status_success > on-error: call_create_deployment_plan_set_status_failed > input: > container: <% $.container %> > queue_name: <% $.queue_name %> > generate_passwords: <% $.generate_passwords %> > use_default_templates: true > > set_status_success: > on-success: notify_zaqar > publish: > status: SUCCESS > message: <% task(call_create_deployment_plan).result %> > > call_create_deployment_plan_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(call_create_deployment_plan).result %> > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.create_default_deployment_plan > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > create_deployment_plan: > description: > > This workflow provides the capability to create a deployment plan using > the default heat templates provided in a standard TripleO undercloud > deployment, heat templates contained in an external git repository, or a > swift container that already contains templates. > input: > - container > - source_url: null > - queue_name: tripleo > - generate_passwords: true > - use_default_templates: false > > tags: > - tripleo-common-managed > > tasks: > container_required_check: > description: > > If using the default templates or importing templates from a git > repository, a new container needs to be created. If using an existing > container containing templates, skip straight to create_plan. 
> on-success: > - verify_container_doesnt_exist: <% $.use_default_templates or $.source_url %> > - create_plan: <% $.use_default_templates = false and $.source_url = null %> > > verify_container_doesnt_exist: > action: swift.head_container container=<% $.container %> > on-success: notify_zaqar > on-error: create_container > publish: > status: FAILED > message: "Unable to create plan. The Swift container already exists" > > create_container: > action: tripleo.plan.create_container container=<% $.container %> > on-success: templates_source_check > on-error: create_container_set_status_failed > > cleanup_temporary_files: > action: tripleo.git.clean container=<% $.container %> > > templates_source_check: > on-success: > - upload_default_templates: <% $.use_default_templates = true %> > - clone_git_repo: <% $.source_url != null %> > > clone_git_repo: > action: tripleo.git.clone container=<% $.container %> url=<% $.source_url %> > on-success: upload_templates_directory > on-error: clone_git_repo_set_status_failed > > upload_templates_directory: > action: tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %> > on-success: create_plan > on-complete: cleanup_temporary_files > on-error: upload_templates_directory_set_status_failed > > upload_default_templates: > action: tripleo.templates.upload container=<% $.container %> > on-success: create_plan > on-error: upload_to_container_set_status_failed > > create_plan: > on-success: > - ensure_passwords_exist: <% $.generate_passwords = true %> > - add_root_stack_name: <% $.generate_passwords != true %> > > ensure_passwords_exist: > action: tripleo.parameters.generate_passwords container=<% $.container %> > on-success: add_root_stack_name > on-error: ensure_passwords_exist_set_status_failed > > add_root_stack_name: > action: tripleo.parameters.update > input: > container: <% $.container %> > parameters: > RootStackName: <% $.container %> > on-success: container_images_prepare > 
publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > container_images_prepare: > description: > > Populate all container image parameters with default values. > action: tripleo.container_images.prepare container=<% $.container %> > on-success: process_templates > on-error: container_images_prepare_set_status_failed > > process_templates: > action: tripleo.templates.process container=<% $.container %> > on-success: set_status_success > on-error: process_templates_set_status_failed > > set_status_success: > on-success: notify_zaqar > publish: > status: SUCCESS > message: 'Plan created.' > > create_container_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(create_container).result %> > > clone_git_repo_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(clone_git_repo).result %> > > upload_templates_directory_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(upload_templates_directory).result %> > > upload_to_container_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(upload_default_templates).result %> > > ensure_passwords_exist_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(ensure_passwords_exist).result %> > > process_templates_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(process_templates).result %> > > container_images_prepare_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(container_images_prepare).result %> > > notify_zaqar: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.create_deployment_plan > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - 
fail: <% $.get('status') = "FAILED" %> > > update_deployment_plan: > input: > - container > - source_url: null > - queue_name: tripleo > - generate_passwords: true > - plan_environment: null > tags: > - tripleo-common-managed > tasks: > templates_source_check: > on-success: > - update_plan: <% $.source_url = null %> > - clone_git_repo: <% $.source_url != null %> > > clone_git_repo: > action: tripleo.git.clone container=<% $.container %> url=<% $.source_url %> > on-success: upload_templates_directory > on-error: clone_git_repo_set_status_failed > > upload_templates_directory: > action: tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %> > on-success: create_swift_rings_backup_plan > on-complete: cleanup_temporary_files > on-error: upload_templates_directory_set_status_failed > > cleanup_temporary_files: > action: tripleo.git.clean container=<% $.container %> > > create_swift_rings_backup_plan: > workflow: tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan > on-success: update_plan > on-error: create_swift_rings_backup_plan_set_status_failed > input: > container: <% $.container %> > queue_name: <% $.queue_name %> > use_default_templates: true > > update_plan: > on-success: > - ensure_passwords_exist: <% $.generate_passwords = true %> > - container_images_prepare: <% $.generate_passwords != true %> > > ensure_passwords_exist: > action: tripleo.parameters.generate_passwords container=<% $.container %> > on-success: container_images_prepare > on-error: ensure_passwords_exist_set_status_failed > > container_images_prepare: > description: > > Populate all container image parameters with default values. 
> action: tripleo.container_images.prepare container=<% $.container %> > on-success: process_templates > on-error: container_images_prepare_set_status_failed > > process_templates: > action: tripleo.templates.process container=<% $.container %> > on-success: > - set_status_success: <% $.plan_environment = null %> > - upload_plan_environment: <% $.plan_environment != null %> > on-error: process_templates_set_status_failed > > upload_plan_environment: > action: tripleo.templates.upload_plan_environment container=<% $.container %> plan_environment=<% $.plan_environment %> > on-success: set_status_success > on-error: process_templates_set_status_failed > > set_status_success: > on-success: notify_zaqar > publish: > status: SUCCESS > message: 'Plan updated.' > > create_swift_rings_backup_plan_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(create_swift_rings_backup_plan).result %> > > clone_git_repo_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(clone_git_repo).result %> > > upload_templates_directory_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(upload_templates_directory).result %> > > process_templates_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(process_templates).result %> > > ensure_passwords_exist_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(ensure_passwords_exist).result %> > > container_images_prepare_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(container_images_prepare).result %> > > notify_zaqar: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.update_deployment_plan > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: 
> - fail: <% $.get('status') = "FAILED" %> > > delete_deployment_plan: > description: > > Deletes a plan by deleting the container matching plan_name. It will > not delete the plan if a stack exists with the same name. > > tags: > - tripleo-common-managed > > input: > - container: overcloud > - queue_name: tripleo > > tasks: > delete_plan: > action: tripleo.plan.delete container=<% $.container %> > on-complete: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > publish: > status: SUCCESS > message: <% task().result %> > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.delete_deployment_plan > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > > get_passwords: > description: Retrieves passwords for a given plan > input: > - container > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > verify_container_exists: > action: swift.head_container container=<% $.container %> > on-success: get_environment_passwords > on-error: verify_container_set_status_failed > > get_environment_passwords: > action: tripleo.parameters.get_passwords container=<% $.container %> > on-success: get_passwords_set_status_success > on-error: get_passwords_set_status_failed > > get_passwords_set_status_success: > on-success: notify_zaqar > publish: > status: SUCCESS > message: <% task(get_environment_passwords).result %> > > get_passwords_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(get_environment_passwords).result %> > > verify_container_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(verify_container_exists).result %> > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: 
tripleo.plan_management.v1.get_passwords > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > export_deployment_plan: > description: Creates an export tarball for a given plan > input: > - plan > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > export_plan: > action: tripleo.plan.export > input: > plan: <% $.plan %> > delete_after: 3600 > exports_container: "plan-exports" > on-success: create_tempurl > on-error: export_plan_set_status_failed > > create_tempurl: > action: tripleo.swift.tempurl > on-success: set_status_success > on-error: create_tempurl_set_status_failed > input: > container: "plan-exports" > obj: "<% $.plan %>.tar.gz" > valid: 3600 > > set_status_success: > on-success: notify_zaqar > publish: > status: SUCCESS > message: <% task(create_tempurl).result %> > tempurl: <% task(create_tempurl).result %> > > export_plan_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(export_plan).result %> > > create_tempurl_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(create_tempurl).result %> > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.export_deployment_plan > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > tempurl: <% $.get('tempurl', '') %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > get_deprecated_parameters: > description: Gets the list of deprecated parameters in the whole of the plan including nested stack > input: > - container: overcloud > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > get_flatten_data: > action: tripleo.parameters.get_flatten container=<% $.container %> > on-success: get_deprecated_params > on-error: 
set_status_failed_get_flatten_data > publish: > user_params: <% task().result.environment_parameters %> > plan_params: <% task().result.heat_resource_tree.parameters.keys() %> > parameter_groups: <% task().result.heat_resource_tree.resources.values().where( $.get('parameter_groups') ).select($.parameter_groups).flatten() %> > > get_deprecated_params: > on-success: check_if_user_param_has_deprecated > publish: > deprecated_params: <% $.parameter_groups.where($.get('label') = 'deprecated').select($.parameters).flatten().distinct() %> > > check_if_user_param_has_deprecated: > on-success: get_unused_params > publish: > deprecated_result: <% let(up => $.user_params) -> $.deprecated_params.select( dict('parameter' => $, 'deprecated' => true, 'user_defined' => $up.keys().contains($)) ) %> > > # Get the list of parameters which are defined by the user via an environment file's parameter_default, but are not part of the plan definition > # It may be possible that the parameter will be used by a service, but the service is not part of the plan. > # In such cases, the parameter will be reported as unused; care should be taken to understand whether it is really unused or not. 
> get_unused_params: > on-success: send_message > publish: > unused_params: <% let(plan_params => $.plan_params) -> $.user_params.keys().where( not $plan_params.contains($) ) %> > > set_status_failed_get_flatten_data: > on-success: send_message > publish: > status: FAILED > message: <% task(get_flatten_data).result %> > > send_message: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.get_deprecated_parameters > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > deprecated: <% $.get('deprecated_result', []) %> > unused: <% $.get('unused_params', []) %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > publish_ui_logs_to_swift: > description: > > This workflow drains a zaqar queue, and publish its messages into a log > file in swift. This workflow is called by cron trigger. > > input: > - logging_queue_name: tripleo-ui-logging > - logging_container: tripleo-ui-logs > > tags: > - tripleo-common-managed > > tasks: > > # We're using a NoOp action to start the workflow. The recursive nature > # of the workflow means that Mistral will refuse to execute it because it > # doesn't know where to begin. 
> start: > on-success: get_messages > > get_messages: > action: zaqar.claim_messages > on-success: > - format_messages: <% task().result.len() > 0 %> > input: > queue_name: <% $.logging_queue_name %> > ttl: 60 > grace: 60 > publish: > status: SUCCESS > messages: <% task().result %> > message_ids: <% task().result.select($._id) %> > > format_messages: > action: tripleo.logging_to_swift.format_messages > on-success: upload_to_swift > input: > messages: <% $.messages %> > publish: > status: SUCCESS > formatted_messages: <% task().result %> > > upload_to_swift: > action: tripleo.logging_to_swift.publish_ui_log_to_swift > on-success: delete_messages > input: > logging_data: <% $.formatted_messages %> > logging_container: <% $.logging_container %> > publish: > status: SUCCESS > > delete_messages: > action: zaqar.delete_messages > on-success: get_messages > input: > queue_name: <% $.logging_queue_name %> > messages: <% $.message_ids %> > publish: > status: SUCCESS > > download_logs: > description: Creates a tarball with logging data > input: > - queue_name: tripleo > - logging_container: "tripleo-ui-logs" > - downloads_container: "tripleo-ui-logs-downloads" > - delete_after: 3600 > > tags: > - tripleo-common-managed > > tasks: > > publish_logs: > workflow: tripleo.plan_management.v1.publish_ui_logs_to_swift > on-success: prepare_log_download > on-error: publish_logs_set_status_failed > > prepare_log_download: > action: tripleo.logging_to_swift.prepare_log_download > input: > logging_container: <% $.logging_container %> > downloads_container: <% $.downloads_container %> > delete_after: <% $.delete_after %> > on-success: create_tempurl > on-error: download_logs_set_status_failed > publish: > filename: <% task().result %> > > create_tempurl: > action: tripleo.swift.tempurl > on-success: set_status_success > on-error: create_tempurl_set_status_failed > input: > container: <% $.downloads_container %> > obj: <% $.filename %> > valid: 3600 > publish: > tempurl: <% task().result 
%> > > set_status_success: > on-success: notify_zaqar > publish: > status: SUCCESS > message: <% task(create_tempurl).result %> > tempurl: <% task(create_tempurl).result %> > > publish_logs_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(publish_logs).result %> > > download_logs_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(prepare_log_download).result %> > > create_tempurl_set_status_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(create_tempurl).result %> > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.download_logs > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > tempurl: <% $.get('tempurl', '') %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > list_roles: > description: Retrieve the roles_data.yaml and return a usable object > > input: > - container: overcloud > - roles_data_file: 'roles_data.yaml' > - queue_name: tripleo > > output: > roles_data: <% $.roles_data %> > > tags: > - tripleo-common-managed > > tasks: > get_roles_data: > action: swift.get_object > input: > container: <% $.container %> > obj: <% $.roles_data_file %> > publish: > roles_data: <% yaml_parse(task().result.last()) %> > status: SUCCESS > on-success: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.list_roles > payload: > status: <% $.status %> > roles_data: <% $.get('roles_data', {}) %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > list_available_networks: > input: > - container > - queue_name: tripleo > > output: > 
available_networks: <% $.available_networks %> > > tags: > - tripleo-common-managed > > tasks: > get_network_file_names: > action: swift.get_container > input: > container: <% $.container %> > publish: > network_names: <% task().result[1].where($.name.startsWith('networks/')).where($.name.endsWith('.yaml')).name %> > on-success: get_network_files > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > get_network_files: > with-items: network_name in <% $.network_names %> > action: swift.get_object > on-success: transform_output > on-error: notify_zaqar > input: > container: <% $.container %> > obj: <% $.network_name %> > publish: > status: SUCCESS > available_yaml_networks: <% task().result.select($[1]) %> > publish-on-error: > status: FAILED > message: <% task().result %> > > transform_output: > publish: > status: SUCCESS > available_networks: <% yaml_parse($.available_yaml_networks.join("\n")) %> > publish-on-error: > status: FAILED > message: <% task().result %> > on-complete: notify_zaqar > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.list_available_networks > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > available_networks: <% $.get('available_networks', []) %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > list_networks: > input: > - container: 'overcloud' > - network_data_file: 'network_data.yaml' > - queue_name: tripleo > > output: > network_data: <% $.network_data %> > > tags: > - tripleo-common-managed > > tasks: > get_networks: > action: swift.get_object > input: > container: <% $.container %> > obj: <% $.network_data_file %> > on-success: notify_zaqar > publish: > network_data: <% yaml_parse(task().result.last()) %> > status: SUCCESS > message: <% task().result %> > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% 
task().result %> > > notify_zaqar: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.list_networks > payload: > status: <% $.status %> > network_data: <% $.get('network_data', {}) %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > validate_network_files: > description: Validate network files exist > input: > - container: overcloud > - network_data > - queue_name: tripleo > > output: > network_data: <% $.network_data %> > > tags: > - tripleo-common-managed > > tasks: > get_network_names: > publish: > network_names_lower: <% $.network_data.where($.containsKey('name_lower')).name_lower %> > network_names: <% $.network_data.where(not $.containsKey('name_lower')).name %> > on-success: validate_networks > > validate_networks: > with-items: network in <% $.network_names_lower.concat($.network_names) %> > action: swift.head_object > input: > container: <% $.container %> > obj: network/<% $.network.toLower() %>.yaml > publish: > status: SUCCESS > message: <% task().result %> > on-success: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > notify_zaqar: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.validate_network_files > payload: > status: <% $.status %> > message: <% $.message %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > validate_networks: > description: Validate network files were generated properly and exist > input: > - container: 'overcloud' > - network_data_file: 'network_data.yaml' > - queue_name: tripleo > > output: > network_data: <% $.network_data %> > > tags: > - tripleo-common-managed > > tasks: > get_network_data: > workflow: list_networks > input: > container: <% $.container %> > network_data_file: 
<% $.network_data_file %> > queue_name: <% $.queue_name %> > publish: > network_data: <% task().result.network_data %> > on-success: validate_networks > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: > notify_zaqar > > validate_networks: > workflow: validate_network_files > input: > container: <% $.container %> > network_data: <% $.network_data %> > queue_name: <% $.queue_name %> > publish: > status: SUCCESS > message: <% task().result %> > on-success: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.validate_networks > payload: > status: <% $.status %> > network_data: <% $.get('network_data', {}) %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > validate_roles: > description: Validate roles data exists and is parsable > > input: > - container: overcloud > - roles_data_file: 'roles_data.yaml' > - queue_name: tripleo > > output: > roles_data: <% $.roles_data %> > > tags: > - tripleo-common-managed > > tasks: > get_roles_data: > workflow: list_roles > input: > container: <% $.container %> > roles_data_file: <% $.roles_data_file %> > queue_name: <% $.queue_name %> > publish: > roles_data: <% task().result.roles_data %> > status: SUCCESS > on-success: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: > notify_zaqar > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.validate_networks > payload: > status: <% $.status %> > roles_data: <% $.get('roles_data', '') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > _validate_networks_from_roles: > 
description: Internal workflow for validating a network exists from a role > > input: > - container: overcloud > - defined_networks > - networks_in_roles > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > validate_network_in_network_data: > publish: > networks_found: <% $.networks_in_roles.toSet().intersect($.defined_networks.toSet()) %> > networks_not_found: <% $.networks_in_roles.toSet().difference($.defined_networks.toSet()) %> > on-success: > - network_not_found: <% $.networks_not_found %> > - notify_zaqar: <% not $.networks_not_found %> > > network_not_found: > publish: > message: <% "Some networks in roles are not defined, {0}".format($.networks_not_found.join(', ')) %> > status: FAILED > on-success: notify_zaqar > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1._validate_networks_from_role > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > validate_roles_and_networks: > description: Validate that roles and network data are valid > > input: > - container: overcloud > - roles_data_file: 'roles_data.yaml' > - network_data_file: 'network_data.yaml' > - queue_name: tripleo > > output: > roles_data: <% $.roles_data %> > network_data: <% $.network_data %> > > tags: > - tripleo-common-managed > > tasks: > validate_network_data: > workflow: validate_networks > input: > container: <% $.container %> > network_data_file: <% $.network_data_file %> > queue_name: <% $.queue_name %> > publish: > network_data: <% task().result.network_data %> > on-success: validate_roles_data > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > validate_roles_data: > workflow: validate_roles > input: > container: <% $.container %> > roles_data_file: <% $.roles_data_file %> > queue_name: <% $.queue_name %> 
> publish: > roles_data: <% task().result.roles_data %> > role_networks_data: <% task().result.roles_data.networks %> > networks_in_roles: <% task().result.roles_data.networks.flatten().distinct() %> > on-success: validate_roles_and_networks > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > validate_roles_and_networks: > workflow: _validate_networks_from_roles > input: > container: <% $.container %> > defined_networks: <% $.network_data.name %> > networks_in_roles: <% $.networks_in_roles %> > queue_name: <% $.queue_name %> > publish: > status: SUCCESS > on-success: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result.message %> > on-error: notify_zaqar > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.validate_roles_and_networks > payload: > status: <% $.status %> > roles_data: <% $.get('roles_data', {}) %> > network_data: <% $.get('network_data', {}) %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > list_available_roles: > input: > - container: overcloud > - queue_name: tripleo > > output: > available_roles: <% $.available_roles %> > > tags: > - tripleo-common-managed > > tasks: > get_role_file_names: > action: swift.get_container > input: > container: <% $.container %> > publish: > role_names: <% task().result[1].where($.name.startsWith('roles/')).where($.name.endsWith('.yaml')).name %> > on-success: get_role_files > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > get_role_files: > with-items: role_name in <% $.role_names %> > action: swift.get_object > on-success: transform_output > on-error: notify_zaqar > input: > container: <% $.container %> > obj: <% $.role_name %> > publish: > status: SUCCESS > available_yaml_roles: <% task().result.select($[1]) %> > publish-on-error: 
> status: FAILED > message: <% task().result %> > > transform_output: > publish: > status: SUCCESS > available_roles: <% yaml_parse($.available_yaml_roles.join("\n")) %> > publish-on-error: > status: FAILED > message: <% task().result %> > on-complete: notify_zaqar > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.list_available_roles > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > available_roles: <% $.get('available_roles', []) %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > update_roles: > description: > > takes data in JSON format, validates its contents, and persists it in > roles_data.yaml; after a successful update, templates are regenerated. > input: > - container > - roles > - roles_data_file: 'roles_data.yaml' > - replace_all: false > - queue_name: tripleo > tags: > - tripleo-common-managed > tasks: > get_available_roles: > workflow: list_available_roles > input: > container: <% $.container %> > queue_name: <% $.queue_name %> > publish: > available_roles: <% task().result.available_roles %> > on-success: validate_input > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > validate_input: > description: > > validate the format of input (verify that each role in input has the > required attributes set. 
check README in roles directory in t-h-t), > validate that roles in input exist in roles directory in t-h-t > action: tripleo.plan.validate_roles > input: > container: <% $.container %> > roles: <% $.roles %> > available_roles: <% $.available_roles %> > on-success: get_network_data > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > get_network_data: > workflow: list_networks > input: > container: <% $.container %> > queue_name: <% $.queue_name %> > publish: > network_data: <% task().result.network_data %> > on-success: validate_network_names > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > validate_network_names: > description: > > validate that Network names assigned to Role exist in > network-data.yaml object in Swift container > workflow: _validate_networks_from_roles > input: > container: <% $.container %> > defined_networks: <% $.network_data.name %> > networks_in_roles: <% $.roles.networks.flatten().distinct() %> > queue_name: <% $.queue_name %> > on-success: get_current_roles > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result.message %> > > get_current_roles: > workflow: list_roles > input: > container: <% $.container %> > roles_data_file: <% $.roles_data_file %> > queue_name: <% $.queue_name %> > publish: > current_roles: <% task().result.roles_data %> > on-success: update_roles_data > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > update_roles_data: > description: > > update roles_data.yaml object in Swift with roles from workflow input > action: tripleo.plan.update_roles > input: > container: <% $.container %> > roles: <% $.roles %> > current_roles: <% $.current_roles %> > replace_all: <% $.replace_all %> > publish: > updated_roles_data: <% task().result.roles %> > on-success: update_roles_data_in_swift > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% 
task().result %> > > update_roles_data_in_swift: > description: > > update roles_data.yaml object in Swift with data from workflow input > action: swift.put_object > input: > container: <% $.container %> > obj: <% $.roles_data_file %> > contents: <% yaml_dump($.updated_roles_data) %> > on-success: regenerate_templates > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > regenerate_templates: > action: tripleo.templates.process container=<% $.container %> > on-success: get_updated_roles > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > get_updated_roles: > workflow: list_roles > input: > container: <% $.container %> > roles_data_file: <% $.roles_data_file %> > publish: > updated_roles: <% task().result.roles_data %> > status: SUCCESS > on-complete: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.roles.v1.update_roles > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > updated_roles: <% $.get('updated_roles', []) %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > select_roles: > description: > > takes a list of role names as input and populates roles_data.yaml in > container in Swift with respective roles from 'roles directory' > input: > - container > - role_names > - roles_data_file: 'roles_data.yaml' > - replace_all: true > - queue_name: tripleo > tags: > - tripleo-common-managed > tasks: > > get_available_roles: > workflow: list_available_roles > input: > container: <% $.container %> > queue_name: <% $.queue_name %> > publish: > available_roles: <% task().result.available_roles %> > on-success: get_current_roles > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > get_current_roles: > workflow: list_roles > 
input: > container: <% $.container %> > roles_data_file: <% $.roles_data_file %> > queue_name: <% $.queue_name %> > publish: > current_roles: <% task().result.roles_data %> > on-success: gather_roles > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > gather_roles: > description: > > for each role name from the input, check if it exists in > roles_data.yaml, if yes, use that role definition, if not, get the > role definition from roles directory. Use the gathered roles > definitions as input to updateRolesWorkflow - this ensures > configuration of the roles which are already in roles_data.yaml > will not get overridden by data from roles directory > action: tripleo.plan.gather_roles > input: > role_names: <% $.role_names %> > current_roles: <% $.current_roles %> > available_roles: <% $.available_roles %> > publish: > gathered_roles: <% task().result.gathered_roles %> > on-success: call_update_roles_workflow > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > call_update_roles_workflow: > workflow: update_roles > input: > container: <% $.container %> > roles: <% $.gathered_roles %> > roles_data_file: <% $.roles_data_file %> > replace_all: <% $.replace_all %> > queue_name: <% $.queue_name %> > on-complete: notify_zaqar > publish: > selected_roles: <% task().result.updated_roles %> > status: SUCCESS > publish-on-error: > status: FAILED > message: <% task().result %> > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.plan_management.v1.select_roles > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > selected_roles: <% $.get('selected_roles', []) %> > on-success: > - fail: <% $.get('status') = "FAILED" %> >' >2018-06-26 11:15:17,044 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 47190 >2018-06-26 11:15:17,085 DEBUG: RESP: [201] 
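The `gather_roles` step logged above merges role definitions: a role already customized in `roles_data.yaml` wins over the default shipped in the roles directory, so existing configuration is not overridden. A minimal Python sketch of that merge (function name and dict shapes are assumptions for illustration; the real logic lives in the `tripleo.plan.gather_roles` action):

```python
def gather_roles(role_names, current_roles, available_roles):
    """Return one definition per requested role name, preferring
    definitions already present in roles_data.yaml (current_roles)
    over the defaults from the roles directory (available_roles)."""
    current_by_name = {r["name"]: r for r in current_roles}
    available_by_name = {r["name"]: r for r in available_roles}

    gathered = []
    for name in role_names:
        if name in current_by_name:        # keep the customized definition
            gathered.append(current_by_name[name])
        elif name in available_by_name:    # fall back to the shipped default
            gathered.append(available_by_name[name])
        else:
            raise ValueError("Role %s not found in roles directory" % name)
    return gathered
```

This mirrors why `select_roles` fetches both `list_roles` and `list_available_roles` before calling the update workflow.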
Content-Length: 47190 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:17 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.plan_management.v1\ndescription: TripleO Overcloud Deployment Workflows v1\n\nworkflows:\n\n create_default_deployment_plan:\n description: >\n This workflow exists to maintain backwards compatibility in pike. This\n workflow will likely be removed in queens in favor of create_deployment_plan.\n input:\n - container\n - queue_name: tripleo\n - generate_passwords: true\n tags:\n - tripleo-common-managed\n tasks:\n call_create_deployment_plan:\n workflow: tripleo.plan_management.v1.create_deployment_plan\n on-success: set_status_success\n on-error: call_create_deployment_plan_set_status_failed\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n generate_passwords: <% $.generate_passwords %>\n use_default_templates: true\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(call_create_deployment_plan).result %>\n\n call_create_deployment_plan_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(call_create_deployment_plan).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.create_default_deployment_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n create_deployment_plan:\n description: >\n This workflow provides the capability to create a deployment plan using\n the default heat templates provided in a standard TripleO undercloud\n deployment, heat templates contained in an external git repository, or a\n swift container that already contains templates.\n input:\n - container\n - source_url: null\n - queue_name: tripleo\n - generate_passwords: true\n - 
use_default_templates: false\n\n tags:\n - tripleo-common-managed\n\n tasks:\n container_required_check:\n description: >\n If using the default templates or importing templates from a git\n repository, a new container needs to be created. If using an existing\n container containing templates, skip straight to create_plan.\n on-success:\n - verify_container_doesnt_exist: <% $.use_default_templates or $.source_url %>\n - create_plan: <% $.use_default_templates = false and $.source_url = null %>\n\n verify_container_doesnt_exist:\n action: swift.head_container container=<% $.container %>\n on-success: notify_zaqar\n on-error: create_container\n publish:\n status: FAILED\n message: \"Unable to create plan. The Swift container already exists\"\n\n create_container:\n action: tripleo.plan.create_container container=<% $.container %>\n on-success: templates_source_check\n on-error: create_container_set_status_failed\n\n cleanup_temporary_files:\n action: tripleo.git.clean container=<% $.container %>\n\n templates_source_check:\n on-success:\n - upload_default_templates: <% $.use_default_templates = true %>\n - clone_git_repo: <% $.source_url != null %>\n\n clone_git_repo:\n action: tripleo.git.clone container=<% $.container %> url=<% $.source_url %>\n on-success: upload_templates_directory\n on-error: clone_git_repo_set_status_failed\n\n upload_templates_directory:\n action: tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %>\n on-success: create_plan\n on-complete: cleanup_temporary_files\n on-error: upload_templates_directory_set_status_failed\n\n upload_default_templates:\n action: tripleo.templates.upload container=<% $.container %>\n on-success: create_plan\n on-error: upload_to_container_set_status_failed\n\n create_plan:\n on-success:\n - ensure_passwords_exist: <% $.generate_passwords = true %>\n - add_root_stack_name: <% $.generate_passwords != true %>\n\n ensure_passwords_exist:\n action: 
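The `container_required_check` and `templates_source_check` tasks in `create_deployment_plan` branch on two inputs: whether default templates are requested and whether a git `source_url` was given. The YAQL branch conditions can be mirrored in plain Python (hypothetical helper name; only the boolean logic is taken from the logged workflow):

```python
def next_tasks(use_default_templates, source_url):
    """Mirror the YAQL branch conditions in container_required_check:
    a new container is needed for default or git-sourced templates,
    otherwise the existing container is used and we go straight to
    create_plan."""
    tasks = []
    if use_default_templates or source_url:
        tasks.append("verify_container_doesnt_exist")
    if not use_default_templates and source_url is None:
        tasks.append("create_plan")
    return tasks
```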
tripleo.parameters.generate_passwords container=<% $.container %>\n on-success: add_root_stack_name\n on-error: ensure_passwords_exist_set_status_failed\n\n add_root_stack_name:\n action: tripleo.parameters.update\n input:\n container: <% $.container %>\n parameters:\n RootStackName: <% $.container %>\n on-success: container_images_prepare\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n container_images_prepare:\n description: >\n Populate all container image parameters with default values.\n action: tripleo.container_images.prepare container=<% $.container %>\n on-success: process_templates\n on-error: container_images_prepare_set_status_failed\n\n process_templates:\n action: tripleo.templates.process container=<% $.container %>\n on-success: set_status_success\n on-error: process_templates_set_status_failed\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: 'Plan created.'\n\n create_container_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_container).result %>\n\n clone_git_repo_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(clone_git_repo).result %>\n\n upload_templates_directory_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(upload_templates_directory).result %>\n\n upload_to_container_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(upload_default_templates).result %>\n\n ensure_passwords_exist_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(ensure_passwords_exist).result %>\n\n process_templates_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(process_templates).result %>\n\n container_images_prepare_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% 
task(container_images_prepare).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.create_deployment_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n update_deployment_plan:\n input:\n - container\n - source_url: null\n - queue_name: tripleo\n - generate_passwords: true\n - plan_environment: null\n tags:\n - tripleo-common-managed\n tasks:\n templates_source_check:\n on-success:\n - update_plan: <% $.source_url = null %>\n - clone_git_repo: <% $.source_url != null %>\n\n clone_git_repo:\n action: tripleo.git.clone container=<% $.container %> url=<% $.source_url %>\n on-success: upload_templates_directory\n on-error: clone_git_repo_set_status_failed\n\n upload_templates_directory:\n action: tripleo.templates.upload container=<% $.container %> templates_path=<% task(clone_git_repo).result %>\n on-success: create_swift_rings_backup_plan\n on-complete: cleanup_temporary_files\n on-error: upload_templates_directory_set_status_failed\n\n cleanup_temporary_files:\n action: tripleo.git.clean container=<% $.container %>\n\n create_swift_rings_backup_plan:\n workflow: tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan\n on-success: update_plan\n on-error: create_swift_rings_backup_plan_set_status_failed\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n use_default_templates: true\n\n update_plan:\n on-success:\n - ensure_passwords_exist: <% $.generate_passwords = true %>\n - container_images_prepare: <% $.generate_passwords != true %>\n\n ensure_passwords_exist:\n action: tripleo.parameters.generate_passwords container=<% $.container %>\n on-success: container_images_prepare\n on-error: ensure_passwords_exist_set_status_failed\n\n container_images_prepare:\n description: >\n Populate 
all container image parameters with default values.\n action: tripleo.container_images.prepare container=<% $.container %>\n on-success: process_templates\n on-error: container_images_prepare_set_status_failed\n\n process_templates:\n action: tripleo.templates.process container=<% $.container %>\n on-success:\n - set_status_success: <% $.plan_environment = null %>\n - upload_plan_environment: <% $.plan_environment != null %>\n on-error: process_templates_set_status_failed\n\n upload_plan_environment:\n action: tripleo.templates.upload_plan_environment container=<% $.container %> plan_environment=<% $.plan_environment %>\n on-success: set_status_success\n on-error: process_templates_set_status_failed\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: 'Plan updated.'\n\n create_swift_rings_backup_plan_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_swift_rings_backup_plan).result %>\n\n clone_git_repo_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(clone_git_repo).result %>\n\n upload_templates_directory_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(upload_templates_directory).result %>\n\n process_templates_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(process_templates).result %>\n\n ensure_passwords_exist_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(ensure_passwords_exist).result %>\n\n container_images_prepare_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(container_images_prepare).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.update_deployment_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', 
'') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n delete_deployment_plan:\n description: >\n Deletes a plan by deleting the container matching plan_name. It will\n not delete the plan if a stack exists with the same name.\n\n tags:\n - tripleo-common-managed\n\n input:\n - container: overcloud\n - queue_name: tripleo\n\n tasks:\n delete_plan:\n action: tripleo.plan.delete container=<% $.container %>\n on-complete: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n publish:\n status: SUCCESS\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.delete_deployment_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n\n get_passwords:\n description: Retrieves passwords for a given plan\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n verify_container_exists:\n action: swift.head_container container=<% $.container %>\n on-success: get_environment_passwords\n on-error: verify_container_set_status_failed\n\n get_environment_passwords:\n action: tripleo.parameters.get_passwords container=<% $.container %>\n on-success: get_passwords_set_status_success\n on-error: get_passwords_set_status_failed\n\n get_passwords_set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(get_environment_passwords).result %>\n\n get_passwords_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(get_environment_passwords).result %>\n\n verify_container_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(verify_container_exists).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% 
$.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.get_passwords\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n export_deployment_plan:\n description: Creates an export tarball for a given plan\n input:\n - plan\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n export_plan:\n action: tripleo.plan.export\n input:\n plan: <% $.plan %>\n delete_after: 3600\n exports_container: \"plan-exports\"\n on-success: create_tempurl\n on-error: export_plan_set_status_failed\n\n create_tempurl:\n action: tripleo.swift.tempurl\n on-success: set_status_success\n on-error: create_tempurl_set_status_failed\n input:\n container: \"plan-exports\"\n obj: \"<% $.plan %>.tar.gz\"\n valid: 3600\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(create_tempurl).result %>\n tempurl: <% task(create_tempurl).result %>\n\n export_plan_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(export_plan).result %>\n\n create_tempurl_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_tempurl).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.export_deployment_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n tempurl: <% $.get('tempurl', '') %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n get_deprecated_parameters:\n description: Gets the list of deprecated parameters in the whole of the plan including nested stack\n input:\n - container: overcloud\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_flatten_data:\n action: tripleo.parameters.get_flatten container=<% $.container %>\n on-success: 
get_deprecated_params\n on-error: set_status_failed_get_flatten_data\n publish:\n user_params: <% task().result.environment_parameters %>\n plan_params: <% task().result.heat_resource_tree.parameters.keys() %>\n parameter_groups: <% task().result.heat_resource_tree.resources.values().where( $.get('parameter_groups') ).select($.parameter_groups).flatten() %>\n\n get_deprecated_params:\n on-success: check_if_user_param_has_deprecated\n publish:\n deprecated_params: <% $.parameter_groups.where($.get('label') = 'deprecated').select($.parameters).flatten().distinct() %>\n\n check_if_user_param_has_deprecated:\n on-success: get_unused_params\n publish:\n deprecated_result: <% let(up => $.user_params) -> $.deprecated_params.select( dict('parameter' => $, 'deprecated' => true, 'user_defined' => $up.keys().contains($)) ) %>\n\n # Get the list of parameters, which are defined by user via environment files's parameter_default, but not part of the plan definition\n # It may be possible that the parameter will be used by a service, but the service is not part of the plan.\n # In such cases, the parameter will be reported as unused, care should be take to understand whether it is really unused or not.\n get_unused_params:\n on-success: send_message\n publish:\n unused_params: <% let(plan_params => $.plan_params) -> $.user_params.keys().where( not $plan_params.contains($) ) %>\n\n set_status_failed_get_flatten_data:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_flatten_data).result %>\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.get_deprecated_parameters\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n deprecated: <% $.get('deprecated_result', []) %>\n unused: <% $.get('unused_params', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n 
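The `get_deprecated_parameters` workflow above reduces to two set operations over the flattened plan data: collect every parameter named in a group labelled `deprecated` and flag those the user actually set, then list user-supplied parameters that no template in the plan declares. A rough Python equivalent (function name and input shapes are assumptions; the behaviour follows the YAQL expressions in the log):

```python
def analyze_parameters(user_params, plan_params, parameter_groups):
    """Mirror the YAQL: deprecated-parameter detection plus unused-parameter
    detection over the flattened heat resource tree."""
    # Every parameter mentioned in a group labelled 'deprecated', de-duplicated.
    deprecated = sorted({
        name
        for group in parameter_groups
        if group.get("label") == "deprecated"
        for name in group.get("parameters", [])
    })
    deprecated_result = [
        {"parameter": name, "deprecated": True,
         "user_defined": name in user_params}
        for name in deprecated
    ]
    # User-supplied values no plan parameter declares. As the logged comment
    # notes, these may still belong to a service that is not part of the plan,
    # so "unused" needs human review.
    unused = [name for name in user_params if name not in plan_params]
    return deprecated_result, unused
```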
publish_ui_logs_to_swift:\n description: >\n This workflow drains a zaqar queue, and publish its messages into a log\n file in swift. This workflow is called by cron trigger.\n\n input:\n - logging_queue_name: tripleo-ui-logging\n - logging_container: tripleo-ui-logs\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n # We're using a NoOp action to start the workflow. The recursive nature\n # of the workflow means that Mistral will refuse to execute it because it\n # doesn't know where to begin.\n start:\n on-success: get_messages\n\n get_messages:\n action: zaqar.claim_messages\n on-success:\n - format_messages: <% task().result.len() > 0 %>\n input:\n queue_name: <% $.logging_queue_name %>\n ttl: 60\n grace: 60\n publish:\n status: SUCCESS\n messages: <% task().result %>\n message_ids: <% task().result.select($._id) %>\n\n format_messages:\n action: tripleo.logging_to_swift.format_messages\n on-success: upload_to_swift\n input:\n messages: <% $.messages %>\n publish:\n status: SUCCESS\n formatted_messages: <% task().result %>\n\n upload_to_swift:\n action: tripleo.logging_to_swift.publish_ui_log_to_swift\n on-success: delete_messages\n input:\n logging_data: <% $.formatted_messages %>\n logging_container: <% $.logging_container %>\n publish:\n status: SUCCESS\n\n delete_messages:\n action: zaqar.delete_messages\n on-success: get_messages\n input:\n queue_name: <% $.logging_queue_name %>\n messages: <% $.message_ids %>\n publish:\n status: SUCCESS\n\n download_logs:\n description: Creates a tarball with logging data\n input:\n - queue_name: tripleo\n - logging_container: \"tripleo-ui-logs\"\n - downloads_container: \"tripleo-ui-logs-downloads\"\n - delete_after: 3600\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n publish_logs:\n workflow: tripleo.plan_management.v1.publish_ui_logs_to_swift\n on-success: prepare_log_download\n on-error: publish_logs_set_status_failed\n\n prepare_log_download:\n action: tripleo.logging_to_swift.prepare_log_download\n input:\n 
logging_container: <% $.logging_container %>\n downloads_container: <% $.downloads_container %>\n delete_after: <% $.delete_after %>\n on-success: create_tempurl\n on-error: download_logs_set_status_failed\n publish:\n filename: <% task().result %>\n\n create_tempurl:\n action: tripleo.swift.tempurl\n on-success: set_status_success\n on-error: create_tempurl_set_status_failed\n input:\n container: <% $.downloads_container %>\n obj: <% $.filename %>\n valid: 3600\n publish:\n tempurl: <% task().result %>\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(create_tempurl).result %>\n tempurl: <% task(create_tempurl).result %>\n\n publish_logs_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(publish_logs).result %>\n\n download_logs_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(prepare_log_download).result %>\n\n create_tempurl_set_status_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_tempurl).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.download_logs\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n tempurl: <% $.get('tempurl', '') %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n list_roles:\n description: Retrieve the roles_data.yaml and return a usable object\n\n input:\n - container: overcloud\n - roles_data_file: 'roles_data.yaml'\n - queue_name: tripleo\n\n output:\n roles_data: <% $.roles_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_roles_data:\n action: swift.get_object\n input:\n container: <% $.container %>\n obj: <% $.roles_data_file %>\n publish:\n roles_data: <% yaml_parse(task().result.last()) %>\n status: SUCCESS\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n 
message: <% task().result %>\n on-error: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.list_roles\n payload:\n status: <% $.status %>\n roles_data: <% $.get('roles_data', {}) %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n list_available_networks:\n input:\n - container\n - queue_name: tripleo\n\n output:\n available_networks: <% $.available_networks %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_network_file_names:\n action: swift.get_container\n input:\n container: <% $.container %>\n publish:\n network_names: <% task().result[1].where($.name.startsWith('networks/')).where($.name.endsWith('.yaml')).name %>\n on-success: get_network_files\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_network_files:\n with-items: network_name in <% $.network_names %>\n action: swift.get_object\n on-success: transform_output\n on-error: notify_zaqar\n input:\n container: <% $.container %>\n obj: <% $.network_name %>\n publish:\n status: SUCCESS\n available_yaml_networks: <% task().result.select($[1]) %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n transform_output:\n publish:\n status: SUCCESS\n available_networks: <% yaml_parse($.available_yaml_networks.join(\"\\n\")) %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-complete: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.list_available_networks\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n available_networks: <% $.get('available_networks', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n list_networks:\n input:\n - container: 
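`list_available_networks` above filters the Swift container listing down to YAML objects under the `networks/` prefix, fetches each one, and joins the bodies before a single `yaml_parse`. The filtering step can be mirrored in Python (object-listing shape is an assumption based on the Swift API's list-of-dicts response):

```python
def select_network_files(container_objects):
    """Keep object names under networks/ ending in .yaml, as the
    YAQL filter on the swift.get_container result does."""
    return [obj["name"] for obj in container_objects
            if obj["name"].startswith("networks/")
            and obj["name"].endswith(".yaml")]
```

`list_available_roles` later in the log applies the identical pattern with a `roles/` prefix.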
'overcloud'\n - network_data_file: 'network_data.yaml'\n - queue_name: tripleo\n\n output:\n network_data: <% $.network_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_networks:\n action: swift.get_object\n input:\n container: <% $.container %>\n obj: <% $.network_data_file %>\n on-success: notify_zaqar\n publish:\n network_data: <% yaml_parse(task().result.last()) %>\n status: SUCCESS\n message: <% task().result %>\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.list_networks\n payload:\n status: <% $.status %>\n network_data: <% $.get('network_data', {}) %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n validate_network_files:\n description: Validate network files exist\n input:\n - container: overcloud\n - network_data\n - queue_name: tripleo\n\n output:\n network_data: <% $.network_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_network_names:\n publish:\n network_names_lower: <% $.network_data.where($.containsKey('name_lower')).name_lower %>\n network_names: <% $.network_data.where(not $.containsKey('name_lower')).name %>\n on-success: validate_networks\n\n validate_networks:\n with-items: network in <% $.network_names_lower.concat($.network_names) %>\n action: swift.head_object\n input:\n container: <% $.container %>\n obj: network/<% $.network.toLower() %>.yaml\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.validate_network_files\n payload:\n status: <% $.status %>\n 
message: <% $.message %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n validate_networks:\n description: Validate network files were generated properly and exist\n input:\n - container: 'overcloud'\n - network_data_file: 'network_data.yaml'\n - queue_name: tripleo\n\n output:\n network_data: <% $.network_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_network_data:\n workflow: list_networks\n input:\n container: <% $.container %>\n network_data_file: <% $.network_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n network_data: <% task().result.network_data %>\n on-success: validate_networks\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error:\n notify_zaqar\n\n validate_networks:\n workflow: validate_network_files\n input:\n container: <% $.container %>\n network_data: <% $.network_data %>\n queue_name: <% $.queue_name %>\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.validate_networks\n payload:\n status: <% $.status %>\n network_data: <% $.get('network_data', {}) %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n validate_roles:\n description: Vaildate roles data exists and is parsable\n\n input:\n - container: overcloud\n - roles_data_file: 'roles_data.yaml'\n - queue_name: tripleo\n\n output:\n roles_data: <% $.roles_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_roles_data:\n workflow: list_roles\n input:\n container: <% $.container %>\n roles_data_file: <% $.roles_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n roles_data: <% task().result.roles_data %>\n status: SUCCESS\n on-success: 
notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error:\n notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.validate_networks\n payload:\n status: <% $.status %>\n roles_data: <% $.get('roles_data', '') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n _validate_networks_from_roles:\n description: Internal workflow for validating a network exists from a role\n\n input:\n - container: overcloud\n - defined_networks\n - networks_in_roles\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n validate_network_in_network_data:\n publish:\n networks_found: <% $.networks_in_roles.toSet().intersect($.defined_networks.toSet()) %>\n networks_not_found: <% $.networks_in_roles.toSet().difference($.defined_networks.toSet()) %>\n on-success:\n - network_not_found: <% $.networks_not_found %>\n - notify_zaqar: <% not $.networks_not_found %>\n\n network_not_found:\n publish:\n message: <% \"Some networks in roles are not defined, {0}\".format($.networks_not_found.join(', ')) %>\n status: FAILED\n on-success: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1._validate_networks_from_role\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n validate_roles_and_networks:\n description: Vaidate that roles and network data are valid\n\n input:\n - container: overcloud\n - roles_data_file: 'roles_data.yaml'\n - network_data_file: 'network_data.yaml'\n - queue_name: tripleo\n\n output:\n roles_data: <% $.roles_data %>\n network_data: <% $.network_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n 
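`_validate_networks_from_roles` above is pure set arithmetic: the networks referenced by roles must be a subset of the networks defined in `network_data.yaml`. A sketch of the same check in Python (raising instead of posting a FAILED message to Zaqar; the error text is taken from the logged workflow):

```python
def validate_networks_from_roles(defined_networks, networks_in_roles):
    """Fail if any network referenced by a role is not defined;
    otherwise return the networks that were found."""
    not_found = set(networks_in_roles) - set(defined_networks)
    if not_found:
        raise ValueError("Some networks in roles are not defined, {0}".format(
            ", ".join(sorted(not_found))))
    return set(networks_in_roles) & set(defined_networks)
```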
validate_network_data:\n workflow: validate_networks\n input:\n container: <% $.container %>\n network_data_file: <% $.network_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n network_data: <% task().result.network_data %>\n on-success: validate_roles_data\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n validate_roles_data:\n workflow: validate_roles\n input:\n container: <% $.container %>\n roles_data_file: <% $.roles_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n roles_data: <% task().result.roles_data %>\n role_networks_data: <% task().result.roles_data.networks %>\n networks_in_roles: <% task().result.roles_data.networks.flatten().distinct() %>\n on-success: validate_roles_and_networks\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n validate_roles_and_networks:\n workflow: _validate_networks_from_roles\n input:\n container: <% $.container %>\n defined_networks: <% $.network_data.name %>\n networks_in_roles: <% $.networks_in_roles %>\n queue_name: <% $.queue_name %>\n publish:\n status: SUCCESS\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result.message %>\n on-error: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.validate_roles_and_networks\n payload:\n status: <% $.status %>\n roles_data: <% $.get('roles_data', {}) %>\n network_data: <% $.get('network_data', {}) %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n list_available_roles:\n input:\n - container: overcloud\n - queue_name: tripleo\n\n output:\n available_roles: <% $.available_roles %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_role_file_names:\n action: swift.get_container\n input:\n container: <% $.container %>\n publish:\n role_names: <% 
task().result[1].where($.name.startsWith('roles/')).where($.name.endsWith('.yaml')).name %>\n on-success: get_role_files\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_role_files:\n with-items: role_name in <% $.role_names %>\n action: swift.get_object\n on-success: transform_output\n on-error: notify_zaqar\n input:\n container: <% $.container %>\n obj: <% $.role_name %>\n publish:\n status: SUCCESS\n available_yaml_roles: <% task().result.select($[1]) %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n transform_output:\n publish:\n status: SUCCESS\n available_roles: <% yaml_parse($.available_yaml_roles.join(\"\\n\")) %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-complete: notify_zaqar\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.list_available_roles\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n available_roles: <% $.get('available_roles', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n update_roles:\n description: >\n takes data in json format validates its contents and persists them in\n roles_data.yaml, after successful update, templates are regenerated.\n input:\n - container\n - roles\n - roles_data_file: 'roles_data.yaml'\n - replace_all: false\n - queue_name: tripleo\n tags:\n - tripleo-common-managed\n tasks:\n get_available_roles:\n workflow: list_available_roles\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name%>\n publish:\n available_roles: <% task().result.available_roles %>\n on-success: validate_input\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n validate_input:\n description: >\n validate the format of input (verify that each role in input has the\n required attributes set. 
check README in roles directory in t-h-t),\n validate that roles in input exist in roles directory in t-h-t\n action: tripleo.plan.validate_roles\n input:\n container: <% $.container %>\n roles: <% $.roles %>\n available_roles: <% $.available_roles %>\n on-success: get_network_data\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_network_data:\n workflow: list_networks\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n publish:\n network_data: <% task().result.network_data %>\n on-success: validate_network_names\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n validate_network_names:\n description: >\n validate that Network names assigned to Role exist in\n network-data.yaml object in Swift container\n workflow: _validate_networks_from_roles\n input:\n container: <% $.container %>\n defined_networks: <% $.network_data.name %>\n networks_in_roles: <% $.roles.networks.flatten().distinct() %>\n queue_name: <% $.queue_name %>\n on-success: get_current_roles\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result.message %>\n\n get_current_roles:\n workflow: list_roles\n input:\n container: <% $.container %>\n roles_data_file: <% $.roles_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n current_roles: <% task().result.roles_data %>\n on-success: update_roles_data\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n update_roles_data:\n description: >\n update roles_data.yaml object in Swift with roles from workflow input\n action: tripleo.plan.update_roles\n input:\n container: <% $.container %>\n roles: <% $.roles %>\n current_roles: <% $.current_roles %>\n replace_all: <% $.replace_all %>\n publish:\n updated_roles_data: <% task().result.roles %>\n on-success: update_roles_data_in_swift\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% 
task().result %>\n\n update_roles_data_in_swift:\n description: >\n update roles_data.yaml object in Swift with data from workflow input\n action: swift.put_object\n input:\n container: <% $.container %>\n obj: <% $.roles_data_file %>\n contents: <% yaml_dump($.updated_roles_data) %>\n on-success: regenerate_templates\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n regenerate_templates:\n action: tripleo.templates.process container=<% $.container %>\n on-success: get_updated_roles\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_updated_roles:\n workflow: list_roles\n input:\n container: <% $.container %>\n roles_data_file: <% $.roles_data_file %>\n publish:\n updated_roles: <% task().result.roles_data %>\n status: SUCCESS\n on-complete: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.roles.v1.update_roles\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n updated_roles: <% $.get('updated_roles', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n select_roles:\n description: >\n takes a list of role names as input and populates roles_data.yaml in\n container in Swift with respective roles from 'roles directory'\n input:\n - container\n - role_names\n - roles_data_file: 'roles_data.yaml'\n - replace_all: true\n - queue_name: tripleo\n tags:\n - tripleo-common-managed\n tasks:\n\n get_available_roles:\n workflow: list_available_roles\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n publish:\n available_roles: <% task().result.available_roles %>\n on-success: get_current_roles\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_current_roles:\n workflow: list_roles\n 
input:\n container: <% $.container %>\n roles_data_file: <% $.roles_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n current_roles: <% task().result.roles_data %>\n on-success: gather_roles\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n gather_roles:\n description: >\n for each role name from the input, check if it exists in\n roles_data.yaml, if yes, use that role definition, if not, get the\n role definition from roles directory. Use the gathered roles\n definitions as input to updateRolesWorkflow - this ensures\n configuration of the roles which are already in roles_data.yaml\n will not get overridden by data from roles directory\n action: tripleo.plan.gather_roles\n input:\n role_names: <% $.role_names %>\n current_roles: <% $.current_roles %>\n available_roles: <% $.available_roles %>\n publish:\n gathered_roles: <% task().result.gathered_roles %>\n on-success: call_update_roles_workflow\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n call_update_roles_workflow:\n workflow: update_roles\n input:\n container: <% $.container %>\n roles: <% $.gathered_roles %>\n roles_data_file: <% $.roles_data_file %>\n replace_all: <% $.replace_all %>\n queue_name: <% $.queue_name %>\n on-complete: notify_zaqar\n publish:\n selected_roles: <% task().result.updated_roles %>\n status: SUCCESS\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.plan_management.v1.select_roles\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n selected_roles: <% $.get('selected_roles', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.plan_management.v1", "tags": [], "created_at": "2018-06-26 05:45:16", "scope": "private", "project_id": 
"13835fbb8e0947a9b3fa174b9a22cdb9", "id": "d1c5bc9e-b39c-4748-b1b8-3e6ef3614339"} > >2018-06-26 11:15:17,085 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 11:15:17,086 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '--- >version: '2.0' >name: tripleo.support.v1 >description: TripleO support workflows > >workflows: > > collect_logs: > description: > > This workflow runs sosreport on the servers where their names match the > provided server_name input. The logs are stored in the provided sos_dir. > input: > - server_name > - sos_dir: /var/tmp/tripleo-sos > - sos_options: boot,cluster,hardware,kernel,memory,nfs,openstack,packagemanager,performance,services,storage,system,webserver,virt > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > collect_logs_on_servers: > workflow: tripleo.deployment.v1.deploy_on_servers > on-success: send_message > on-error: set_collect_logs_on_servers_failed > input: > server_name: <% $.server_name %> > config_name: 'run_sosreport' > config: | > #!/bin/bash > mkdir -p <% $.sos_dir %> > sosreport --batch \ > -p <% $.sos_options %> \ > --tmp-dir <% $.sos_dir %> > > set_collect_logs_on_servers_failed: > on-complete: > - send_message > publish: > type: tripleo.deployment.v1.fetch_logs > status: FAILED > message: <% task().result %> > > # status messaging > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: <% $.get('type', 'tripleo.support.v1.collect_logs') %> > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = 'FAILED' %> > > upload_logs: > description: > > This workflow uploads the sosreport files stored in the 
provided sos_dir > on the provided host (server_uuid) to a swift container on the undercloud > input: > - server_uuid > - server_name > - container > - sos_dir: /var/tmp/tripleo-sos > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > # actions > get_swift_information: > action: tripleo.swift.swift_information > on-success: do_log_upload > on-error: set_get_swift_information_failed > input: > container: <% $.container %> > publish: > container_url: <% task().result.container_url %> > auth_key: <% task().result.auth_key %> > > set_get_swift_information_failed: > on-complete: > - send_message > publish: > status: FAILED > message: <% task(get_swift_information).result %> > > do_log_upload: > action: tripleo.deployment.config > on-success: send_message > on-error: set_do_log_upload_failed > input: > server_id: <% $.server_uuid %> > name: "upload_logs" > config: | > #!/bin/bash > CONTAINER_URL="<% $.container_url %>" > TOKEN="<% $.auth_key %>" > SOS_DIR="<% $.sos_dir %>" > for FILE in $(find $SOS_DIR -type f); do > FILENAME=$(basename $FILE) > curl -X PUT -i -H "X-Auth-Token: $TOKEN" -T $FILE $CONTAINER_URL/$FILENAME > if [ $? -eq 0 ]; then > rm -f $FILE > fi > done > group: "script" > publish: > message: "Uploaded logs from <% $.server_name %>" > > set_do_log_upload_failed: > on-complete: > - send_message > publish: > status: FAILED > message: <% task(do_log_upload).result %> > > # status messaging > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: <% $.get('type', 'tripleo.support.v1.upload_logs') %> > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = 'FAILED' %> > > create_container: > description: > > This workflow is used to check if the container exists and creates it > if it does not exist.
> input: > - container > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > check_container: > action: swift.head_container container=<% $.container %> > on-success: send_message > on-error: create_container > > create_container: > action: swift.put_container > input: > container: <% $.container %> > headers: > x-container-meta-usage-tripleo: support > on-success: send_message > on-error: set_create_container_failed > > set_create_container_failed: > on-complete: > - send_message > publish: > type: tripleo.support.v1.create_container.create_container > status: FAILED > message: <% task(create_container).result %> > > # status messaging > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: <% $.get('type', 'tripleo.support.v1.create_container') %> > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = 'FAILED' %> > > delete_container: > description: > > This workflow deletes all the objects in a provided swift container and > then removes the container itself from the undercloud. 
> input: > - container > - concurrency: 5 > - timeout: 900 > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > # actions > check_container: > action: swift.head_container container=<% $.container %> > on-success: list_objects > on-error: set_check_container_failure > > set_check_container_failure: > on-complete: send_message > publish: > status: FAILED > type: tripleo.support.v1.delete_container.check_container > message: <% task(check_container).result %> > > list_objects: > action: swift.get_container container=<% $.container %> > on-success: delete_objects > on-error: set_list_objects_failure > publish: > log_objects: <% task().result[1] %> > > set_list_objects_failure: > on-complete: send_message > publish: > status: FAILED > type: tripleo.support.v1.delete_container.list_objects > message: <% task(list_objects).result %> > > delete_objects: > action: swift.delete_object > concurrency: <% $.concurrency %> > timeout: <% $.timeout %> > with-items: object in <% $.log_objects %> > input: > container: <% $.container %> > obj: <% $.object.name %> > on-success: remove_container > on-error: set_delete_objects_failure > > set_delete_objects_failure: > on-complete: send_message > publish: > status: FAILED > type: tripleo.support.v1.delete_container.delete_objects > message: <% task(delete_objects).result %> > > remove_container: > action: swift.delete_container container=<% $.container %> > on-success: send_message > on-error: set_remove_container_failure > > set_remove_container_failure: > on-complete: send_message > publish: > status: FAILED > type: tripleo.support.v1.delete_container.remove_container > message: <% task(remove_container).result %> > > # status messaging > send_message: > action: zaqar.queue_post > wait-before: 5 > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: <% $.get('type', 'tripleo.support.v1.delete_container') %> > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% 
$.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = 'FAILED' %> > > fetch_logs: > description: > > This workflow creates a container on the undercloud, executes the log > collection on the servers whose names match the provided server_name, and > executes the log upload process on all the servers to the container on > the undercloud. > input: > - server_name > - container > - concurrency: 5 > - timeout: 1800 > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > # actions > create_container: > workflow: tripleo.support.v1.create_container > on-success: get_servers_matching > on-error: set_create_container_failed > input: > container: <% $.container %> > queue_name: <% $.queue_name %> > > set_create_container_failed: > on-complete: send_message > publish: > type: tripleo.support.v1.fetch_logs.create_container > status: FAILED > message: <% task(create_container).result %> > > get_servers_matching: > action: nova.servers_list > on-success: collect_logs_on_servers > publish: > servers_with_name: <% task().result._info.where($.name.indexOf(execution().input.server_name) > -1) %> > > collect_logs_on_servers: > workflow: tripleo.support.v1.collect_logs > timeout: <% $.timeout %> > on-success: upload_logs_on_servers > on-error: set_collect_logs_on_servers_failed > input: > server_name: <% $.server_name %> > queue_name: <% $.queue_name %> > > set_collect_logs_on_servers_failed: > on-complete: send_message > publish: > type: tripleo.support.v1.fetch_logs.collect_logs_on_servers > status: FAILED > message: <% task(collect_logs_on_servers).result %> > > upload_logs_on_servers: > on-success: send_message > on-error: set_upload_logs_on_servers_failed > with-items: server in <% $.servers_with_name %> > concurrency: <% $.concurrency %> > workflow: tripleo.support.v1.upload_logs > input: > server_name: <% $.server.name %> > server_uuid: <% $.server.id %> > container: <% $.container %> > queue_name: <% $.queue_name 
%> > > set_upload_logs_on_servers_failed: > on-complete: send_message > publish: > type: tripleo.support.v1.fetch_logs.upload_logs > status: FAILED > message: <% task(upload_logs_on_servers).result %> > > # status messaging > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: <% $.get('type', 'tripleo.support.v1.fetch_logs') %> > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = 'FAILED' %> >' >2018-06-26 11:15:17,941 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 12000 >2018-06-26 11:15:17,943 DEBUG: RESP: [201] Content-Length: 12000 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:17 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.support.v1\ndescription: TripleO support workflows\n\nworkflows:\n\n collect_logs:\n description: >\n This workflow runs sosreport on the servers where their names match the\n provided server_name input. 
The logs are stored in the provided sos_dir.\n input:\n - server_name\n - sos_dir: /var/tmp/tripleo-sos\n - sos_options: boot,cluster,hardware,kernel,memory,nfs,openstack,packagemanager,performance,services,storage,system,webserver,virt\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n collect_logs_on_servers:\n workflow: tripleo.deployment.v1.deploy_on_servers\n on-success: send_message\n on-error: set_collect_logs_on_servers_failed\n input:\n server_name: <% $.server_name %>\n config_name: 'run_sosreport'\n config: |\n #!/bin/bash\n mkdir -p <% $.sos_dir %>\n sosreport --batch \\\n -p <% $.sos_options %> \\\n --tmp-dir <% $.sos_dir %>\n\n set_collect_logs_on_servers_failed:\n on-complete:\n - send_message\n publish:\n type: tripleo.deployment.v1.fetch_logs\n status: FAILED\n message: <% task().result %>\n\n # status messaging\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.collect_logs') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n\n upload_logs:\n description: >\n This workflow uploads the sosreport files stored in the provided sos_dir\n on the provided host (server_uuid) to a swift container on the undercloud\n input:\n - server_uuid\n - server_name\n - container\n - sos_dir: /var/tmp/tripleo-sos\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n # actions\n get_swift_information:\n action: tripleo.swift.swift_information\n on-success: do_log_upload\n on-error: set_get_swift_information_failed\n input:\n container: <% $.container %>\n publish:\n container_url: <% task().result.container_url %>\n auth_key: <% task().result.auth_key %>\n\n set_get_swift_information_failed:\n on-complete:\n - send_message\n publish:\n status: FAILED\n message: <%
task(get_swift_information).result %>\n\n do_log_upload:\n action: tripleo.deployment.config\n on-success: send_message\n on-error: set_do_log_upload_failed\n input:\n server_id: <% $.server_uuid %>\n name: \"upload_logs\"\n config: |\n #!/bin/bash\n CONTAINER_URL=\"<% $.container_url %>\"\n TOKEN=\"<% $.auth_key %>\"\n SOS_DIR=\"<% $.sos_dir %>\"\n for FILE in $(find $SOS_DIR -type f); do\n FILENAME=$(basename $FILE)\n curl -X PUT -i -H \"X-Auth-Token: $TOKEN\" -T $FILE $CONTAINER_URL/$FILENAME\n if [ $? -eq 0 ]; then\n rm -f $FILE\n fi\n done\n group: \"script\"\n publish:\n message: \"Uploaded logs from <% $.server_name %>\"\n\n set_do_log_upload_failed:\n on-complete:\n - send_message\n publish:\n status: FAILED\n message: <% task(do_log_upload).result %>\n\n # status messaging\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.upload_logs') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n\n create_container:\n description: >\n This workflow is used to check if the container exists and creates it\n if it does not exist.\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n check_container:\n action: swift.head_container container=<% $.container %>\n on-success: send_message\n on-error: create_container\n\n create_container:\n action: swift.put_container\n input:\n container: <% $.container %>\n headers:\n x-container-meta-usage-tripleo: support\n on-success: send_message\n on-error: set_create_container_failed\n\n set_create_container_failed:\n on-complete:\n - send_message\n publish:\n type: tripleo.support.v1.create_container.create_container\n status: FAILED\n message: <% task(create_container).result %>\n\n # status messaging\n send_message:\n action:
zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.create_container') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n\n delete_container:\n description: >\n This workflow deletes all the objects in a provided swift container and\n then removes the container itself from the undercloud.\n input:\n - container\n - concurrency: 5\n - timeout: 900\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n # actions\n check_container:\n action: swift.head_container container=<% $.container %>\n on-success: list_objects\n on-error: set_check_container_failure\n\n set_check_container_failure:\n on-complete: send_message\n publish:\n status: FAILED\n type: tripleo.support.v1.delete_container.check_container\n message: <% task(check_container).result %>\n\n list_objects:\n action: swift.get_container container=<% $.container %>\n on-success: delete_objects\n on-error: set_list_objects_failure\n publish:\n log_objects: <% task().result[1] %>\n\n set_list_objects_failure:\n on-complete: send_message\n publish:\n status: FAILED\n type: tripleo.support.v1.delete_container.list_objects\n message: <% task(list_objects).result %>\n\n delete_objects:\n action: swift.delete_object\n concurrency: <% $.concurrency %>\n timeout: <% $.timeout %>\n with-items: object in <% $.log_objects %>\n input:\n container: <% $.container %>\n obj: <% $.object.name %>\n on-success: remove_container\n on-error: set_delete_objects_failure\n\n set_delete_objects_failure:\n on-complete: send_message\n publish:\n status: FAILED\n type: tripleo.support.v1.delete_container.delete_objects\n message: <% task(delete_objects).result %>\n\n remove_container:\n action: swift.delete_container container=<% $.container %>\n on-success: send_message\n on-error: 
set_remove_container_failure\n\n set_remove_container_failure:\n on-complete: send_message\n publish:\n status: FAILED\n type: tripleo.support.v1.delete_container.remove_container\n message: <% task(remove_container).result %>\n\n # status messaging\n send_message:\n action: zaqar.queue_post\n wait-before: 5\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.delete_container') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n\n fetch_logs:\n description: >\n This workflow creates a container on the undercloud, executes the log\n collection on the servers whose names match the provided server_name, and\n executes the log upload process on all the servers to the container on\n the undercloud.\n input:\n - server_name\n - container\n - concurrency: 5\n - timeout: 1800\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n # actions\n create_container:\n workflow: tripleo.support.v1.create_container\n on-success: get_servers_matching\n on-error: set_create_container_failed\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n\n set_create_container_failed:\n on-complete: send_message\n publish:\n type: tripleo.support.v1.fetch_logs.create_container\n status: FAILED\n message: <% task(create_container).result %>\n\n get_servers_matching:\n action: nova.servers_list\n on-success: collect_logs_on_servers\n publish:\n servers_with_name: <% task().result._info.where($.name.indexOf(execution().input.server_name) > -1) %>\n\n collect_logs_on_servers:\n workflow: tripleo.support.v1.collect_logs\n timeout: <% $.timeout %>\n on-success: upload_logs_on_servers\n on-error: set_collect_logs_on_servers_failed\n input:\n server_name: <% $.server_name %>\n queue_name: <% $.queue_name %>\n\n set_collect_logs_on_servers_failed:\n 
on-complete: send_message\n publish:\n type: tripleo.support.v1.fetch_logs.collect_logs_on_servers\n status: FAILED\n message: <% task(collect_logs_on_servers).result %>\n\n upload_logs_on_servers:\n on-success: send_message\n on-error: set_upload_logs_on_servers_failed\n with-items: server in <% $.servers_with_name %>\n concurrency: <% $.concurrency %>\n workflow: tripleo.support.v1.upload_logs\n input:\n server_name: <% $.server.name %>\n server_uuid: <% $.server.id %>\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n\n set_upload_logs_on_servers_failed:\n on-complete: send_message\n publish:\n type: tripleo.support.v1.fetch_logs.upload_logs\n status: FAILED\n message: <% task(upload_logs_on_servers).result %>\n\n # status messaging\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: <% $.get('type', 'tripleo.support.v1.fetch_logs') %>\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n", "name": "tripleo.support.v1", "tags": [], "created_at": "2018-06-26 05:45:17", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "bfc117fb-b80e-403f-8552-d75b6bd4789b"} > >2018-06-26 11:15:17,943 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 11:15:17,944 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '--- >version: '2.0' >name: tripleo.deployment.v1 >description: TripleO deployment workflows > >workflows: > > deploy_on_server: > > input: > - server_uuid > - server_name > - config > - config_name > - group > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > deploy_config: > action: 
tripleo.deployment.config > on-complete: send_message > input: > server_id: <% $.server_uuid %> > name: <% $.config_name %> > config: <% $.config %> > group: <% $.group %> > publish: > stdout: <% task().result.deploy_stdout %> > stderr: <% task().result.deploy_stderr %> > status_code: <% task().result.deploy_status_code %> > publish-on-error: > status: FAILED > message: <% task().result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.deployment.v1.deploy_on_server > payload: > status: <% $.get("status", "SUCCESS") %> > message: <% $.get("message", "") %> > server_uuid: <% $.server_uuid %> > server_name: <% $.server_name %> > config_name: <% $.config_name %> > status_code: <% $.get("status_code", "") %> > stdout: <% $.get("stdout", "") %> > stderr: <% $.get("stderr", "") %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > deploy_on_servers: > > input: > - server_name > - config_name > - config > - group: script > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > check_if_all_servers: > on-success: > - get_servers_matching: <% $.server_name != "all" %> > - get_all_servers: <% $.server_name = "all" %> > > get_servers_matching: > action: nova.servers_list > on-success: deploy_on_servers > publish: > servers_with_name: <% task().result._info.where($.name.indexOf(execution().input.server_name) > -1) %> > > get_all_servers: > action: nova.servers_list > on-success: deploy_on_servers > publish: > servers_with_name: <% task().result._info %> > > deploy_on_servers: > on-success: send_success_message > on-error: send_failed_message > with-items: server in <% $.servers_with_name %> > workflow: tripleo.deployment.v1.deploy_on_server > input: > server_name: <% $.server.name %> > server_uuid: <% $.server.id %> > config: <% $.config %> > config_name: <% $.config_name %> > group: <% $.group %> > queue_name: <% 
$.queue_name %> > > send_success_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.deployment.v1.deploy_on_servers > payload: > status: SUCCESS > execution: <% execution() %> > > send_failed_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.deployment.v1.deploy_on_servers > payload: > status: FAILED > message: <% task(deploy_on_servers).result %> > execution: <% execution() %> > on-success: fail > > deploy_plan: > > description: > > Deploy the overcloud for a plan. > > input: > - container > - run_validations: False > - timeout: 240 > - skip_deploy_identifier: False > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > add_validation_ssh_key: > workflow: tripleo.validations.v1.add_validation_ssh_key_parameter > input: > container: <% $.container %> > queue_name: <% $.queue_name %> > on-complete: > - run_validations: <% $.run_validations %> > - create_swift_rings_backup_plan: <% not $.run_validations %> > > run_validations: > workflow: tripleo.validations.v1.run_groups > input: > group_names: > - 'pre-deployment' > plan: <% $.container %> > queue_name: <% $.queue_name %> > on-success: create_swift_rings_backup_plan > on-error: set_validations_failed > > set_validations_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(run_validations).result %> > > create_swift_rings_backup_plan: > workflow: tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan > on-success: cell_v2_discover_hosts > on-error: create_swift_rings_backup_plan_set_status_failed > input: > container: <% $.container %> > queue_name: <% $.queue_name %> > use_default_templates: true > > cell_v2_discover_hosts: > on-success: deploy > on-error: cell_v2_discover_hosts_failed > action: tripleo.baremetal.cell_v2_discover_hosts > > cell_v2_discover_hosts_failed: > 
on-success: send_message > publish: > status: FAILED > message: <% task(cell_v2_discover_hosts).result %> > > deploy: > action: tripleo.deployment.deploy > input: > timeout: <% $.timeout %> > container: <% $.container %> > skip_deploy_identifier: <% $.skip_deploy_identifier %> > on-success: send_message > on-error: set_deployment_failed > > create_swift_rings_backup_plan_set_status_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(create_swift_rings_backup_plan).result %> > > set_deployment_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(deploy).result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.deployment.v1.deploy_plan > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > get_horizon_url: > > description: > > Retrieve the Horizon URL from the Overcloud stack. 
> > input: > - stack: overcloud > - queue_name: tripleo > > tags: > - tripleo-common-managed > > output: > horizon_url: <% $.horizon_url %> > > tasks: > get_horizon_url: > action: heat.stacks_get > input: > stack_id: <% $.stack %> > publish: > horizon_url: <% task().result.outputs.where($.output_key = "EndpointMap").output_value.HorizonPublic.uri.single() %> > on-success: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > notify_zaqar: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.deployment.v1.get_horizon_url > payload: > horizon_url: <% $.get('horizon_url', '') %> > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > config_download_deploy: > > description: > > Configure the overcloud with config-download. > > input: > - timeout: 240 > - queue_name: tripleo > - plan_name: overcloud > - work_dir: /var/lib/mistral > - verbosity: 1 > > tags: > - tripleo-common-managed > > tasks: > > get_config: > action: tripleo.config.get_overcloud_config > input: > container: <% $.get('plan_name') %> > on-success: download_config > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > download_config: > action: tripleo.config.download_config > input: > work_dir: <% $.get('work_dir') %>/<% execution().id %> > on-success: send_msg_config_download > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > send_msg_config_download: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.deployment.v1.config_download > payload: > status: <% $.get('status', 'RUNNING') %> > message: Config downloaded at <% $.get('work_dir') %>/<% execution().id %> > execution: <% execution() %> > on-success: get_private_key > > get_private_key: > 
action: tripleo.validations.get_privkey > publish: > private_key: <% task().result %> > on-success: generate_inventory > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > generate_inventory: > action: tripleo.ansible-generate-inventory > input: > ansible_ssh_user: tripleo-admin > work_dir: <% $.get('work_dir') %>/<% execution().id %> > plan_name: <% $.get('plan_name') %> > publish: > inventory: <% task().result %> > on-success: send_msg_generate_inventory > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > send_msg_generate_inventory: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.deployment.v1.config_download > payload: > status: <% $.get('status', 'RUNNING') %> > message: Inventory generated at <% $.get('inventory') %> > execution: <% execution() %> > on-success: send_msg_run_ansible > > send_msg_run_ansible: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.deployment.v1.config_download > payload: > status: <% $.get('status', 'RUNNING') %> > message: > > Running ansible playbook at <% $.get('work_dir') %>/<% execution().id %>/deploy_steps_playbook.yaml. > See log file at <% $.get('work_dir') %>/<% execution().id %>/ansible.log for progress. > ... 
> execution: <% execution() %> > on-success: run_ansible > > run_ansible: > action: tripleo.ansible-playbook > input: > inventory: <% $.inventory %> > playbook: <% $.get('work_dir') %>/<% execution().id %>/deploy_steps_playbook.yaml > remote_user: tripleo-admin > ssh_extra_args: '-o StrictHostKeyChecking=no' > ssh_private_key: <% $.private_key %> > use_openstack_credentials: true > verbosity: <% $.get('verbosity') %> > become: true > timeout: <% $.timeout %> > work_dir: <% $.get('work_dir') %>/<% execution().id %> > queue_name: <% $.queue_name %> > reproduce_command: true > trash_output: true > publish: > log_path: <% task(run_ansible).result.get('log_path') %> > on-success: > - ansible_passed: <% task().result.returncode = 0 %> > - ansible_failed: <% task().result.returncode != 0 %> > on-error: send_message > publish-on-error: > status: FAILED > message: Ansible failed, check log at <% $.get('work_dir') %>/<% execution().id %>/ansible.log. > > ansible_passed: > on-success: send_message > publish: > status: SUCCESS > message: Ansible passed. > > ansible_failed: > on-success: send_message > publish: > status: FAILED > message: Ansible failed, check log at <% $.get('work_dir') %>/<% execution().id %>/ansible.log. 
> > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.deployment.v1.config_download > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> >' >2018-06-26 11:15:18,625 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 13556 >2018-06-26 11:15:18,626 DEBUG: RESP: [201] Content-Length: 13556 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:18 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.deployment.v1\ndescription: TripleO deployment workflows\n\nworkflows:\n\n deploy_on_server:\n\n input:\n - server_uuid\n - server_name\n - config\n - config_name\n - group\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n deploy_config:\n action: tripleo.deployment.config\n on-complete: send_message\n input:\n server_id: <% $.server_uuid %>\n name: <% $.config_name %>\n config: <% $.config %>\n group: <% $.group %>\n publish:\n stdout: <% task().result.deploy_stdout %>\n stderr: <% task().result.deploy_stderr %>\n status_code: <% task().result.deploy_status_code %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.deploy_on_server\n payload:\n status: <% $.get(\"status\", \"SUCCESS\") %>\n message: <% $.get(\"message\", \"\") %>\n server_uuid: <% $.server_uuid %>\n server_name: <% $.server_name %>\n config_name: <% $.config_name %>\n status_code: <% $.get(\"status_code\", \"\") %>\n stdout: <% $.get(\"stdout\", \"\") %>\n stderr: <% $.get(\"stderr\", \"\") %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n deploy_on_servers:\n\n input:\n - server_name\n - 
config_name\n - config\n - group: script\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n check_if_all_servers:\n on-success:\n - get_servers_matching: <% $.server_name != \"all\" %>\n - get_all_servers: <% $.server_name = \"all\" %>\n\n get_servers_matching:\n action: nova.servers_list\n on-success: deploy_on_servers\n publish:\n servers_with_name: <% task().result._info.where($.name.indexOf(execution().input.server_name) > -1) %>\n\n get_all_servers:\n action: nova.servers_list\n on-success: deploy_on_servers\n publish:\n servers_with_name: <% task().result._info %>\n\n deploy_on_servers:\n on-success: send_success_message\n on-error: send_failed_message\n with-items: server in <% $.servers_with_name %>\n workflow: tripleo.deployment.v1.deploy_on_server\n input:\n server_name: <% $.server.name %>\n server_uuid: <% $.server.id %>\n config: <% $.config %>\n config_name: <% $.config_name %>\n group: <% $.group %>\n queue_name: <% $.queue_name %>\n\n send_success_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.deploy_on_servers\n payload:\n status: SUCCESS\n execution: <% execution() %>\n\n send_failed_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.deploy_on_servers\n payload:\n status: FAILED\n message: <% task(deploy_on_servers).result %>\n execution: <% execution() %>\n on-success: fail\n\n deploy_plan:\n\n description: >\n Deploy the overcloud for a plan.\n\n input:\n - container\n - run_validations: False\n - timeout: 240\n - skip_deploy_identifier: False\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n add_validation_ssh_key:\n workflow: tripleo.validations.v1.add_validation_ssh_key_parameter\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n on-complete:\n - run_validations: 
<% $.run_validations %>\n - create_swift_rings_backup_plan: <% not $.run_validations %>\n\n run_validations:\n workflow: tripleo.validations.v1.run_groups\n input:\n group_names:\n - 'pre-deployment'\n plan: <% $.container %>\n queue_name: <% $.queue_name %>\n on-success: create_swift_rings_backup_plan\n on-error: set_validations_failed\n\n set_validations_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(run_validations).result %>\n\n create_swift_rings_backup_plan:\n workflow: tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan\n on-success: cell_v2_discover_hosts\n on-error: create_swift_rings_backup_plan_set_status_failed\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n use_default_templates: true\n\n cell_v2_discover_hosts:\n on-success: deploy\n on-error: cell_v2_discover_hosts_failed\n action: tripleo.baremetal.cell_v2_discover_hosts\n\n cell_v2_discover_hosts_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(cell_v2_discover_hosts).result %>\n\n deploy:\n action: tripleo.deployment.deploy\n input:\n timeout: <% $.timeout %>\n container: <% $.container %>\n skip_deploy_identifier: <% $.skip_deploy_identifier %>\n on-success: send_message\n on-error: set_deployment_failed\n\n create_swift_rings_backup_plan_set_status_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(create_swift_rings_backup_plan).result %>\n\n set_deployment_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(deploy).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.deploy_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n get_horizon_url:\n\n description: >\n 
Retrieve the Horizon URL from the Overcloud stack.\n\n input:\n - stack: overcloud\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n output:\n horizon_url: <% $.horizon_url %>\n\n tasks:\n get_horizon_url:\n action: heat.stacks_get\n input:\n stack_id: <% $.stack %>\n publish:\n horizon_url: <% task().result.outputs.where($.output_key = \"EndpointMap\").output_value.HorizonPublic.uri.single() %>\n on-success: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.get_horizon_url\n payload:\n horizon_url: <% $.get('horizon_url', '') %>\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n config_download_deploy:\n\n description: >\n Configure the overcloud with config-download.\n\n input:\n - timeout: 240\n - queue_name: tripleo\n - plan_name: overcloud\n - work_dir: /var/lib/mistral\n - verbosity: 1\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n get_config:\n action: tripleo.config.get_overcloud_config\n input:\n container: <% $.get('plan_name') %>\n on-success: download_config\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n download_config:\n action: tripleo.config.download_config\n input:\n work_dir: <% $.get('work_dir') %>/<% execution().id %>\n on-success: send_msg_config_download\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n send_msg_config_download:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.config_download\n payload:\n status: <% $.get('status', 'RUNNING') %>\n message: Config downloaded at <% $.get('work_dir') %>/<% execution().id %>\n execution: <% execution() 
%>\n on-success: get_private_key\n\n get_private_key:\n action: tripleo.validations.get_privkey\n publish:\n private_key: <% task().result %>\n on-success: generate_inventory\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n generate_inventory:\n action: tripleo.ansible-generate-inventory\n input:\n ansible_ssh_user: tripleo-admin\n work_dir: <% $.get('work_dir') %>/<% execution().id %>\n plan_name: <% $.get('plan_name') %>\n publish:\n inventory: <% task().result %>\n on-success: send_msg_generate_inventory\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n send_msg_generate_inventory:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.config_download\n payload:\n status: <% $.get('status', 'RUNNING') %>\n message: Inventory generated at <% $.get('inventory') %>\n execution: <% execution() %>\n on-success: send_msg_run_ansible\n\n send_msg_run_ansible:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.config_download\n payload:\n status: <% $.get('status', 'RUNNING') %>\n message: >\n Running ansible playbook at <% $.get('work_dir') %>/<% execution().id %>/deploy_steps_playbook.yaml.\n See log file at <% $.get('work_dir') %>/<% execution().id %>/ansible.log for progress.\n ...\n execution: <% execution() %>\n on-success: run_ansible\n\n run_ansible:\n action: tripleo.ansible-playbook\n input:\n inventory: <% $.inventory %>\n playbook: <% $.get('work_dir') %>/<% execution().id %>/deploy_steps_playbook.yaml\n remote_user: tripleo-admin\n ssh_extra_args: '-o StrictHostKeyChecking=no'\n ssh_private_key: <% $.private_key %>\n use_openstack_credentials: true\n verbosity: <% $.get('verbosity') %>\n become: true\n timeout: <% $.timeout %>\n work_dir: <% $.get('work_dir') %>/<% execution().id %>\n queue_name: <% $.queue_name %>\n 
reproduce_command: true\n trash_output: true\n publish:\n log_path: <% task(run_ansible).result.get('log_path') %>\n on-success:\n - ansible_passed: <% task().result.returncode = 0 %>\n - ansible_failed: <% task().result.returncode != 0 %>\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: Ansible failed, check log at <% $.get('work_dir') %>/<% execution().id %>/ansible.log.\n\n ansible_passed:\n on-success: send_message\n publish:\n status: SUCCESS\n message: Ansible passed.\n\n ansible_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: Ansible failed, check log at <% $.get('work_dir') %>/<% execution().id %>/ansible.log.\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.deployment.v1.config_download\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.deployment.v1", "tags": [], "created_at": "2018-06-26 05:45:18", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "744d7589-0dc4-42af-8153-5004309977a7"} > >2018-06-26 11:15:18,626 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 11:15:18,627 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '--- >version: '2.0' >name: tripleo.baremetal.v1 >description: TripleO Baremetal Workflows > >workflows: > > set_node_state: > input: > - node_uuid > - state_action > - target_state > - error_states: > # The default includes all failure states, even unused by TripleO. 
> - 'error' > - 'adopt failed' > - 'clean failed' > - 'deploy failed' > - 'inspect failed' > - 'rescue failed' > > tags: > - tripleo-common-managed > > tasks: > > set_provision_state: > on-success: wait_for_provision_state > on-error: set_provision_state_failed > action: ironic.node_set_provision_state node_uuid=<% $.node_uuid %> state=<% $.state_action %> > > set_provision_state_failed: > publish: > message: <% task(set_provision_state).result %> > on-complete: fail > > wait_for_provision_state: > action: ironic.node_get > input: > node_id: <% $.node_uuid %> > fields: ['provision_state', 'last_error'] > timeout: 1200 #20 minutes > retry: > delay: 3 > count: 400 > continue-on: <% not task().result.provision_state in [$.target_state] + $.error_states %> > on-complete: > - state_not_reached: <% task().result.provision_state != $.target_state %> > > state_not_reached: > publish: > message: >- > Node <% $.node_uuid %> did not reach state "<% $.target_state %>", > the state is "<% task(wait_for_provision_state).result.provision_state %>", > error: <% task(wait_for_provision_state).result.last_error %> > on-complete: fail > > output-on-error: > result: <% $.message %> > > set_power_state: > input: > - node_uuid > - state_action > - target_state > - error_state: 'error' > > tags: > - tripleo-common-managed > > tasks: > > set_power_state: > on-success: wait_for_power_state > on-error: set_power_state_failed > action: ironic.node_set_power_state node_id=<% $.node_uuid %> state=<% $.state_action %> > > set_power_state_failed: > publish: > message: <% task(set_power_state).result %> > on-complete: fail > > wait_for_power_state: > action: ironic.node_get > input: > node_id: <% $.node_uuid %> > fields: ['power_state', 'last_error'] > timeout: 120 #2 minutes > retry: > delay: 6 > count: 20 > continue-on: <% not task().result.power_state in [$.target_state, $.error_state] %> > on-complete: > - state_not_reached: <% task().result.power_state != $.target_state %> > > 
state_not_reached: > publish: > message: >- > Node <% $.node_uuid %> did not reach power state "<% $.target_state %>", > the state is "<% task(wait_for_power_state).result.power_state %>", > error: <% task(wait_for_power_state).result.last_error %> > on-complete: fail > > output-on-error: > result: <% $.message %> > > manual_cleaning: > input: > - node_uuid > - clean_steps > - timeout: 7200 # 2 hours (cleaning can take really long) > - retry_delay: 10 > - retry_count: 720 > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > set_provision_state: > on-success: wait_for_provision_state > on-error: set_provision_state_failed > action: ironic.node_set_provision_state node_uuid=<% $.node_uuid %> state='clean' cleansteps=<% $.clean_steps %> > > set_provision_state_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(set_provision_state).result %> > > wait_for_provision_state: > on-success: send_message > action: ironic.node_get node_id=<% $.node_uuid %> > timeout: <% $.timeout %> > retry: > delay: <% $.retry_delay %> > count: <% $.retry_count %> > continue-on: <% task().result.provision_state != 'manageable' %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.baremetal.v1.manual_cleaning > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > validate_nodes: > description: Validate nodes JSON > > input: > - nodes_json > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > validate_nodes: > action: tripleo.baremetal.validate_nodes > on-success: send_message > on-error: validation_failed > input: > nodes_json: <% $.nodes_json %> > > validation_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(validate_nodes).result %> > > send_message: > action: 
zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.baremetal.v1.validate_nodes > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > register_or_update: > description: Take nodes JSON and create nodes in a "manageable" state > > input: > - nodes_json > - remove: False > - queue_name: tripleo > - kernel_name: null > - ramdisk_name: null > - instance_boot_option: local > - initial_state: manageable > > tags: > - tripleo-common-managed > > tasks: > > validate_input: > workflow: tripleo.baremetal.v1.validate_nodes > on-success: register_or_update_nodes > on-error: validation_failed > input: > nodes_json: <% $.nodes_json %> > queue_name: <% $.queue_name %> > > validation_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(validate_input).result %> > registered_nodes: [] > > register_or_update_nodes: > action: tripleo.baremetal.register_or_update_nodes > on-success: > - set_nodes_managed: <% $.initial_state != "enroll" %> > - send_message: <% $.initial_state = "enroll" %> > on-error: set_status_failed_register_or_update_nodes > input: > nodes_json: <% $.nodes_json %> > remove: <% $.remove %> > kernel_name: <% $.kernel_name %> > ramdisk_name: <% $.ramdisk_name %> > instance_boot_option: <% $.instance_boot_option %> > publish: > registered_nodes: <% task().result %> > new_nodes: <% task().result.where($.provision_state = 'enroll') %> > > set_status_failed_register_or_update_nodes: > on-success: send_message > publish: > status: FAILED > message: <% task(register_or_update_nodes).result %> > registered_nodes: [] > > set_nodes_managed: > on-success: > - set_nodes_available: <% $.initial_state = "available" %> > - send_message: <% $.initial_state != "available" %> > on-error: set_status_failed_nodes_managed > workflow: tripleo.baremetal.v1.manage > 
input: > node_uuids: <% $.new_nodes.uuid %> > queue_name: <% $.queue_name %> > publish: > status: SUCCESS > message: <% $.new_nodes.len() %> node(s) successfully moved to the "manageable" state. > > set_status_failed_nodes_managed: > on-success: send_message > publish: > status: FAILED > message: <% task(set_nodes_managed).result %> > > set_nodes_available: > on-success: send_message > on-error: set_status_failed_nodes_available > workflow: tripleo.baremetal.v1.provide node_uuids=<% $.new_nodes.uuid %> queue_name=<% $.queue_name %> > publish: > status: SUCCESS > message: <% $.new_nodes.len() %> node(s) successfully moved to the "available" state. > > set_status_failed_nodes_available: > on-success: send_message > publish: > status: FAILED > message: <% task(set_nodes_available).result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.baremetal.v1.register_or_update > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > registered_nodes: <% $.registered_nodes or [] %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > provide: > description: Take a list of nodes and move them to "available" > > input: > - node_uuids > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > set_nodes_available: > on-success: cell_v2_discover_hosts > on-error: set_status_failed_nodes_available > with-items: uuid in <% $.node_uuids %> > workflow: tripleo.baremetal.v1.set_node_state > input: > node_uuid: <% $.uuid %> > queue_name: <% $.queue_name %> > state_action: 'provide' > target_state: 'available' > > set_status_failed_nodes_available: > on-success: send_message > publish: > status: FAILED > message: <% task(set_nodes_available).result %> > > cell_v2_discover_hosts: > on-success: try_power_off > on-error: cell_v2_discover_hosts_failed > workflow: tripleo.baremetal.v1.cellv2_discovery > 
input: > node_uuids: <% $.node_uuids %> > queue_name: <% $.queue_name %> > timeout: 900 #15 minutes > retry: > delay: 30 > count: 30 > > cell_v2_discover_hosts_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(cell_v2_discover_hosts).result %> > > try_power_off: > on-success: send_message > on-error: power_off_failed > with-items: uuid in <% $.node_uuids %> > workflow: tripleo.baremetal.v1.set_power_state > input: > node_uuid: <% $.uuid %> > queue_name: <% $.queue_name %> > state_action: 'off' > target_state: 'power off' > publish: > status: SUCCESS > message: <% $.node_uuids.len() %> node(s) successfully moved to the "available" state. > > power_off_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(try_power_off).result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.baremetal.v1.provide > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > provide_manageable_nodes: > description: Provide all nodes in a 'manageable' state. 
> > input: > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > get_manageable_nodes: > action: ironic.node_list maintenance=False associated=False > on-success: provide_manageable > on-error: set_status_failed_get_manageable_nodes > publish: > managed_nodes: <% task().result.where($.provision_state = 'manageable').uuid %> > > set_status_failed_get_manageable_nodes: > on-success: send_message > publish: > status: FAILED > message: <% task(get_manageable_nodes).result %> > > provide_manageable: > on-success: send_message > workflow: tripleo.baremetal.v1.provide > input: > node_uuids: <% $.managed_nodes %> > queue_name: <% $.queue_name %> > publish: > status: SUCCESS > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.baremetal.v1.provide_manageable_nodes > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > manage: > description: Set a list of nodes to 'manageable' state > > input: > - node_uuids > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > set_nodes_manageable: > on-success: send_message > on-error: set_status_failed_nodes_manageable > with-items: uuid in <% $.node_uuids %> > workflow: tripleo.baremetal.v1.set_node_state > input: > node_uuid: <% $.uuid %> > state_action: 'manage' > target_state: 'manageable' > error_states: > # node going back to enroll designates power credentials failure > - 'enroll' > - 'error' > > set_status_failed_nodes_manageable: > on-success: send_message > publish: > status: FAILED > message: <% task(set_nodes_manageable).result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.baremetal.v1.manage > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% 
$.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > _introspect: > description: > > An internal workflow. The tripleo.baremetal.v1.introspect workflow > should be used for introspection. > > input: > - node_uuid > - timeout > - queue_name > > output: > result: <% task(start_introspection).result %> > > tags: > - tripleo-common-managed > > tasks: > start_introspection: > action: baremetal_introspection.introspect uuid=<% $.node_uuid %> > on-success: wait_for_introspection_to_finish > on-error: set_status_failed_start_introspection > > set_status_failed_start_introspection: > publish: > status: FAILED > message: <% task(start_introspection).result %> > introspected_nodes: [] > on-success: send_message > > wait_for_introspection_to_finish: > action: baremetal_introspection.wait_for_finish > input: > uuids: <% [$.node_uuid] %> > # The interval is 10 seconds, so divide to make the overall timeout > # in seconds correct. > max_retries: <% $.timeout / 10 %> > retry_interval: 10 > publish: > introspected_node: <% task().result.values().first() %> > status: <% bool(task().result.values().first().error) and "FAILED" or "SUCCESS" %> > publish-on-error: > status: FAILED > message: <% task().result %> > on-success: wait_for_introspection_to_finish_success > on-error: wait_for_introspection_to_finish_error > > wait_for_introspection_to_finish_success: > publish: > message: <% "Introspection of node {0} completed. Status:{1}. 
Errors:{2}".format($.introspected_node.uuid, $.status, $.introspected_node.error) %> > on-success: send_message > > wait_for_introspection_to_finish_error: > publish: > message: <% "Introspection of node {0} timed out.".format($.node_uuid) %> > on-success: send_message > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.baremetal.v1._introspect > payload: > status: <% $.status %> > message: <% $.message %> > introspected_node: <% $.get('introspected_node') %> > node_uuid: <% $.node_uuid %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > introspect: > description: > > Take a list of nodes and move them through introspection. > > By default each node will attempt introspection up to 3 times (two > retries plus the initial attempt) if it fails. This behaviour can be > modified by changing the max_retry_attempts input. > > The workflow will assume the node has timed out after 20 minutes (1200 > seconds). This can be changed by passing the node_timeout input in > seconds. 
>
>    input:
>      - node_uuids
>      - run_validations: False
>      - queue_name: tripleo
>      - concurrency: 20
>      - max_retry_attempts: 2
>      - node_timeout: 1200
>
>    tags:
>      - tripleo-common-managed
>
>    task-defaults:
>      on-error: unhandled_error
>
>    tasks:
>      initialize:
>        publish:
>          introspection_attempt: 1
>        on-complete:
>          - run_validations: <% $.run_validations %>
>          - introspect_nodes: <% not $.run_validations %>
>
>      run_validations:
>        workflow: tripleo.validations.v1.run_groups
>        input:
>          group_names:
>            - 'pre-introspection'
>          queue_name: <% $.queue_name %>
>        on-success: introspect_nodes
>        on-error: set_validations_failed
>
>      set_validations_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(run_validations).result %>
>
>      introspect_nodes:
>        with-items: uuid in <% $.node_uuids %>
>        concurrency: <% $.concurrency %>
>        workflow: _introspect
>        input:
>          node_uuid: <% $.uuid %>
>          queue_name: <% $.queue_name %>
>          timeout: <% $.node_timeout %>
>        # on-error is triggered if one or more nodes failed introspection. We
>        # still go to get_introspection_status as it will collect the result
>        # for each node. Unless we hit the retry limit.
>        on-error:
>          - get_introspection_status: <% $.introspection_attempt <= $.max_retry_attempts %>
>          - max_retry_attempts_reached: <% $.introspection_attempt > $.max_retry_attempts %>
>        on-success: get_introspection_status
>
>      get_introspection_status:
>        with-items: uuid in <% $.node_uuids %>
>        action: baremetal_introspection.get_status
>        input:
>          uuid: <% $.uuid %>
>        publish:
>          introspected_nodes: <% task().result.toDict($.uuid, $) %>
>          # Currently there is no way for us to ignore user introspection
>          # aborts. This means we will retry aborted nodes until the Ironic API
>          # gives us more details (error code or a boolean to show aborts etc.)
>          # If a node hasn't finished, we consider it to be failed.
>          # TODO(d0ugal): When possible, don't retry introspection of nodes
>          # that a user manually aborted.
>          failed_introspection: <% task().result.where($.finished = true and $.error != null).select($.uuid) + task().result.where($.finished = false).select($.uuid) %>
>        publish-on-error:
>          # If a node fails to start introspection, getting the status can fail.
>          # When that happens, the result is a string and the nodes need to be
>          # filtered out.
>          introspected_nodes: <% task().result.where(isDict($)).toDict($.uuid, $) %>
>          # If there was an error, the exception string we get doesn't give us
>          # the UUID. So we use a set difference to find the UUIDs missing in
>          # the results. These are then added to the failed nodes.
>          failed_introspection: <% ($.node_uuids.toSet() - task().result.where(isDict($)).select($.uuid).toSet()) + task().result.where(isDict($)).where($.finished = true and $.error != null).toSet() + task().result.where(isDict($)).where($.finished = false).toSet() %>
>        on-error: increase_attempt_counter
>        on-success:
>          - successful_introspection: <% $.failed_introspection.len() = 0 %>
>          - increase_attempt_counter: <% $.failed_introspection.len() > 0 %>
>
>      increase_attempt_counter:
>        publish:
>          introspection_attempt: <% $.introspection_attempt + 1 %>
>        on-complete:
>          retry_failed_nodes
>
>      retry_failed_nodes:
>        publish:
>          status: RUNNING
>          message: <% 'Retrying {0} nodes that failed introspection. Attempt {1} of {2} '.format($.failed_introspection.len(), $.introspection_attempt, $.max_retry_attempts + 1) %>
>          # We are about to retry, update the tracking stats.
>          node_uuids: <% $.failed_introspection %>
>        on-success:
>          - send_message
>          - introspect_nodes
>
>      max_retry_attempts_reached:
>        publish:
>          status: FAILED
>          message: <% 'Retry limit reached with {0} nodes still failing introspection'.format($.failed_introspection.len()) %>
>        on-complete: send_message
>
>      successful_introspection:
>        publish:
>          status: SUCCESS
>          message: Successfully introspected <% $.introspected_nodes.len() %> node(s).
>        on-complete: send_message
>
>      unhandled_error:
>        publish:
>          status: FAILED
>          message: "Unhandled workflow error"
>        on-complete: send_message
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.baremetal.v1.introspect
>              payload:
>                status: <% $.get('status', 'SUCCESS') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>                introspected_nodes: <% $.get('introspected_nodes', []) %>
>                failed_introspection: <% $.get('failed_introspection', []) %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>  introspect_manageable_nodes:
>    description: Introspect all nodes in a 'manageable' state.
>
>    input:
>      - run_validations: False
>      - queue_name: tripleo
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      get_manageable_nodes:
>        action: ironic.node_list maintenance=False associated=False
>        on-success: validate_nodes
>        on-error: set_status_failed_get_manageable_nodes
>        publish:
>          managed_nodes: <% task().result.where($.provision_state = 'manageable').uuid %>
>
>      set_status_failed_get_manageable_nodes:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(get_manageable_nodes).result %>
>
>      validate_nodes:
>        on-success:
>          - introspect_manageable: <% $.managed_nodes.len() > 0 %>
>          - set_status_failed_no_nodes: <% $.managed_nodes.len() = 0 %>
>
>      set_status_failed_no_nodes:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: No manageable nodes to introspect. Check node states and maintenance.
>
>      introspect_manageable:
>        on-success: send_message
>        on-error: set_status_introspect_manageable
>        workflow: tripleo.baremetal.v1.introspect
>        input:
>          node_uuids: <% $.managed_nodes %>
>          run_validations: <% $.run_validations %>
>          queue_name: <% $.queue_name %>
>        publish:
>          introspected_nodes: <% task().result.introspected_nodes %>
>
>      set_status_introspect_manageable:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(introspect_manageable).result %>
>          introspected_nodes: []
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.baremetal.v1.introspect_manageable_nodes
>              payload:
>                status: <% $.get('status', 'SUCCESS') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>                introspected_nodes: <% $.get('introspected_nodes', []) %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>  configure:
>    description: Take a list of manageable nodes and update their boot configuration.
>
>    input:
>      - node_uuids
>      - queue_name: tripleo
>      - kernel_name: bm-deploy-kernel
>      - ramdisk_name: bm-deploy-ramdisk
>      - instance_boot_option: null
>      - root_device: null
>      - root_device_minimum_size: 4
>      - overwrite_root_device_hints: False
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      configure_boot:
>        on-success: configure_root_device
>        on-error: set_status_failed_configure_boot
>        with-items: node_uuid in <% $.node_uuids %>
>        action: tripleo.baremetal.configure_boot node_uuid=<% $.node_uuid %> kernel_name=<% $.kernel_name %> ramdisk_name=<% $.ramdisk_name %> instance_boot_option=<% $.instance_boot_option %>
>
>      configure_root_device:
>        on-success: send_message
>        on-error: set_status_failed_configure_root_device
>        with-items: node_uuid in <% $.node_uuids %>
>        action: tripleo.baremetal.configure_root_device node_uuid=<% $.node_uuid %> root_device=<% $.root_device %> minimum_size=<% $.root_device_minimum_size %> overwrite=<% $.overwrite_root_device_hints %>
>        publish:
>          status: SUCCESS
>          message: 'Successfully configured the nodes.'
>
>      set_status_failed_configure_boot:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(configure_boot).result %>
>
>      set_status_failed_configure_root_device:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(configure_root_device).result %>
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.baremetal.v1.configure
>              payload:
>                status: <% $.get('status', 'SUCCESS') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>  configure_manageable_nodes:
>    description: Update the boot configuration of all nodes in 'manageable' state.
>
>    input:
>      - queue_name: tripleo
>      - kernel_name: 'bm-deploy-kernel'
>      - ramdisk_name: 'bm-deploy-ramdisk'
>      - instance_boot_option: null
>      - root_device: null
>      - root_device_minimum_size: 4
>      - overwrite_root_device_hints: False
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      get_manageable_nodes:
>        action: ironic.node_list maintenance=False associated=False
>        on-success: configure_manageable
>        on-error: set_status_failed_get_manageable_nodes
>        publish:
>          managed_nodes: <% task().result.where($.provision_state = 'manageable').uuid %>
>
>      configure_manageable:
>        on-success: send_message
>        on-error: set_status_failed_configure_manageable
>        workflow: tripleo.baremetal.v1.configure
>        input:
>          node_uuids: <% $.managed_nodes %>
>          queue_name: <% $.queue_name %>
>          kernel_name: <% $.kernel_name %>
>          ramdisk_name: <% $.ramdisk_name %>
>          instance_boot_option: <% $.instance_boot_option %>
>          root_device: <% $.root_device %>
>          root_device_minimum_size: <% $.root_device_minimum_size %>
>          overwrite_root_device_hints: <% $.overwrite_root_device_hints %>
>        publish:
>          message: 'Manageable nodes configured successfully.'
>
>      set_status_failed_configure_manageable:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(configure_manageable).result %>
>
>      set_status_failed_get_manageable_nodes:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(get_manageable_nodes).result %>
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.baremetal.v1.configure_manageable_nodes
>              payload:
>                status: <% $.get('status', 'SUCCESS') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>  tag_node:
>    description: Tag a node with a role
>    input:
>      - node_uuid
>      - role: null
>      - queue_name: tripleo
>
>    task-defaults:
>      on-error: send_message
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      update_node:
>        on-success: send_message
>        action: tripleo.baremetal.update_node_capability node_uuid=<% $.node_uuid %> capability='profile' value=<% $.role %>
>        publish:
>          message: <% task().result %>
>          status: SUCCESS
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.baremetal.v1.tag_node
>              payload:
>                status: <% $.get('status', 'FAILED') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>  tag_nodes:
>    description: Runs the tag_node workflow in a loop
>    input:
>      - tag_node_uuids
>      - untag_node_uuids
>      - role
>      - plan: overcloud
>      - queue_name: tripleo
>
>    task-defaults:
>      on-error: send_message
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      tag_nodes:
>        with-items: node_uuid in <% $.tag_node_uuids %>
>        workflow: tripleo.baremetal.v1.tag_node
>        input:
>          node_uuid: <% $.node_uuid %>
>          queue_name: <% $.queue_name %>
>          role: <% $.role %>
>        concurrency: 1
>        on-success: untag_nodes
>
>      untag_nodes:
>        with-items: node_uuid in <% $.untag_node_uuids %>
>        workflow: tripleo.baremetal.v1.tag_node
>        input:
>          node_uuid: <% $.node_uuid %>
>          queue_name: <% $.queue_name %>
>        concurrency: 1
>        on-success: update_role_parameters
>
>      update_role_parameters:
>        on-success: send_message
>        action: tripleo.parameters.update_role role=<% $.role %> container=<% $.plan %>
>        publish:
>          message: <% task().result %>
>          status: SUCCESS
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.baremetal.v1.tag_nodes
>              payload:
>                status: <% $.get('status', 'FAILED') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>  nodes_with_profile:
>    description: Find nodes with a specific profile
>    input:
>      - profile
>      - queue_name: tripleo
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>      get_active_nodes:
>        action: ironic.node_list maintenance=false provision_state='active' detail=true
>        on-success: get_available_nodes
>        on-error: set_status_failed_get_active_nodes
>
>      get_available_nodes:
>        action: ironic.node_list maintenance=false provision_state='available' detail=true
>        on-success: get_matching_nodes
>        on-error: set_status_failed_get_available_nodes
>
>      get_matching_nodes:
>        with-items: node in <% task(get_available_nodes).result + task(get_active_nodes).result %>
>        action: tripleo.baremetal.get_profile node=<% $.node %>
>        on-success: send_message
>        on-error: set_status_failed_get_matching_nodes
>        publish:
>          matching_nodes: <% let(input_profile_name => $.profile) -> task().result.where($.profile = $input_profile_name).uuid %>
>
>      set_status_failed_get_active_nodes:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(get_active_nodes).result %>
>
>      set_status_failed_get_available_nodes:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(get_available_nodes).result %>
>
>      set_status_failed_get_matching_nodes:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(get_matching_nodes).result %>
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.baremetal.v1.nodes_with_profile
>              payload:
>                status: <% $.get('status', 'SUCCESS') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>                matching_nodes: <% $.matching_nodes or [] %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>  create_raid_configuration:
>    description: Create and apply RAID configuration for given nodes
>    input:
>      - node_uuids
>      - configuration
>      - queue_name: tripleo
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      set_configuration:
>        with-items: node_uuid in <% $.node_uuids %>
>        action: ironic.node_set_target_raid_config node_ident=<% $.node_uuid %> target_raid_config=<% $.configuration %>
>        on-success: apply_configuration
>        on-error: set_configuration_failed
>
>      set_configuration_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(set_configuration).result %>
>
>      apply_configuration:
>        with-items: node_uuid in <% $.node_uuids %>
>        workflow: tripleo.baremetal.v1.manual_cleaning
>        input:
>          node_uuid: <% $.node_uuid %>
>          clean_steps:
>            - interface: raid
>              step: delete_configuration
>            - interface: raid
>              step: create_configuration
>          timeout: 1800 # building RAID should be faster than general cleaning
>          retry_count: 180
>          retry_delay: 10
>        on-success: send_message
>        on-error: apply_configuration_failed
>        publish:
>          message: <% task().result %>
>          status: SUCCESS
>
>      apply_configuration_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(apply_configuration).result %>
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.baremetal.v1.create_raid_configuration
>              payload:
>                status: <% $.get('status', 'FAILED') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>
>  cellv2_discovery:
>    description: Run cell_v2 host discovery
>
>    input:
>      - node_uuids
>      - queue_name: tripleo
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      cell_v2_discover_hosts:
>        on-success: wait_for_nova_resources
>        on-error: cell_v2_discover_hosts_failed
>        action: tripleo.baremetal.cell_v2_discover_hosts
>
>      cell_v2_discover_hosts_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(cell_v2_discover_hosts).result %>
>
>      wait_for_nova_resources:
>        on-success: send_message
>        on-error: wait_for_nova_resources_failed
>        with-items: node_uuid in <% $.node_uuids %>
>        action: nova.hypervisors_find hypervisor_hostname=<% $.node_uuid %>
>
>      wait_for_nova_resources_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(wait_for_nova_resources).result %>
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.baremetal.v1.cellv2_discovery
>              payload:
>                status: <% $.get('status', 'SUCCESS') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>
>  discover_nodes:
>    description: Run nodes discovery over the given IP range
>
>    input:
>      - ip_addresses
>      - credentials
>      - ports: [623]
>      - queue_name: tripleo
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      get_all_nodes:
>        action: ironic.node_list
>        input:
>          fields: ["uuid", "driver", "driver_info"]
>          limit: 0
>        on-success: get_candidate_nodes
>        on-error: get_all_nodes_failed
>        publish:
>          existing_nodes: <% task().result %>
>
>      get_all_nodes_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(get_all_nodes).result %>
>
>      get_candidate_nodes:
>        action: tripleo.baremetal.get_candidate_nodes
>        input:
>          ip_addresses: <% $.ip_addresses %>
>          credentials: <% $.credentials %>
>          ports: <% $.ports %>
>          existing_nodes: <% $.existing_nodes %>
>        on-success: probe_nodes
>        on-error: get_candidate_nodes_failed
>        publish:
>          candidates: <% task().result %>
>
>      get_candidate_nodes_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(get_candidate_nodes).result %>
>
>      probe_nodes:
>        action: tripleo.baremetal.probe_node
>        on-success: send_message
>        on-error: probe_nodes_failed
>        input:
>          ip: <% $.node.ip %>
>          port: <% $.node.port %>
>          username: <% $.node.username %>
>          password: <% $.node.password %>
>        with-items:
>          - node in <% $.candidates %>
>        publish:
>          nodes_json: <% task().result.where($ != null) %>
>
>      probe_nodes_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(probe_nodes).result %>
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.baremetal.v1.discover_nodes
>              payload:
>                status: <% $.get('status', 'SUCCESS') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>                nodes_json: <% $.get('nodes_json', []) %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>
>  discover_and_enroll_nodes:
>    description: Run nodes discovery over the given IP range and enroll nodes
>
>    input:
>      - ip_addresses
>      - credentials
>      - ports: [623]
>      - kernel_name: null
>      - ramdisk_name: null
>      - instance_boot_option: local
>      - initial_state: manageable
>      - queue_name: tripleo
>
>    tags:
>      - tripleo-common-managed
>
>    tasks:
>
>      discover_nodes:
>        workflow: tripleo.baremetal.v1.discover_nodes
>        input:
>          ip_addresses: <% $.ip_addresses %>
>          ports: <% $.ports %>
>          credentials: <% $.credentials %>
>          queue_name: <% $.queue_name %>
>        on-success: enroll_nodes
>        on-error: discover_nodes_failed
>        publish:
>          nodes_json: <% task().result.nodes_json %>
>
>      discover_nodes_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(discover_nodes).result %>
>
>      enroll_nodes:
>        workflow: tripleo.baremetal.v1.register_or_update
>        input:
>          nodes_json: <% $.nodes_json %>
>          kernel_name: <% $.kernel_name %>
>          ramdisk_name: <% $.ramdisk_name %>
>          instance_boot_option: <% $.instance_boot_option %>
>          initial_state: <% $.initial_state %>
>        on-success: send_message
>        on-error: enroll_nodes_failed
>        publish:
>          registered_nodes: <% task().result.registered_nodes %>
>
>      enroll_nodes_failed:
>        on-success: send_message
>        publish:
>          status: FAILED
>          message: <% task(enroll_nodes).result %>
>
>      send_message:
>        action: zaqar.queue_post
>        retry: count=5 delay=1
>        input:
>          queue_name: <% $.queue_name %>
>          messages:
>            body:
>              type: tripleo.baremetal.v1.discover_and_enroll_nodes
>              payload:
>                status: <% $.get('status', 'SUCCESS') %>
>                message: <% $.get('message', '') %>
>                execution: <% execution() %>
>                registered_nodes: <% $.get('registered_nodes', []) %>
>        on-success:
>          - fail: <% $.get('status') = "FAILED" %>
>'
>2018-06-26 11:15:21,184 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 43222
>2018-06-26 11:15:21,224 DEBUG: RESP: [201] Content-Length: 43222 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:21 GMT Connection: keep-alive
>RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.baremetal.v1\ndescription: TripleO Baremetal Workflows\n\nworkflows:\n\n set_node_state:\n input:\n - node_uuid\n - state_action\n - target_state\n - error_states:\n # The default includes all failure states, even unused by TripleO.\n - 'error'\n - 'adopt failed'\n - 'clean failed'\n - 'deploy failed'\n - 'inspect failed'\n - 'rescue failed'\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_provision_state:\n on-success: wait_for_provision_state\n on-error: set_provision_state_failed\n action: ironic.node_set_provision_state node_uuid=<% $.node_uuid %> state=<% $.state_action %>\n\n set_provision_state_failed:\n publish:\n message: <% task(set_provision_state).result %>\n on-complete:
fail\n\n wait_for_provision_state:\n action: ironic.node_get\n input:\n node_id: <% $.node_uuid %>\n fields: ['provision_state', 'last_error']\n timeout: 1200 #20 minutes\n retry:\n delay: 3\n count: 400\n continue-on: <% not task().result.provision_state in [$.target_state] + $.error_states %>\n on-complete:\n - state_not_reached: <% task().result.provision_state != $.target_state %>\n\n state_not_reached:\n publish:\n message: >-\n Node <% $.node_uuid %> did not reach state \"<% $.target_state %>\",\n the state is \"<% task(wait_for_provision_state).result.provision_state %>\",\n error: <% task(wait_for_provision_state).result.last_error %>\n on-complete: fail\n\n output-on-error:\n result: <% $.message %>\n\n set_power_state:\n input:\n - node_uuid\n - state_action\n - target_state\n - error_state: 'error'\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_power_state:\n on-success: wait_for_power_state\n on-error: set_power_state_failed\n action: ironic.node_set_power_state node_id=<% $.node_uuid %> state=<% $.state_action %>\n\n set_power_state_failed:\n publish:\n message: <% task(set_power_state).result %>\n on-complete: fail\n\n wait_for_power_state:\n action: ironic.node_get\n input:\n node_id: <% $.node_uuid %>\n fields: ['power_state', 'last_error']\n timeout: 120 #2 minutes\n retry:\n delay: 6\n count: 20\n continue-on: <% not task().result.power_state in [$.target_state, $.error_state] %>\n on-complete:\n - state_not_reached: <% task().result.power_state != $.target_state %>\n\n state_not_reached:\n publish:\n message: >-\n Node <% $.node_uuid %> did not reach power state \"<% $.target_state %>\",\n the state is \"<% task(wait_for_power_state).result.power_state %>\",\n error: <% task(wait_for_power_state).result.last_error %>\n on-complete: fail\n\n output-on-error:\n result: <% $.message %>\n\n manual_cleaning:\n input:\n - node_uuid\n - clean_steps\n - timeout: 7200 # 2 hours (cleaning can take really long)\n - retry_delay: 10\n - retry_count: 
720\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_provision_state:\n on-success: wait_for_provision_state\n on-error: set_provision_state_failed\n action: ironic.node_set_provision_state node_uuid=<% $.node_uuid %> state='clean' cleansteps=<% $.clean_steps %>\n\n set_provision_state_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_provision_state).result %>\n\n wait_for_provision_state:\n on-success: send_message\n action: ironic.node_get node_id=<% $.node_uuid %>\n timeout: <% $.timeout %>\n retry:\n delay: <% $.retry_delay %>\n count: <% $.retry_count %>\n continue-on: <% task().result.provision_state != 'manageable' %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.manual_cleaning\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n validate_nodes:\n description: Validate nodes JSON\n\n input:\n - nodes_json\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n validate_nodes:\n action: tripleo.baremetal.validate_nodes\n on-success: send_message\n on-error: validation_failed\n input:\n nodes_json: <% $.nodes_json %>\n\n validation_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(validate_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.validate_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n register_or_update:\n description: Take nodes JSON and create nodes in a \"manageable\" state\n\n input:\n - nodes_json\n - remove: False\n 
- queue_name: tripleo\n - kernel_name: null\n - ramdisk_name: null\n - instance_boot_option: local\n - initial_state: manageable\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n validate_input:\n workflow: tripleo.baremetal.v1.validate_nodes\n on-success: register_or_update_nodes\n on-error: validation_failed\n input:\n nodes_json: <% $.nodes_json %>\n queue_name: <% $.queue_name %>\n\n validation_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(validate_input).result %>\n registered_nodes: []\n\n register_or_update_nodes:\n action: tripleo.baremetal.register_or_update_nodes\n on-success:\n - set_nodes_managed: <% $.initial_state != \"enroll\" %>\n - send_message: <% $.initial_state = \"enroll\" %>\n on-error: set_status_failed_register_or_update_nodes\n input:\n nodes_json: <% $.nodes_json %>\n remove: <% $.remove %>\n kernel_name: <% $.kernel_name %>\n ramdisk_name: <% $.ramdisk_name %>\n instance_boot_option: <% $.instance_boot_option %>\n publish:\n registered_nodes: <% task().result %>\n new_nodes: <% task().result.where($.provision_state = 'enroll') %>\n\n set_status_failed_register_or_update_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(register_or_update_nodes).result %>\n registered_nodes: []\n\n set_nodes_managed:\n on-success:\n - set_nodes_available: <% $.initial_state = \"available\" %>\n - send_message: <% $.initial_state != \"available\" %>\n on-error: set_status_failed_nodes_managed\n workflow: tripleo.baremetal.v1.manage\n input:\n node_uuids: <% $.new_nodes.uuid %>\n queue_name: <% $.queue_name %>\n publish:\n status: SUCCESS\n message: <% $.new_nodes.len() %> node(s) successfully moved to the \"manageable\" state.\n\n set_status_failed_nodes_managed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_nodes_managed).result %>\n\n set_nodes_available:\n on-success: send_message\n on-error: set_status_failed_nodes_available\n workflow: 
tripleo.baremetal.v1.provide node_uuids=<% $.new_nodes.uuid %> queue_name=<% $.queue_name %>\n publish:\n status: SUCCESS\n message: <% $.new_nodes.len() %> node(s) successfully moved to the \"available\" state.\n\n set_status_failed_nodes_available:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_nodes_available).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.register_or_update\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n registered_nodes: <% $.registered_nodes or [] %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n provide:\n description: Take a list of nodes and move them to \"available\"\n\n input:\n - node_uuids\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_nodes_available:\n on-success: cell_v2_discover_hosts\n on-error: set_status_failed_nodes_available\n with-items: uuid in <% $.node_uuids %>\n workflow: tripleo.baremetal.v1.set_node_state\n input:\n node_uuid: <% $.uuid %>\n queue_name: <% $.queue_name %>\n state_action: 'provide'\n target_state: 'available'\n\n set_status_failed_nodes_available:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_nodes_available).result %>\n\n cell_v2_discover_hosts:\n on-success: try_power_off\n on-error: cell_v2_discover_hosts_failed\n workflow: tripleo.baremetal.v1.cellv2_discovery\n input:\n node_uuids: <% $.node_uuids %>\n queue_name: <% $.queue_name %>\n timeout: 900 #15 minutes\n retry:\n delay: 30\n count: 30\n\n cell_v2_discover_hosts_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(cell_v2_discover_hosts).result %>\n\n try_power_off:\n on-success: send_message\n on-error: power_off_failed\n with-items: uuid in <% $.node_uuids %>\n workflow: 
tripleo.baremetal.v1.set_power_state\n input:\n node_uuid: <% $.uuid %>\n queue_name: <% $.queue_name %>\n state_action: 'off'\n target_state: 'power off'\n publish:\n status: SUCCESS\n message: <% $.node_uuids.len() %> node(s) successfully moved to the \"available\" state.\n\n power_off_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(try_power_off).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.provide\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n provide_manageable_nodes:\n description: Provide all nodes in a 'manageable' state.\n\n input:\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n get_manageable_nodes:\n action: ironic.node_list maintenance=False associated=False\n on-success: provide_manageable\n on-error: set_status_failed_get_manageable_nodes\n publish:\n managed_nodes: <% task().result.where($.provision_state = 'manageable').uuid %>\n\n set_status_failed_get_manageable_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_manageable_nodes).result %>\n\n provide_manageable:\n on-success: send_message\n workflow: tripleo.baremetal.v1.provide\n input:\n node_uuids: <% $.managed_nodes %>\n queue_name: <% $.queue_name %>\n publish:\n status: SUCCESS\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.provide_manageable_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n manage:\n description: Set a list of nodes to 'manageable' state\n\n input:\n - node_uuids\n - 
queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_nodes_manageable:\n on-success: send_message\n on-error: set_status_failed_nodes_manageable\n with-items: uuid in <% $.node_uuids %>\n workflow: tripleo.baremetal.v1.set_node_state\n input:\n node_uuid: <% $.uuid %>\n state_action: 'manage'\n target_state: 'manageable'\n error_states:\n # node going back to enroll designates power credentials failure\n - 'enroll'\n - 'error'\n\n set_status_failed_nodes_manageable:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_nodes_manageable).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.manage\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n _introspect:\n description: >\n An internal workflow. 
The tripleo.baremetal.v1.introspect workflow\n should be used for introspection.\n\n input:\n - node_uuid\n - timeout\n - queue_name\n\n output:\n result: <% task(start_introspection).result %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n start_introspection:\n action: baremetal_introspection.introspect uuid=<% $.node_uuid %>\n on-success: wait_for_introspection_to_finish\n on-error: set_status_failed_start_introspection\n\n set_status_failed_start_introspection:\n publish:\n status: FAILED\n message: <% task(start_introspection).result %>\n introspected_nodes: []\n on-success: send_message\n\n wait_for_introspection_to_finish:\n action: baremetal_introspection.wait_for_finish\n input:\n uuids: <% [$.node_uuid] %>\n # The interval is 10 seconds, so divide to make the overall timeout\n # in seconds correct.\n max_retries: <% $.timeout / 10 %>\n retry_interval: 10\n publish:\n introspected_node: <% task().result.values().first() %>\n status: <% bool(task().result.values().first().error) and \"FAILED\" or \"SUCCESS\" %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-success: wait_for_introspection_to_finish_success\n on-error: wait_for_introspection_to_finish_error\n\n wait_for_introspection_to_finish_success:\n publish:\n message: <% \"Introspection of node {0} completed. Status:{1}. 
Errors:{2}\".format($.introspected_node.uuid, $.status, $.introspected_node.error) %>\n on-success: send_message\n\n wait_for_introspection_to_finish_error:\n publish:\n message: <% \"Introspection of node {0} timed out.\".format($.node_uuid) %>\n on-success: send_message\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1._introspect\n payload:\n status: <% $.status %>\n message: <% $.message %>\n introspected_node: <% $.get('introspected_node') %>\n node_uuid: <% $.node_uuid %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n introspect:\n description: >\n Take a list of nodes and move them through introspection.\n\n By default each node will attempt introspection up to 3 times (two\n retries plus the initial attemp) if it fails. This behaviour can be\n modified by changing the max_retry_attempts input.\n\n The workflow will assume the node has timed out after 20 minutes (1200\n seconds). 
This can be changed by passing the node_timeout input in\n seconds.\n\n input:\n - node_uuids\n - run_validations: False\n - queue_name: tripleo\n - concurrency: 20\n - max_retry_attempts: 2\n - node_timeout: 1200\n\n tags:\n - tripleo-common-managed\n\n task-defaults:\n on-error: unhandled_error\n\n tasks:\n initialize:\n publish:\n introspection_attempt: 1\n on-complete:\n - run_validations: <% $.run_validations %>\n - introspect_nodes: <% not $.run_validations %>\n\n run_validations:\n workflow: tripleo.validations.v1.run_groups\n input:\n group_names:\n - 'pre-introspection'\n queue_name: <% $.queue_name %>\n on-success: introspect_nodes\n on-error: set_validations_failed\n\n set_validations_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(run_validations).result %>\n\n introspect_nodes:\n with-items: uuid in <% $.node_uuids %>\n concurrency: <% $.concurrency %>\n workflow: _introspect\n input:\n node_uuid: <% $.uuid %>\n queue_name: <% $.queue_name %>\n timeout: <% $.node_timeout %>\n # on-error is triggered if one or more nodes failed introspection. We\n # still go to get_introspection_status as it will collect the result\n # for each node. Unless we hit the retry limit.\n on-error:\n - get_introspection_status: <% $.introspection_attempt <= $.max_retry_attempts %>\n - max_retry_attempts_reached: <% $.introspection_attempt > $.max_retry_attempts %>\n on-success: get_introspection_status\n\n get_introspection_status:\n with-items: uuid in <% $.node_uuids %>\n action: baremetal_introspection.get_status\n input:\n uuid: <% $.uuid %>\n publish:\n introspected_nodes: <% task().result.toDict($.uuid, $) %>\n # Currently there is no way for us to ignore user introspection\n # aborts. 
This means we will retry aborted nodes until the Ironic API\n # gives us more details (error code or a boolean to show aborts etc.)\n # If a node hasn't finished, we consider it to be failed.\n # TODO(d0ugal): When possible, don't retry introspection of nodes\n # that a user manually aborted.\n failed_introspection: <% task().result.where($.finished = true and $.error != null).select($.uuid) + task().result.where($.finished = false).select($.uuid) %>\n publish-on-error:\n # If a node fails to start introspection, getting the status can fail.\n # When that happens, the result is a string and the nodes need to be\n # filtered out.\n introspected_nodes: <% task().result.where(isDict($)).toDict($.uuid, $) %>\n # If there was an error, the exception string we get doesn't give us\n # the UUID. So we use a set difference to find the UUIDs missing in\n # the results. These are then added to the failed nodes.\n failed_introspection: <% ($.node_uuids.toSet() - task().result.where(isDict($)).select($.uuid).toSet()) + task().result.where(isDict($)).where($.finished = true and $.error != null).toSet() + task().result.where(isDict($)).where($.finished = false).toSet() %>\n on-error: increase_attempt_counter\n on-success:\n - successful_introspection: <% $.failed_introspection.len() = 0 %>\n - increase_attempt_counter: <% $.failed_introspection.len() > 0 %>\n\n increase_attempt_counter:\n publish:\n introspection_attempt: <% $.introspection_attempt + 1 %>\n on-complete:\n retry_failed_nodes\n\n retry_failed_nodes:\n publish:\n status: RUNNING\n message: <% 'Retrying {0} nodes that failed introspection. 
Attempt {1} of {2} '.format($.failed_introspection.len(), $.introspection_attempt, $.max_retry_attempts + 1) %>\n # We are about to retry, update the tracking stats.\n node_uuids: <% $.failed_introspection %>\n on-success:\n - send_message\n - introspect_nodes\n\n max_retry_attempts_reached:\n publish:\n status: FAILED\n message: <% 'Retry limit reached with {0} nodes still failing introspection'.format($.failed_introspection.len()) %>\n on-complete: send_message\n\n successful_introspection:\n publish:\n status: SUCCESS\n message: Successfully introspected <% $.introspected_nodes.len() %> node(s).\n on-complete: send_message\n\n unhandled_error:\n publish:\n status: FAILED\n message: \"Unhandled workflow error\"\n on-complete: send_message\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.introspect\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n introspected_nodes: <% $.get('introspected_nodes', []) %>\n failed_introspection: <% $.get('failed_introspection', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n introspect_manageable_nodes:\n description: Introspect all nodes in a 'manageable' state.\n\n input:\n - run_validations: False\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n get_manageable_nodes:\n action: ironic.node_list maintenance=False associated=False\n on-success: validate_nodes\n on-error: set_status_failed_get_manageable_nodes\n publish:\n managed_nodes: <% task().result.where($.provision_state = 'manageable').uuid %>\n\n set_status_failed_get_manageable_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_manageable_nodes).result %>\n\n validate_nodes:\n on-success:\n - introspect_manageable: <% $.managed_nodes.len() > 0 %>\n - set_status_failed_no_nodes: <% $.managed_nodes.len() = 0 
%>\n\n set_status_failed_no_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: No manageable nodes to introspect. Check node states and maintenance.\n\n introspect_manageable:\n on-success: send_message\n on-error: set_status_introspect_manageable\n workflow: tripleo.baremetal.v1.introspect\n input:\n node_uuids: <% $.managed_nodes %>\n run_validations: <% $.run_validations %>\n queue_name: <% $.queue_name %>\n publish:\n introspected_nodes: <% task().result.introspected_nodes %>\n\n set_status_introspect_manageable:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(introspect_manageable).result %>\n introspected_nodes: []\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.introspect_manageable_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n introspected_nodes: <% $.get('introspected_nodes', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n configure:\n description: Take a list of manageable nodes and update their boot configuration.\n\n input:\n - node_uuids\n - queue_name: tripleo\n - kernel_name: bm-deploy-kernel\n - ramdisk_name: bm-deploy-ramdisk\n - instance_boot_option: null\n - root_device: null\n - root_device_minimum_size: 4\n - overwrite_root_device_hints: False\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n configure_boot:\n on-success: configure_root_device\n on-error: set_status_failed_configure_boot\n with-items: node_uuid in <% $.node_uuids %>\n action: tripleo.baremetal.configure_boot node_uuid=<% $.node_uuid %> kernel_name=<% $.kernel_name %> ramdisk_name=<% $.ramdisk_name %> instance_boot_option=<% $.instance_boot_option %>\n\n configure_root_device:\n on-success: send_message\n on-error: set_status_failed_configure_root_device\n with-items: node_uuid in <% $.node_uuids %>\n action: 
tripleo.baremetal.configure_root_device node_uuid=<% $.node_uuid %> root_device=<% $.root_device %> minimum_size=<% $.root_device_minimum_size %> overwrite=<% $.overwrite_root_device_hints %>\n publish:\n status: SUCCESS\n message: 'Successfully configured the nodes.'\n\n set_status_failed_configure_boot:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(configure_boot).result %>\n\n set_status_failed_configure_root_device:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(configure_root_device).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.configure\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n configure_manageable_nodes:\n description: Update the boot configuration of all nodes in 'manageable' state.\n\n input:\n - queue_name: tripleo\n - kernel_name: 'bm-deploy-kernel'\n - ramdisk_name: 'bm-deploy-ramdisk'\n - instance_boot_option: null\n - root_device: null\n - root_device_minimum_size: 4\n - overwrite_root_device_hints: False\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n get_manageable_nodes:\n action: ironic.node_list maintenance=False associated=False\n on-success: configure_manageable\n on-error: set_status_failed_get_manageable_nodes\n publish:\n managed_nodes: <% task().result.where($.provision_state = 'manageable').uuid %>\n\n configure_manageable:\n on-success: send_message\n on-error: set_status_failed_configure_manageable\n workflow: tripleo.baremetal.v1.configure\n input:\n node_uuids: <% $.managed_nodes %>\n queue_name: <% $.queue_name %>\n kernel_name: <% $.kernel_name %>\n ramdisk_name: <% $.ramdisk_name %>\n instance_boot_option: <% $.instance_boot_option %>\n root_device: <% $.root_device %>\n root_device_minimum_size: <% 
$.root_device_minimum_size %>\n overwrite_root_device_hints: <% $.overwrite_root_device_hints %>\n publish:\n message: 'Manageable nodes configured successfully.'\n\n set_status_failed_configure_manageable:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(configure_manageable).result %>\n\n set_status_failed_get_manageable_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_manageable_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.configure_manageable_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n tag_node:\n description: Tag a node with a role\n input:\n - node_uuid\n - role: null\n - queue_name: tripleo\n\n task-defaults:\n on-error: send_message\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n update_node:\n on-success: send_message\n action: tripleo.baremetal.update_node_capability node_uuid=<% $.node_uuid %> capability='profile' value=<% $.role %>\n publish:\n message: <% task().result %>\n status: SUCCESS\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.tag_node\n payload:\n status: <% $.get('status', 'FAILED') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n tag_nodes:\n description: Runs the tag_node workflow in a loop\n input:\n - tag_node_uuids\n - untag_node_uuids\n - role\n - plan: overcloud\n - queue_name: tripleo\n\n task-defaults:\n on-error: send_message\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n tag_nodes:\n with-items: node_uuid in <% $.tag_node_uuids %>\n workflow: tripleo.baremetal.v1.tag_node\n input:\n 
node_uuid: <% $.node_uuid %>\n queue_name: <% $.queue_name %>\n role: <% $.role %>\n concurrency: 1\n on-success: untag_nodes\n\n untag_nodes:\n with-items: node_uuid in <% $.untag_node_uuids %>\n workflow: tripleo.baremetal.v1.tag_node\n input:\n node_uuid: <% $.node_uuid %>\n queue_name: <% $.queue_name %>\n concurrency: 1\n on-success: update_role_parameters\n\n update_role_parameters:\n on-success: send_message\n action: tripleo.parameters.update_role role=<% $.role %> container=<% $.plan %>\n publish:\n message: <% task().result %>\n status: SUCCESS\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.tag_nodes\n payload:\n status: <% $.get('status', 'FAILED') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n nodes_with_profile:\n description: Find nodes with a specific profile\n input:\n - profile\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_active_nodes:\n action: ironic.node_list maintenance=false provision_state='active' detail=true\n on-success: get_available_nodes\n on-error: set_status_failed_get_active_nodes\n\n get_available_nodes:\n action: ironic.node_list maintenance=false provision_state='available' detail=true\n on-success: get_matching_nodes\n on-error: set_status_failed_get_available_nodes\n\n get_matching_nodes:\n with-items: node in <% task(get_available_nodes).result + task(get_active_nodes).result %>\n action: tripleo.baremetal.get_profile node=<% $.node %>\n on-success: send_message\n on-error: set_status_failed_get_matching_nodes\n publish:\n matching_nodes: <% let(input_profile_name => $.profile) -> task().result.where($.profile = $input_profile_name).uuid %>\n\n set_status_failed_get_active_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_active_nodes).result %>\n\n 
set_status_failed_get_available_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_available_nodes).result %>\n\n set_status_failed_get_matching_nodes:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_matching_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.nodes_with_profile\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n matching_nodes: <% $.matching_nodes or [] %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n create_raid_configuration:\n description: Create and apply RAID configuration for given nodes\n input:\n - node_uuids\n - configuration\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n set_configuration:\n with-items: node_uuid in <% $.node_uuids %>\n action: ironic.node_set_target_raid_config node_ident=<% $.node_uuid %> target_raid_config=<% $.configuration %>\n on-success: apply_configuration\n on-error: set_configuration_failed\n\n set_configuration_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(set_configuration).result %>\n\n apply_configuration:\n with-items: node_uuid in <% $.node_uuids %>\n workflow: tripleo.baremetal.v1.manual_cleaning\n input:\n node_uuid: <% $.node_uuid %>\n clean_steps:\n - interface: raid\n step: delete_configuration\n - interface: raid\n step: create_configuration\n timeout: 1800 # building RAID should be faster than general cleaning\n retry_count: 180\n retry_delay: 10\n on-success: send_message\n on-error: apply_configuration_failed\n publish:\n message: <% task().result %>\n status: SUCCESS\n\n apply_configuration_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(apply_configuration).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: 
count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.create_raid_configuration\n payload:\n status: <% $.get('status', 'FAILED') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n\n cellv2_discovery:\n description: Run cell_v2 host discovery\n\n input:\n - node_uuids\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n cell_v2_discover_hosts:\n on-success: wait_for_nova_resources\n on-error: cell_v2_discover_hosts_failed\n action: tripleo.baremetal.cell_v2_discover_hosts\n\n cell_v2_discover_hosts_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(cell_v2_discover_hosts).result %>\n\n wait_for_nova_resources:\n on-success: send_message\n on-error: wait_for_nova_resources_failed\n with-items: node_uuid in <% $.node_uuids %>\n action: nova.hypervisors_find hypervisor_hostname=<% $.node_uuid %>\n\n wait_for_nova_resources_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(wait_for_nova_resources).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.cellv2_discovery\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n\n discover_nodes:\n description: Run nodes discovery over the given IP range\n\n input:\n - ip_addresses\n - credentials\n - ports: [623]\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n get_all_nodes:\n action: ironic.node_list\n input:\n fields: [\"uuid\", \"driver\", \"driver_info\"]\n limit: 0\n on-success: get_candidate_nodes\n on-error: get_all_nodes_failed\n publish:\n existing_nodes: <% task().result %>\n\n get_all_nodes_failed:\n on-success: 
send_message\n publish:\n status: FAILED\n message: <% task(get_all_nodes).result %>\n\n get_candidate_nodes:\n action: tripleo.baremetal.get_candidate_nodes\n input:\n ip_addresses: <% $.ip_addresses %>\n credentials: <% $.credentials %>\n ports: <% $.ports %>\n existing_nodes: <% $.existing_nodes %>\n on-success: probe_nodes\n on-error: get_candidate_nodes_failed\n publish:\n candidates: <% task().result %>\n\n get_candidate_nodes_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_candidate_nodes).result %>\n\n probe_nodes:\n action: tripleo.baremetal.probe_node\n on-success: send_message\n on-error: probe_nodes_failed\n input:\n ip: <% $.node.ip %>\n port: <% $.node.port %>\n username: <% $.node.username %>\n password: <% $.node.password %>\n with-items:\n - node in <% $.candidates %>\n publish:\n nodes_json: <% task().result.where($ != null) %>\n\n probe_nodes_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(probe_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.discover_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n nodes_json: <% $.get('nodes_json', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n discover_and_enroll_nodes:\n description: Run nodes discovery over the given IP range and enroll nodes\n\n input:\n - ip_addresses\n - credentials\n - ports: [623]\n - kernel_name: null\n - ramdisk_name: null\n - instance_boot_option: local\n - initial_state: manageable\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n discover_nodes:\n workflow: tripleo.baremetal.v1.discover_nodes\n input:\n ip_addresses: <% $.ip_addresses %>\n ports: <% $.ports %>\n credentials: <% $.credentials %>\n queue_name: <% $.queue_name %>\n on-success: enroll_nodes\n 
on-error: discover_nodes_failed\n publish:\n nodes_json: <% task().result.nodes_json %>\n\n discover_nodes_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(discover_nodes).result %>\n\n enroll_nodes:\n workflow: tripleo.baremetal.v1.register_or_update\n input:\n nodes_json: <% $.nodes_json %>\n kernel_name: <% $.kernel_name %>\n ramdisk_name: <% $.ramdisk_name %>\n instance_boot_option: <% $.instance_boot_option %>\n initial_state: <% $.initial_state %>\n on-success: send_message\n on-error: enroll_nodes_failed\n publish:\n registered_nodes: <% task().result.registered_nodes %>\n\n enroll_nodes_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(enroll_nodes).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.baremetal.v1.discover_and_enroll_nodes\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n registered_nodes: <% $.get('registered_nodes', []) %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.baremetal.v1", "tags": [], "created_at": "2018-06-26 05:45:21", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "72d3c295-550d-4657-9c60-84b5967ddaa5"} > >2018-06-26 11:15:21,224 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 11:15:21,226 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '--- >version: '2.0' >name: tripleo.storage.v1 >description: TripleO manages Ceph with ceph-ansible > >workflows: > ceph-install: > # allows for additional extra_vars via workflow input > input: > - ansible_playbook_verbosity: 0 > - ansible_skip_tags: 'package-install,with_pkg' > - 
ansible_env_variables: {} > - ansible_extra_env_variables: > ANSIBLE_CONFIG: /usr/share/ceph-ansible/ansible.cfg > ANSIBLE_ACTION_PLUGINS: /usr/share/ceph-ansible/plugins/actions/ > ANSIBLE_ROLES_PATH: /usr/share/ceph-ansible/roles/ > ANSIBLE_RETRY_FILES_ENABLED: 'False' > ANSIBLE_LOG_PATH: /var/log/mistral/ceph-install-workflow.log > ANSIBLE_LIBRARY: /usr/share/ceph-ansible/library/ > ANSIBLE_SSH_RETRIES: '3' > ANSIBLE_HOST_KEY_CHECKING: 'False' > DEFAULT_FORKS: '25' > - ceph_ansible_extra_vars: {} > - ceph_ansible_playbook: /usr/share/ceph-ansible/site-docker.yml.sample > - node_data_lookup: '{}' > tags: > - tripleo-common-managed > tasks: > collect_puppet_hieradata: > on-success: check_hieradata > publish: > hieradata: <% env().get('role_merged_configs', {}).values().select($.keys()).flatten().select(regex('^ceph::profile::params::osds$').search($)).where($ != null).toSet() %> > check_hieradata: > on-success: > - set_blacklisted_ips: <% not bool($.hieradata) %> > - fail(msg=<% 'Ceph deployment stopped, puppet-ceph hieradata found. Convert it into ceph-ansible variables. 
{0}'.format($.hieradata) %>): <% bool($.hieradata) %> > set_blacklisted_ips: > publish: > blacklisted_ips: <% env().get('blacklisted_ip_addresses', []) %> > on-success: set_ip_lists > set_ip_lists: > publish: > mgr_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_mgr_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > mon_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_mon_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > osd_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_osd_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > mds_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_mds_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > rgw_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_rgw_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > nfs_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_nfs_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > rbdmirror_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_rbdmirror_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > client_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_client_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > on-success: merge_ip_lists > merge_ip_lists: > publish: > ips_list: <% ($.mgr_ips + $.mon_ips + $.osd_ips + $.mds_ips + $.rgw_ips + $.nfs_ips + $.rbdmirror_ips + $.client_ips).toSet() %> > on-success: enable_ssh_admin > enable_ssh_admin: > workflow: tripleo.access.v1.enable_ssh_admin > input: > ssh_servers: <% $.ips_list %> > on-success: get_private_key > get_private_key: > action: tripleo.validations.get_privkey > publish: > private_key: <% task().result %> > on-success: make_fetch_directory > make_fetch_directory: > action: tripleo.files.make_temp_dir > publish: > fetch_directory: <% task().result.path %> > on-success: 
collect_nodes_uuid > collect_nodes_uuid: > action: tripleo.ansible-playbook > input: > inventory: > overcloud: > hosts: <% $.ips_list.toDict($, {}) %> > remote_user: tripleo-admin > become: true > become_user: root > verbosity: 0 > ssh_private_key: <% $.private_key %> > #NOTE(gfidente): set ANSIBLE_CALLBACK_WHITELIST to empty string to avoid spurious output > #in the json output. The publish: directive will in fact parse the output. > extra_env_variables: > ANSIBLE_CALLBACK_WHITELIST: '' > ANSIBLE_HOST_KEY_CHECKING: 'False' > ANSIBLE_STDOUT_CALLBACK: 'json' > playbook: > - hosts: overcloud > gather_facts: no > tasks: > - name: collect machine id > command: dmidecode -s system-uuid > publish: > ansible_output: <% json_parse(task().result.stderr) %> > on-success: set_ip_uuids > set_ip_uuids: > publish: > ip_uuids: <% let(root => $.ansible_output.get('plays')[0].get('tasks')[0].get('hosts')) -> $.ips_list.toDict($, $root.get($).get('stdout')) %> > on-success: parse_node_data_lookup > parse_node_data_lookup: > publish: > json_node_data_lookup: <% json_parse($.node_data_lookup) %> > on-success: map_node_data_lookup > map_node_data_lookup: > publish: > ips_data: <% let(uuids => $.ip_uuids, root => $) -> $.ips_list.toDict($, $root.json_node_data_lookup.get($uuids.get($, "NO-UUID-FOUND"), {})) %> > on-success: set_role_vars > set_role_vars: > publish: > # NOTE(gfidente): collect role settings from all tht roles > mgr_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_mgr_ansible_vars', {})).aggregate($1 + $2) %> > mon_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_mon_ansible_vars', {})).aggregate($1 + $2) %> > osd_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_osd_ansible_vars', {})).aggregate($1 + $2) %> > mds_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_mds_ansible_vars', {})).aggregate($1 + $2) %> > rgw_vars: <% env().get('role_merged_configs', 
{}).values().select($.get('ceph_rgw_ansible_vars', {})).aggregate($1 + $2) %> > nfs_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_nfs_ansible_vars', {})).aggregate($1 + $2) %> > rbdmirror_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_rbdmirror_ansible_vars', {})).aggregate($1 + $2) %> > client_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_client_ansible_vars', {})).aggregate($1 + $2) %> > on-success: build_extra_vars > build_extra_vars: > publish: > # NOTE(gfidente): merge vars from all ansible roles > extra_vars: <% {'fetch_directory'=> $.fetch_directory} + $.mgr_vars + $.mon_vars + $.osd_vars + $.mds_vars + $.rgw_vars + $.nfs_vars + $.client_vars + $.rbdmirror_vars + $.ceph_ansible_extra_vars %> > on-success: ceph_install > ceph_install: > with-items: playbook in <% list($.ceph_ansible_playbook).flatten() %> > concurrency: 1 > action: tripleo.ansible-playbook > input: > inventory: > mgrs: > hosts: <% let(root => $) -> $.mgr_ips.toDict($, $root.ips_data.get($, {})) %> > mons: > hosts: <% let(root => $) -> $.mon_ips.toDict($, $root.ips_data.get($, {})) %> > osds: > hosts: <% let(root => $) -> $.osd_ips.toDict($, $root.ips_data.get($, {})) %> > mdss: > hosts: <% let(root => $) -> $.mds_ips.toDict($, $root.ips_data.get($, {})) %> > rgws: > hosts: <% let(root => $) -> $.rgw_ips.toDict($, $root.ips_data.get($, {})) %> > nfss: > hosts: <% let(root => $) -> $.nfs_ips.toDict($, $root.ips_data.get($, {})) %> > rbdmirrors: > hosts: <% let(root => $) -> $.rbdmirror_ips.toDict($, $root.ips_data.get($, {})) %> > clients: > hosts: <% let(root => $) -> $.client_ips.toDict($, $root.ips_data.get($, {})) %> > all: > vars: <% $.extra_vars %> > playbook: <% $.playbook %> > remote_user: tripleo-admin > become: true > become_user: root > verbosity: <% $.ansible_playbook_verbosity %> > ssh_private_key: <% $.private_key %> > skip_tags: <% $.ansible_skip_tags %> > extra_env_variables: <% 
$.ansible_extra_env_variables.mergeWith($.ansible_env_variables) %> > extra_vars: > ireallymeanit: 'yes' > publish: > output: <% task().result %> > on-complete: purge_fetch_directory > purge_fetch_directory: > action: tripleo.files.remove_temp_dir path=<% $.fetch_directory %> >' >2018-06-26 11:15:21,542 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 9123 >2018-06-26 11:15:21,543 DEBUG: RESP: [201] Content-Length: 9123 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:21 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.storage.v1\ndescription: TripleO manages Ceph with ceph-ansible\n\nworkflows:\n ceph-install:\n # allows for additional extra_vars via workflow input\n input:\n - ansible_playbook_verbosity: 0\n - ansible_skip_tags: 'package-install,with_pkg'\n - ansible_env_variables: {}\n - ansible_extra_env_variables:\n ANSIBLE_CONFIG: /usr/share/ceph-ansible/ansible.cfg\n ANSIBLE_ACTION_PLUGINS: /usr/share/ceph-ansible/plugins/actions/\n ANSIBLE_ROLES_PATH: /usr/share/ceph-ansible/roles/\n ANSIBLE_RETRY_FILES_ENABLED: 'False'\n ANSIBLE_LOG_PATH: /var/log/mistral/ceph-install-workflow.log\n ANSIBLE_LIBRARY: /usr/share/ceph-ansible/library/\n ANSIBLE_SSH_RETRIES: '3'\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n DEFAULT_FORKS: '25'\n - ceph_ansible_extra_vars: {}\n - ceph_ansible_playbook: /usr/share/ceph-ansible/site-docker.yml.sample\n - node_data_lookup: '{}'\n tags:\n - tripleo-common-managed\n tasks:\n collect_puppet_hieradata:\n on-success: check_hieradata\n publish:\n hieradata: <% env().get('role_merged_configs', {}).values().select($.keys()).flatten().select(regex('^ceph::profile::params::osds$').search($)).where($ != null).toSet() %>\n check_hieradata:\n on-success:\n - set_blacklisted_ips: <% not bool($.hieradata) %>\n - fail(msg=<% 'Ceph deployment stopped, puppet-ceph hieradata found. Convert it into ceph-ansible variables. 
{0}'.format($.hieradata) %>): <% bool($.hieradata) %>\n set_blacklisted_ips:\n publish:\n blacklisted_ips: <% env().get('blacklisted_ip_addresses', []) %>\n on-success: set_ip_lists\n set_ip_lists:\n publish:\n mgr_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_mgr_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n mon_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_mon_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n osd_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_osd_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n mds_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_mds_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n rgw_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_rgw_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n nfs_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_nfs_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n rbdmirror_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_rbdmirror_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n client_ips: <% let(root => $) -> env().get('service_ips', {}).get('ceph_client_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n on-success: merge_ip_lists\n merge_ip_lists:\n publish:\n ips_list: <% ($.mgr_ips + $.mon_ips + $.osd_ips + $.mds_ips + $.rgw_ips + $.nfs_ips + $.rbdmirror_ips + $.client_ips).toSet() %>\n on-success: enable_ssh_admin\n enable_ssh_admin:\n workflow: tripleo.access.v1.enable_ssh_admin\n input:\n ssh_servers: <% $.ips_list %>\n on-success: get_private_key\n get_private_key:\n action: tripleo.validations.get_privkey\n publish:\n private_key: <% task().result %>\n on-success: make_fetch_directory\n make_fetch_directory:\n action: tripleo.files.make_temp_dir\n publish:\n fetch_directory: <% task().result.path %>\n on-success: 
collect_nodes_uuid\n collect_nodes_uuid:\n action: tripleo.ansible-playbook\n input:\n inventory:\n overcloud:\n hosts: <% $.ips_list.toDict($, {}) %>\n remote_user: tripleo-admin\n become: true\n become_user: root\n verbosity: 0\n ssh_private_key: <% $.private_key %>\n #NOTE(gfidente): set ANSIBLE_CALLBACK_WHITELIST to empty string to avoid spurious output\n #in the json output. The publish: directive will in fact parse the output.\n extra_env_variables:\n ANSIBLE_CALLBACK_WHITELIST: ''\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n ANSIBLE_STDOUT_CALLBACK: 'json'\n playbook:\n - hosts: overcloud\n gather_facts: no\n tasks:\n - name: collect machine id\n command: dmidecode -s system-uuid\n publish:\n ansible_output: <% json_parse(task().result.stderr) %>\n on-success: set_ip_uuids\n set_ip_uuids:\n publish:\n ip_uuids: <% let(root => $.ansible_output.get('plays')[0].get('tasks')[0].get('hosts')) -> $.ips_list.toDict($, $root.get($).get('stdout')) %>\n on-success: parse_node_data_lookup\n parse_node_data_lookup:\n publish:\n json_node_data_lookup: <% json_parse($.node_data_lookup) %>\n on-success: map_node_data_lookup\n map_node_data_lookup:\n publish:\n ips_data: <% let(uuids => $.ip_uuids, root => $) -> $.ips_list.toDict($, $root.json_node_data_lookup.get($uuids.get($, \"NO-UUID-FOUND\"), {})) %>\n on-success: set_role_vars\n set_role_vars:\n publish:\n # NOTE(gfidente): collect role settings from all tht roles\n mgr_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_mgr_ansible_vars', {})).aggregate($1 + $2) %>\n mon_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_mon_ansible_vars', {})).aggregate($1 + $2) %>\n osd_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_osd_ansible_vars', {})).aggregate($1 + $2) %>\n mds_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_mds_ansible_vars', {})).aggregate($1 + $2) %>\n rgw_vars: <% env().get('role_merged_configs', 
{}).values().select($.get('ceph_rgw_ansible_vars', {})).aggregate($1 + $2) %>\n nfs_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_nfs_ansible_vars', {})).aggregate($1 + $2) %>\n rbdmirror_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_rbdmirror_ansible_vars', {})).aggregate($1 + $2) %>\n client_vars: <% env().get('role_merged_configs', {}).values().select($.get('ceph_client_ansible_vars', {})).aggregate($1 + $2) %>\n on-success: build_extra_vars\n build_extra_vars:\n publish:\n # NOTE(gfidente): merge vars from all ansible roles\n extra_vars: <% {'fetch_directory'=> $.fetch_directory} + $.mgr_vars + $.mon_vars + $.osd_vars + $.mds_vars + $.rgw_vars + $.nfs_vars + $.client_vars + $.rbdmirror_vars + $.ceph_ansible_extra_vars %>\n on-success: ceph_install\n ceph_install:\n with-items: playbook in <% list($.ceph_ansible_playbook).flatten() %>\n concurrency: 1\n action: tripleo.ansible-playbook\n input:\n inventory:\n mgrs:\n hosts: <% let(root => $) -> $.mgr_ips.toDict($, $root.ips_data.get($, {})) %>\n mons:\n hosts: <% let(root => $) -> $.mon_ips.toDict($, $root.ips_data.get($, {})) %>\n osds:\n hosts: <% let(root => $) -> $.osd_ips.toDict($, $root.ips_data.get($, {})) %>\n mdss:\n hosts: <% let(root => $) -> $.mds_ips.toDict($, $root.ips_data.get($, {})) %>\n rgws:\n hosts: <% let(root => $) -> $.rgw_ips.toDict($, $root.ips_data.get($, {})) %>\n nfss:\n hosts: <% let(root => $) -> $.nfs_ips.toDict($, $root.ips_data.get($, {})) %>\n rbdmirrors:\n hosts: <% let(root => $) -> $.rbdmirror_ips.toDict($, $root.ips_data.get($, {})) %>\n clients:\n hosts: <% let(root => $) -> $.client_ips.toDict($, $root.ips_data.get($, {})) %>\n all:\n vars: <% $.extra_vars %>\n playbook: <% $.playbook %>\n remote_user: tripleo-admin\n become: true\n become_user: root\n verbosity: <% $.ansible_playbook_verbosity %>\n ssh_private_key: <% $.private_key %>\n skip_tags: <% $.ansible_skip_tags %>\n extra_env_variables: <% 
$.ansible_extra_env_variables.mergeWith($.ansible_env_variables) %>\n extra_vars:\n ireallymeanit: 'yes'\n publish:\n output: <% task().result %>\n on-complete: purge_fetch_directory\n purge_fetch_directory:\n action: tripleo.files.remove_temp_dir path=<% $.fetch_directory %>\n", "name": "tripleo.storage.v1", "tags": [], "created_at": "2018-06-26 05:45:21", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "4f526f45-f2d2-4b0f-ad1b-f5077b38054c"} > >2018-06-26 11:15:21,543 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 11:15:21,544 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '--- >version: '2.0' >name: tripleo.scale.v1 >description: TripleO Overcloud Deployment Workflows v1 > >workflows: > > delete_node: > description: deletes given overcloud nodes and updates the stack > > input: > - container > - nodes > - timeout: 240 > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > > delete_node: > action: tripleo.scale.delete_node nodes=<% $.nodes %> timeout=<% $.timeout %> container=<% $.container %> > on-success: wait_for_stack_in_progress > on-error: set_delete_node_failed > > set_delete_node_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(delete_node).result %> > > wait_for_stack_in_progress: > workflow: tripleo.stack.v1.wait_for_stack_in_progress stack=<% $.container %> > on-success: wait_for_stack_complete > on-error: wait_for_stack_in_progress_failed > > wait_for_stack_in_progress_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(wait_for_stack_in_progress).result %> > > wait_for_stack_complete: > workflow: tripleo.stack.v1.wait_for_stack_complete_or_failed stack=<% $.container %> > on-success: send_message > on-error: 
wait_for_stack_complete_failed > > wait_for_stack_complete_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(wait_for_stack_complete).result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.scale.v1.delete_node > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> >' >2018-06-26 11:15:21,694 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 2258 >2018-06-26 11:15:21,694 DEBUG: RESP: [201] Content-Length: 2258 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:21 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.scale.v1\ndescription: TripleO Overcloud Deployment Workflows v1\n\nworkflows:\n\n delete_node:\n description: deletes given overcloud nodes and updates the stack\n\n input:\n - container\n - nodes\n - timeout: 240\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n delete_node:\n action: tripleo.scale.delete_node nodes=<% $.nodes %> timeout=<% $.timeout %> container=<% $.container %>\n on-success: wait_for_stack_in_progress\n on-error: set_delete_node_failed\n\n set_delete_node_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(delete_node).result %>\n\n wait_for_stack_in_progress:\n workflow: tripleo.stack.v1.wait_for_stack_in_progress stack=<% $.container %>\n on-success: wait_for_stack_complete\n on-error: wait_for_stack_in_progress_failed\n\n wait_for_stack_in_progress_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(wait_for_stack_in_progress).result %>\n\n wait_for_stack_complete:\n workflow: tripleo.stack.v1.wait_for_stack_complete_or_failed stack=<% $.container %>\n on-success: send_message\n on-error: wait_for_stack_complete_failed\n\n 
wait_for_stack_complete_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(wait_for_stack_complete).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.scale.v1.delete_node\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.scale.v1", "tags": [], "created_at": "2018-06-26 05:45:21", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "841d3ee6-9b5e-48b0-a71e-dd404992fb29"} > >2018-06-26 11:15:21,694 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 11:15:21,695 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '--- >version: '2.0' >name: tripleo.octavia_post.v1 >description: TripleO Octavia post deployment Workflows > >workflows: > > octavia_post_deploy: > description: Octavia post deployment > input: > - amp_image_name > - amp_image_filename > - amp_image_tag > - amp_ssh_key_name > - amp_ssh_key_path > - amp_ssh_key_data > - auth_username > - auth_password > - auth_project_name > - lb_mgmt_net_name > - lb_mgmt_subnet_name > - lb_sec_group_name > - lb_mgmt_subnet_cidr > - lb_mgmt_subnet_gateway > - lb_mgmt_subnet_pool_start > - lb_mgmt_subnet_pool_end > - generate_certs > - octavia_ansible_playbook > - overcloud_admin > - ca_cert_path > - ca_private_key_path > - ca_passphrase > - client_cert_path > - mgmt_port_dev > - overcloud_password > - overcloud_project > - overcloud_pub_auth_uri > - ansible_extra_env_variables: > ANSIBLE_HOST_KEY_CHECKING: 'False' > ANSIBLE_SSH_RETRIES: '3' > tags: > - tripleo-common-managed > tasks: > 
get_overcloud_stack_details: > publish: > # TODO(beagles), we are making an assumption about the octavia heatlh manager and > # controller worker needing > # > octavia_controller_ips: <% env().get('service_ips', {}).get('octavia_worker_ctlplane_node_ips', []) %> > on-success: enable_ssh_admin > > enable_ssh_admin: > workflow: tripleo.access.v1.enable_ssh_admin > input: > ssh_servers: <% $.octavia_controller_ips %> > on-success: get_private_key > > get_private_key: > action: tripleo.validations.get_privkey > publish: > private_key: <% task().result %> > on-success: make_local_temp_directory > > make_local_temp_directory: > action: tripleo.files.make_temp_dir > publish: > undercloud_local_dir: <% task().result.path %> > on-success: make_remote_temp_directory > > make_remote_temp_directory: > action: tripleo.files.make_temp_dir > publish: > undercloud_remote_dir: <% task().result.path %> > on-success: build_local_connection_environment_vars > > build_local_connection_environment_vars: > publish: > ansible_local_connection_variables: <% dict('ANSIBLE_REMOTE_TEMP' => $.undercloud_remote_dir, 'ANSIBLE_LOCAL_TEMP' => $.undercloud_local_dir) + $.ansible_extra_env_variables %> > on-success: upload_amphora > > upload_amphora: > action: tripleo.ansible-playbook > input: > inventory: > undercloud: > hosts: > localhost: > ansible_connection: local > > playbook: <% $.octavia_ansible_playbook %> > remote_user: stack > extra_env_variables: <% $.ansible_local_connection_variables %> > extra_vars: > os_password: <% $.overcloud_password %> > os_username: <% $.overcloud_admin %> > os_project_name: <% $.overcloud_project %> > os_auth_url: <% $.overcloud_pub_auth_uri %> > os_auth_type: "password" > os_identity_api_version: "3" > amp_image_name: <% $.amp_image_name %> > amp_image_filename: <% $.amp_image_filename %> > amp_image_tag: <% $.amp_image_tag %> > amp_ssh_key_name: <% $.amp_ssh_key_name %> > amp_ssh_key_path: <% $.amp_ssh_key_path %> > amp_ssh_key_data: <% $.amp_ssh_key_data %> 
> auth_username: <% $.auth_username %> > auth_password: <% $.auth_password %> > auth_project_name: <% $.auth_project_name %> > on-success: config_octavia > > config_octavia: > action: tripleo.ansible-playbook > input: > inventory: > octavia_nodes: > hosts: <% $.octavia_controller_ips.toDict($, {}) %> > verbosity: 0 > playbook: <% $.octavia_ansible_playbook %> > remote_user: tripleo-admin > become: true > become_user: root > ssh_private_key: <% $.private_key %> > ssh_common_args: '-o StrictHostKeyChecking=no' > ssh_extra_args: '-o UserKnownHostsFile=/dev/null' > extra_env_variables: <% $.ansible_extra_env_variables %> > extra_vars: > os_password: <% $.overcloud_password %> > os_username: <% $.overcloud_admin %> > os_project_name: <% $.overcloud_project %> > os_auth_url: <% $.overcloud_pub_auth_uri %> > os_auth_type: "password" > os_identity_api_version: "3" > amp_image_tag: <% $.amp_image_tag %> > lb_mgmt_net_name: <% $.lb_mgmt_net_name %> > lb_mgmt_subnet_name: <% $.lb_mgmt_subnet_name %> > lb_sec_group_name: <% $.lb_sec_group_name %> > lb_mgmt_subnet_cidr: <% $.lb_mgmt_subnet_cidr %> > lb_mgmt_subnet_gateway: <% $.lb_mgmt_subnet_gateway %> > lb_mgmt_subnet_pool_start: <% $.lb_mgmt_subnet_pool_start %> > lb_mgmt_subnet_pool_end: <% $.lb_mgmt_subnet_pool_end %> > ca_cert_path: <% $.ca_cert_path %> > ca_private_key_path: <% $.ca_private_key_path %> > ca_passphrase: <% $.ca_passphrase %> > client_cert_path: <% $.client_cert_path %> > generate_certs: <% $.generate_certs %> > mgmt_port_dev: <% $.mgmt_port_dev %> > auth_project_name: <% $.auth_project_name %> > on-complete: purge_local_temp_dir > purge_local_temp_dir: > action: tripleo.files.remove_temp_dir path=<% $.undercloud_local_dir %> > on-complete: purge_remote_temp_dir > purge_remote_temp_dir: > action: tripleo.files.remove_temp_dir path=<% $.undercloud_remote_dir %> > >' >2018-06-26 11:15:21,902 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 6113 >2018-06-26 11:15:21,903 DEBUG: RESP: [201] 
Content-Length: 6113 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:21 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.octavia_post.v1\ndescription: TripleO Octavia post deployment Workflows\n\nworkflows:\n\n octavia_post_deploy:\n description: Octavia post deployment\n input:\n - amp_image_name\n - amp_image_filename\n - amp_image_tag\n - amp_ssh_key_name\n - amp_ssh_key_path\n - amp_ssh_key_data\n - auth_username\n - auth_password\n - auth_project_name\n - lb_mgmt_net_name\n - lb_mgmt_subnet_name\n - lb_sec_group_name\n - lb_mgmt_subnet_cidr\n - lb_mgmt_subnet_gateway\n - lb_mgmt_subnet_pool_start\n - lb_mgmt_subnet_pool_end\n - generate_certs\n - octavia_ansible_playbook\n - overcloud_admin\n - ca_cert_path\n - ca_private_key_path\n - ca_passphrase\n - client_cert_path\n - mgmt_port_dev\n - overcloud_password\n - overcloud_project\n - overcloud_pub_auth_uri\n - ansible_extra_env_variables:\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n ANSIBLE_SSH_RETRIES: '3'\n tags:\n - tripleo-common-managed\n tasks:\n get_overcloud_stack_details:\n publish:\n # TODO(beagles), we are making an assumption about the octavia heatlh manager and\n # controller worker needing\n #\n octavia_controller_ips: <% env().get('service_ips', {}).get('octavia_worker_ctlplane_node_ips', []) %>\n on-success: enable_ssh_admin\n\n enable_ssh_admin:\n workflow: tripleo.access.v1.enable_ssh_admin\n input:\n ssh_servers: <% $.octavia_controller_ips %>\n on-success: get_private_key\n\n get_private_key:\n action: tripleo.validations.get_privkey\n publish:\n private_key: <% task().result %>\n on-success: make_local_temp_directory\n\n make_local_temp_directory:\n action: tripleo.files.make_temp_dir\n publish:\n undercloud_local_dir: <% task().result.path %>\n on-success: make_remote_temp_directory\n\n make_remote_temp_directory:\n action: tripleo.files.make_temp_dir\n publish:\n undercloud_remote_dir: <% task().result.path %>\n on-success: 
build_local_connection_environment_vars\n\n build_local_connection_environment_vars:\n publish:\n ansible_local_connection_variables: <% dict('ANSIBLE_REMOTE_TEMP' => $.undercloud_remote_dir, 'ANSIBLE_LOCAL_TEMP' => $.undercloud_local_dir) + $.ansible_extra_env_variables %>\n on-success: upload_amphora\n\n upload_amphora:\n action: tripleo.ansible-playbook\n input:\n inventory:\n undercloud:\n hosts:\n localhost:\n ansible_connection: local\n\n playbook: <% $.octavia_ansible_playbook %>\n remote_user: stack\n extra_env_variables: <% $.ansible_local_connection_variables %>\n extra_vars:\n os_password: <% $.overcloud_password %>\n os_username: <% $.overcloud_admin %>\n os_project_name: <% $.overcloud_project %>\n os_auth_url: <% $.overcloud_pub_auth_uri %>\n os_auth_type: \"password\"\n os_identity_api_version: \"3\"\n amp_image_name: <% $.amp_image_name %>\n amp_image_filename: <% $.amp_image_filename %>\n amp_image_tag: <% $.amp_image_tag %>\n amp_ssh_key_name: <% $.amp_ssh_key_name %>\n amp_ssh_key_path: <% $.amp_ssh_key_path %>\n amp_ssh_key_data: <% $.amp_ssh_key_data %>\n auth_username: <% $.auth_username %>\n auth_password: <% $.auth_password %>\n auth_project_name: <% $.auth_project_name %>\n on-success: config_octavia\n\n config_octavia:\n action: tripleo.ansible-playbook\n input:\n inventory:\n octavia_nodes:\n hosts: <% $.octavia_controller_ips.toDict($, {}) %>\n verbosity: 0\n playbook: <% $.octavia_ansible_playbook %>\n remote_user: tripleo-admin\n become: true\n become_user: root\n ssh_private_key: <% $.private_key %>\n ssh_common_args: '-o StrictHostKeyChecking=no'\n ssh_extra_args: '-o UserKnownHostsFile=/dev/null'\n extra_env_variables: <% $.ansible_extra_env_variables %>\n extra_vars:\n os_password: <% $.overcloud_password %>\n os_username: <% $.overcloud_admin %>\n os_project_name: <% $.overcloud_project %>\n os_auth_url: <% $.overcloud_pub_auth_uri %>\n os_auth_type: \"password\"\n os_identity_api_version: \"3\"\n amp_image_tag: <% $.amp_image_tag 
%>\n lb_mgmt_net_name: <% $.lb_mgmt_net_name %>\n lb_mgmt_subnet_name: <% $.lb_mgmt_subnet_name %>\n lb_sec_group_name: <% $.lb_sec_group_name %>\n lb_mgmt_subnet_cidr: <% $.lb_mgmt_subnet_cidr %>\n lb_mgmt_subnet_gateway: <% $.lb_mgmt_subnet_gateway %>\n lb_mgmt_subnet_pool_start: <% $.lb_mgmt_subnet_pool_start %>\n lb_mgmt_subnet_pool_end: <% $.lb_mgmt_subnet_pool_end %>\n ca_cert_path: <% $.ca_cert_path %>\n ca_private_key_path: <% $.ca_private_key_path %>\n ca_passphrase: <% $.ca_passphrase %>\n client_cert_path: <% $.client_cert_path %>\n generate_certs: <% $.generate_certs %>\n mgmt_port_dev: <% $.mgmt_port_dev %>\n auth_project_name: <% $.auth_project_name %>\n on-complete: purge_local_temp_dir\n purge_local_temp_dir:\n action: tripleo.files.remove_temp_dir path=<% $.undercloud_local_dir %>\n on-complete: purge_remote_temp_dir\n purge_remote_temp_dir:\n action: tripleo.files.remove_temp_dir path=<% $.undercloud_remote_dir %>\n\n", "name": "tripleo.octavia_post.v1", "tags": [], "created_at": "2018-06-26 05:45:21", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "89412ce7-709e-4003-b11a-f5a5d58272a8"} > >2018-06-26 11:15:21,903 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 11:15:21,904 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '--- >version: '2.0' >name: tripleo.fernet_keys.v1 >description: TripleO fernet key rotation workflows > >workflows: > > rotate_fernet_keys: > > input: > - container > - queue_name: tripleo > - ansible_extra_env_variables: > ANSIBLE_HOST_KEY_CHECKING: 'False' > > tags: > - tripleo-common-managed > > tasks: > > rotate_keys: > action: tripleo.parameters.rotate_fernet_keys container=<% $.container %> > on-success: deploy_ssh_key > on-error: notify_zaqar > publish-on-error: > status: FAILED > 
message: <% task().result %> > > deploy_ssh_key: > workflow: tripleo.validations.v1.copy_ssh_key > on-success: get_privkey > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > get_privkey: > action: tripleo.validations.get_privkey > on-success: deploy_keys > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > deploy_keys: > action: tripleo.ansible-playbook > input: > hosts: keystone > inventory: /usr/bin/tripleo-ansible-inventory > ssh_private_key: <% task(get_privkey).result %> > extra_env_variables: <% $.ansible_extra_env_variables + dict(TRIPLEO_PLAN_NAME=>$.container) %> > verbosity: 0 > remote_user: heat-admin > become: true > extra_vars: > fernet_keys: <% task(rotate_keys).result %> > use_openstack_credentials: true > playbook: /usr/share/tripleo-common/playbooks/rotate-keys.yaml > on-success: notify_zaqar > publish: > status: SUCCESS > message: <% task().result %> > on-error: notify_zaqar > publish-on-error: > status: FAILED > message: <% task().result %> > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.fernet_keys.v1.rotate_fernet_keys > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> >' >2018-06-26 11:15:22,046 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 2609 >2018-06-26 11:15:22,047 DEBUG: RESP: [201] Content-Length: 2609 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:22 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.fernet_keys.v1\ndescription: TripleO fernet key rotation workflows\n\nworkflows:\n\n rotate_fernet_keys:\n\n input:\n - container\n - queue_name: tripleo\n - ansible_extra_env_variables:\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n\n tags:\n - tripleo-common-managed\n\n tasks:\n\n rotate_keys:\n action: 
tripleo.parameters.rotate_fernet_keys container=<% $.container %>\n on-success: deploy_ssh_key\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n deploy_ssh_key:\n workflow: tripleo.validations.v1.copy_ssh_key\n on-success: get_privkey\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n get_privkey:\n action: tripleo.validations.get_privkey\n on-success: deploy_keys\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n deploy_keys:\n action: tripleo.ansible-playbook\n input:\n hosts: keystone\n inventory: /usr/bin/tripleo-ansible-inventory\n ssh_private_key: <% task(get_privkey).result %>\n extra_env_variables: <% $.ansible_extra_env_variables + dict(TRIPLEO_PLAN_NAME=>$.container) %>\n verbosity: 0\n remote_user: heat-admin\n become: true\n extra_vars:\n fernet_keys: <% task(rotate_keys).result %>\n use_openstack_credentials: true\n playbook: /usr/share/tripleo-common/playbooks/rotate-keys.yaml\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-error: notify_zaqar\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.fernet_keys.v1.rotate_fernet_keys\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.fernet_keys.v1", "tags": [], "created_at": "2018-06-26 05:45:22", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "841d6882-9981-485e-93ad-59213867949e"} > >2018-06-26 11:15:22,047 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 11:15:22,048 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" 
-H "content-type: text/plain" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '--- >version: '2.0' >name: tripleo.swift_ring.v1 >description: Rebalance and distribute Swift rings using Ansible > > >workflows: > rebalance: > tags: > - tripleo-common-managed > > tasks: > get_private_key: > action: tripleo.validations.get_privkey > on-success: deploy_rings > > deploy_rings: > action: tripleo.ansible-playbook > publish: > output: <% task().result %> > input: > ssh_private_key: <% task(get_private_key).result %> > ssh_common_args: '-o StrictHostKeyChecking=no' > ssh_extra_args: '-o UserKnownHostsFile=/dev/null' > verbosity: 1 > remote_user: heat-admin > become: true > become_user: root > playbook: /usr/share/tripleo-common/playbooks/swift_ring_rebalance.yaml > inventory: /usr/bin/tripleo-ansible-inventory > use_openstack_credentials: true >' >2018-06-26 11:15:22,102 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 1140 >2018-06-26 11:15:22,103 DEBUG: RESP: [201] Content-Length: 1140 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:22 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.swift_ring.v1\ndescription: Rebalance and distribute Swift rings using Ansible\n\n\nworkflows:\n rebalance:\n tags:\n - tripleo-common-managed\n\n tasks:\n get_private_key:\n action: tripleo.validations.get_privkey\n on-success: deploy_rings\n\n deploy_rings:\n action: tripleo.ansible-playbook\n publish:\n output: <% task().result %>\n input:\n ssh_private_key: <% task(get_private_key).result %>\n ssh_common_args: '-o StrictHostKeyChecking=no'\n ssh_extra_args: '-o UserKnownHostsFile=/dev/null'\n verbosity: 1\n remote_user: heat-admin\n become: true\n become_user: root\n playbook: /usr/share/tripleo-common/playbooks/swift_ring_rebalance.yaml\n inventory: /usr/bin/tripleo-ansible-inventory\n use_openstack_credentials: true\n", "name": "tripleo.swift_ring.v1", "tags": [], "created_at": "2018-06-26 05:45:22", 
"scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "9dfae85e-49ae-4a27-a6f5-c2869e01158b"} > >2018-06-26 11:15:22,103 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 11:15:22,103 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '--- >version: '2.0' >name: tripleo.networks.v1 >description: TripleO Overcloud Networks Workflows v1 > >workflows: > > validate_networks_input: > description: > > Validate that required fields are present. > > input: > - networks > - queue_name: tripleo > > output: > result: <% task(validate_network_names).result %> > > tags: > - tripleo-common-managed > > tasks: > validate_network_names: > publish: > network_name_present: <% $.networks.all($.containsKey('name')) %> > on-success: > - set_status_success: <% $.network_name_present = true %> > - set_status_error: <% $.network_name_present = false %> > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > set_status_success: > on-success: notify_zaqar > publish: > status: SUCCESS > message: <% task(validate_network_names).result %> > > set_status_error: > on-success: notify_zaqar > publish: > status: FAILED > message: "One or more entries did not contain the required field 'name'" > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.networks.v1.validate_networks_input > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > update_networks: > description: > > Takes data in networks parameter in json format, validates its contents, > and persists them in network_data.yaml. After successful update, > templates are regenerated. 
> > input: > - container: overcloud > - networks > - network_data_file: 'network_data.yaml' > - queue_name: tripleo > > output: > network_data: <% $.network_data %> > > tags: > - tripleo-common-managed > > tasks: > validate_input: > description: > > validate the format of input (input includes required fields for > each network) > workflow: validate_networks_input > input: > networks: <% $.networks %> > on-success: validate_network_files > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > validate_network_files: > description: > > validate that Network names exist in Swift container > workflow: tripleo.plan_management.v1.validate_network_files > input: > container: <% $.container %> > network_data: <% $.networks %> > queue_name: <% $.queue_name %> > publish: > network_data: <% task().network_data %> > on-success: get_available_networks > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > get_available_networks: > workflow: tripleo.plan_management.v1.list_available_networks > input: > container: <% $.container %> > queue_name: <% $.queue_name %> > publish: > available_networks: <% task().result.available_networks %> > on-success: get_current_networks > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > get_current_networks: > workflow: tripleo.plan_management.v1.get_network_data > input: > container: <% $.container %> > network_data_file: <% $.network_data_file %> > queue_name: <% $.queue_name %> > publish: > current_networks: <% task().result.network_data %> > on-success: update_network_data > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > update_network_data: > description: > > Combine (or replace) the network data > action: tripleo.plan.update_networks > input: > networks: <% $.available_networks %> > current_networks: <% $.current_networks %> > remove_all: false > publish: > 
new_network_data: <% task().result.network_data %> > on-success: update_network_data_in_swift > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > update_network_data_in_swift: > description: > > update network_data.yaml object in Swift with data from workflow input > action: swift.put_object > input: > container: <% $.container %> > obj: <% $.network_data_file %> > contents: <% yaml_dump($.new_network_data) %> > on-success: regenerate_templates > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > regenerate_templates: > action: tripleo.templates.process container=<% $.container %> > on-success: get_networks > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > get_networks: > description: > > run GetNetworksAction to get updated contents of network_data.yaml and > provide it as output > workflow: tripleo.plan_management.v1.get_network_data > input: > container: <% $.container %> > network_data_file: <% $.network_data_file %> > queue_name: <% $.queue_name %> > publish: > network_data: <% task().network_data %> > on-success: set_status_success > publish-on-error: > status: FAILED > message: <% task().result %> > on-error: notify_zaqar > > set_status_success: > on-success: notify_zaqar > publish: > status: SUCCESS > message: <% task(get_networks).result %> > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.networks.v1.update_networks > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> >' >2018-06-26 11:15:22,428 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 6800 >2018-06-26 11:15:22,429 DEBUG: RESP: [201] Content-Length: 6800 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:22 GMT Connection: keep-alive >RESP BODY: {"definition": 
"---\nversion: '2.0'\nname: tripleo.networks.v1\ndescription: TripleO Overcloud Networks Workflows v1\n\nworkflows:\n\n validate_networks_input:\n description: >\n Validate that required fields are present.\n\n input:\n - networks\n - queue_name: tripleo\n\n output:\n result: <% task(validate_network_names).result %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n validate_network_names:\n publish:\n network_name_present: <% $.networks.all($.containsKey('name')) %>\n on-success:\n - set_status_success: <% $.network_name_present = true %>\n - set_status_error: <% $.network_name_present = false %>\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(validate_network_names).result %>\n\n set_status_error:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: \"One or more entries did not contain the required field 'name'\"\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.networks.v1.validate_networks_input\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n update_networks:\n description: >\n Takes data in networks parameter in json format, validates its contents,\n and persists them in network_data.yaml. 
After successful update,\n templates are regenerated.\n\n input:\n - container: overcloud\n - networks\n - network_data_file: 'network_data.yaml'\n - queue_name: tripleo\n\n output:\n network_data: <% $.network_data %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n validate_input:\n description: >\n validate the format of input (input includes required fields for\n each network)\n workflow: validate_networks_input\n input:\n networks: <% $.networks %>\n on-success: validate_network_files\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n validate_network_files:\n description: >\n validate that Network names exist in Swift container\n workflow: tripleo.plan_management.v1.validate_network_files\n input:\n container: <% $.container %>\n network_data: <% $.networks %>\n queue_name: <% $.queue_name %>\n publish:\n network_data: <% task().network_data %>\n on-success: get_available_networks\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n get_available_networks:\n workflow: tripleo.plan_management.v1.list_available_networks\n input:\n container: <% $.container %>\n queue_name: <% $.queue_name %>\n publish:\n available_networks: <% task().result.available_networks %>\n on-success: get_current_networks\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n get_current_networks:\n workflow: tripleo.plan_management.v1.get_network_data\n input:\n container: <% $.container %>\n network_data_file: <% $.network_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n current_networks: <% task().result.network_data %>\n on-success: update_network_data\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n update_network_data:\n description: >\n Combine (or replace) the network data\n action: tripleo.plan.update_networks\n input:\n networks: <% $.available_networks %>\n current_networks: <% 
$.current_networks %>\n remove_all: false\n publish:\n new_network_data: <% task().result.network_data %>\n on-success: update_network_data_in_swift\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n update_network_data_in_swift:\n description: >\n update network_data.yaml object in Swift with data from workflow input\n action: swift.put_object\n input:\n container: <% $.container %>\n obj: <% $.network_data_file %>\n contents: <% yaml_dump($.new_network_data) %>\n on-success: regenerate_templates\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n regenerate_templates:\n action: tripleo.templates.process container=<% $.container %>\n on-success: get_networks\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n get_networks:\n description: >\n run GetNetworksAction to get updated contents of network_data.yaml and\n provide it as output\n workflow: tripleo.plan_management.v1.get_network_data\n input:\n container: <% $.container %>\n network_data_file: <% $.network_data_file %>\n queue_name: <% $.queue_name %>\n publish:\n network_data: <% task().network_data %>\n on-success: set_status_success\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n on-error: notify_zaqar\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(get_networks).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.networks.v1.update_networks\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.networks.v1", "tags": [], "created_at": "2018-06-26 05:45:22", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "d3391a21-9fcf-4379-84e1-a2fadfa01b11"} > >2018-06-26 
11:15:22,429 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 11:15:22,430 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '--- >version: '2.0' >name: tripleo.package_update.v1 >description: TripleO update workflows > >workflows: > > # Updates a workload cloud stack > package_update_plan: > description: Take a container and perform a package update with possible breakpoints > > input: > - container > - container_registry > - ceph_ansible_playbook > - timeout: 240 > - queue_name: tripleo > - skip_deploy_identifier: False > - config_dir: '/tmp/' > > tags: > - tripleo-common-managed > > tasks: > update: > action: tripleo.package_update.update_stack > input: > timeout: <% $.timeout %> > container: <% $.container %> > container_registry: <% $.container_registry %> > ceph_ansible_playbook: <% $.ceph_ansible_playbook %> > on-success: clean_plan > on-error: set_update_failed > > clean_plan: > action: tripleo.plan.update_plan_environment > input: > container: <% $.container %> > parameter: CephAnsiblePlaybook > env_key: parameter_defaults > delete: true > on-success: send_message > on-error: set_update_failed > > > set_update_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(update).result %> > > send_message: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.package_update.v1.package_update_plan > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > get_config: > input: > - container > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > get_config: > action: tripleo.config.get_overcloud_config container=<% $.container %> > publish: 
> status: SUCCESS > message: <% task().result %> > publish-on-error: > status: FAILED > message: Init Minor update failed > on-complete: send_message > > send_message: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.package_update.v1.package_update_plan > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > update_nodes: > description: Take a container and perform an update nodes by nodes > > input: > - node_user: heat-admin > - nodes > - playbook > - inventory_file > - ansible_queue_name: tripleo > - module_path: /usr/share/ansible-modules > - ansible_extra_env_variables: > ANSIBLE_LOG_PATH: /var/log/mistral/package_update.log > ANSIBLE_HOST_KEY_CHECKING: 'False' > - verbosity: 1 > - work_dir: /var/lib/mistral > - skip_tags: '' > > tags: > - tripleo-common-managed > > tasks: > download_config: > action: tripleo.config.download_config > input: > work_dir: <% $.work_dir %>/<% execution().id %> > on-success: get_private_key > on-error: node_update_failed > > get_private_key: > action: tripleo.validations.get_privkey > publish: > private_key: <% task().result %> > on-success: node_update > > node_update: > action: tripleo.ansible-playbook > input: > inventory: <% $.inventory_file %> > playbook: <% $.work_dir %>/<% execution().id %>/<% $.playbook %> > remote_user: <% $.node_user %> > become: true > become_user: root > verbosity: <% $.verbosity %> > ssh_private_key: <% $.private_key %> > extra_env_variables: <% $.ansible_extra_env_variables %> > limit_hosts: <% $.nodes %> > module_path: <% $.module_path %> > queue_name: <% $.ansible_queue_name %> > execution_id: <% execution().id %> > skip_tags: <% $.skip_tags %> > trash_output: true > on-success: > - node_update_passed: <% task().result.returncode = 0 %> > - node_update_failed: <% task().result.returncode != 0 %> > on-error: 
node_update_failed > publish: > output: <% task().result %> > > node_update_passed: > on-success: notify_zaqar > publish: > status: SUCCESS > message: Updated nodes - <% $.nodes %> > > node_update_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: Failed to update nodes - <% $.nodes %>, please see the logs. > > notify_zaqar: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.ansible_queue_name %> > messages: > body: > type: tripleo.package_update.v1.update_nodes > payload: > status: <% $.status %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > update_converge_plan: > description: Take a container and perform the converge for minor update > > input: > - container > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > remove_noop: > action: tripleo.plan.remove_noop_deploystep > input: > container: <% $.container %> > on-success: send_message > on-error: set_update_failed > > set_update_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(remove_noop).result %> > > send_message: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.package_update.v1.update_converge_plan > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > converge_upgrade_plan: > description: Take a container and perform the converge step of a major upgrade > > input: > - container > - timeout: 240 > - queue_name: tripleo > - skip_deploy_identifier: False > > tags: > - tripleo-common-managed > > tasks: > remove_noop: > action: tripleo.plan.remove_noop_deploystep > input: > container: <% $.container %> > on-success: send_message > on-error: set_update_failed > > set_update_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(remove_noop).result %> 
> > send_message: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.major_upgrade.v1.converge_upgrade_plan > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> > > ffwd_upgrade_converge_plan: > description: ffwd-upgrade converge removes DeploymentSteps no-op from plan > > input: > - container > - queue_name: tripleo > > tags: > - tripleo-common-managed > > tasks: > remove_noop: > action: tripleo.plan.remove_noop_deploystep > input: > container: <% $.container %> > on-success: send_message > on-error: set_update_failed > > set_update_failed: > on-success: send_message > publish: > status: FAILED > message: <% task(remove_noop).result %> > > send_message: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.package_update.v1.ffwd_upgrade_converge_plan > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> >' >2018-06-26 11:15:22,932 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 8946 >2018-06-26 11:15:22,933 DEBUG: RESP: [201] Content-Length: 8946 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:22 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.package_update.v1\ndescription: TripleO update workflows\n\nworkflows:\n\n # Updates a workload cloud stack\n package_update_plan:\n description: Take a container and perform a package update with possible breakpoints\n\n input:\n - container\n - container_registry\n - ceph_ansible_playbook\n - timeout: 240\n - queue_name: tripleo\n - skip_deploy_identifier: False\n - config_dir: '/tmp/'\n\n tags:\n - tripleo-common-managed\n\n tasks:\n update:\n action: tripleo.package_update.update_stack\n input:\n 
timeout: <% $.timeout %>\n container: <% $.container %>\n container_registry: <% $.container_registry %>\n ceph_ansible_playbook: <% $.ceph_ansible_playbook %>\n on-success: clean_plan\n on-error: set_update_failed\n\n clean_plan:\n action: tripleo.plan.update_plan_environment\n input:\n container: <% $.container %>\n parameter: CephAnsiblePlaybook\n env_key: parameter_defaults\n delete: true\n on-success: send_message\n on-error: set_update_failed\n\n\n set_update_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(update).result %>\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.package_update.v1.package_update_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n get_config:\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_config:\n action: tripleo.config.get_overcloud_config container=<% $.container %>\n publish:\n status: SUCCESS\n message: <% task().result %>\n publish-on-error:\n status: FAILED\n message: Init Minor update failed\n on-complete: send_message\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.package_update.v1.package_update_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n update_nodes:\n description: Take a container and perform an update nodes by nodes\n\n input:\n - node_user: heat-admin\n - nodes\n - playbook\n - inventory_file\n - ansible_queue_name: tripleo\n - module_path: /usr/share/ansible-modules\n - ansible_extra_env_variables:\n ANSIBLE_LOG_PATH: /var/log/mistral/package_update.log\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n - verbosity: 
1\n - work_dir: /var/lib/mistral\n - skip_tags: ''\n\n tags:\n - tripleo-common-managed\n\n tasks:\n download_config:\n action: tripleo.config.download_config\n input:\n work_dir: <% $.work_dir %>/<% execution().id %>\n on-success: get_private_key\n on-error: node_update_failed\n\n get_private_key:\n action: tripleo.validations.get_privkey\n publish:\n private_key: <% task().result %>\n on-success: node_update\n\n node_update:\n action: tripleo.ansible-playbook\n input:\n inventory: <% $.inventory_file %>\n playbook: <% $.work_dir %>/<% execution().id %>/<% $.playbook %>\n remote_user: <% $.node_user %>\n become: true\n become_user: root\n verbosity: <% $.verbosity %>\n ssh_private_key: <% $.private_key %>\n extra_env_variables: <% $.ansible_extra_env_variables %>\n limit_hosts: <% $.nodes %>\n module_path: <% $.module_path %>\n queue_name: <% $.ansible_queue_name %>\n execution_id: <% execution().id %>\n skip_tags: <% $.skip_tags %>\n trash_output: true\n on-success:\n - node_update_passed: <% task().result.returncode = 0 %>\n - node_update_failed: <% task().result.returncode != 0 %>\n on-error: node_update_failed\n publish:\n output: <% task().result %>\n\n node_update_passed:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: Updated nodes - <% $.nodes %>\n\n node_update_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: Failed to update nodes - <% $.nodes %>, please see the logs.\n\n notify_zaqar:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.ansible_queue_name %>\n messages:\n body:\n type: tripleo.package_update.v1.update_nodes\n payload:\n status: <% $.status %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n update_converge_plan:\n description: Take a container and perform the converge for minor update\n\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n remove_noop:\n action: 
tripleo.plan.remove_noop_deploystep\n input:\n container: <% $.container %>\n on-success: send_message\n on-error: set_update_failed\n\n set_update_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(remove_noop).result %>\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.package_update.v1.update_converge_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n converge_upgrade_plan:\n description: Take a container and perform the converge step of a major upgrade\n\n input:\n - container\n - timeout: 240\n - queue_name: tripleo\n - skip_deploy_identifier: False\n\n tags:\n - tripleo-common-managed\n\n tasks:\n remove_noop:\n action: tripleo.plan.remove_noop_deploystep\n input:\n container: <% $.container %>\n on-success: send_message\n on-error: set_update_failed\n\n set_update_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(remove_noop).result %>\n\n send_message:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.major_upgrade.v1.converge_upgrade_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n\n ffwd_upgrade_converge_plan:\n description: ffwd-upgrade converge removes DeploymentSteps no-op from plan\n\n input:\n - container\n - queue_name: tripleo\n\n tags:\n - tripleo-common-managed\n\n tasks:\n remove_noop:\n action: tripleo.plan.remove_noop_deploystep\n input:\n container: <% $.container %>\n on-success: send_message\n on-error: set_update_failed\n\n set_update_failed:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(remove_noop).result %>\n\n send_message:\n action: 
zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.package_update.v1.ffwd_upgrade_converge_plan\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.package_update.v1", "tags": [], "created_at": "2018-06-26 05:45:22", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "e939c1f0-9f9e-4344-a782-64d26a08a850"} > >2018-06-26 11:15:22,933 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 11:15:22,934 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '--- >version: '2.0' >name: tripleo.undercloud_backup.v1 >description: TripleO Undercloud backup workflows > >workflows: > > backup: > description: This workflow will launch the Undercloud backup > tags: > - tripleo-common-managed > input: > - sources_path: '/home/stack/' > - queue_name: tripleo > tasks: > # Action to know if there is enough available space > # to run the Undercloud backup > get_free_space: > action: tripleo.undercloud.get_free_space > publish: > status: SUCCESS > message: <% task().result %> > free_space: <% task().result %> > on-success: create_backup_dir > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > # We create a temp directory to store the Undercloud > # backup > create_backup_dir: > action: tripleo.undercloud.create_backup_dir > publish: > status: SUCCESS > message: <% task().result %> > backup_path: <% task().result %> > on-success: get_database_credentials > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > # The Undercloud database password for the root > # user is stored in a Mistral 
environment, we > # need the password in order to run the database dump > get_database_credentials: > action: mistral.environments_get name='tripleo.undercloud-config' > publish: > status: SUCCESS > message: <% task().result %> > undercloud_db_password: <% task(get_database_credentials).result.variables.undercloud_db_password %> > on-success: create_database_backup > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > # Run the DB dump of all the databases and store the result > # in the temporary folder > create_database_backup: > input: > path: <% $.backup_path.path %> > dbuser: root > dbpassword: <% $.undercloud_db_password %> > action: tripleo.undercloud.create_database_backup > publish: > status: SUCCESS > message: <% task().result %> > on-success: create_fs_backup > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > # This action will run the fs backup > create_fs_backup: > input: > sources_path: <% $.sources_path %> > path: <% $.backup_path.path %> > action: tripleo.undercloud.create_file_system_backup > publish: > status: SUCCESS > message: <% task().result %> > on-success: upload_backup > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > # This action will push the backup to swift > upload_backup: > input: > backup_path: <% $.backup_path.path %> > action: tripleo.undercloud.upload_backup_to_swift > publish: > status: SUCCESS > message: <% task().result %> > on-success: cleanup_backup > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > # This action will remove the backup temp folder > cleanup_backup: > input: > path: <% $.backup_path.path %> > action: tripleo.undercloud.remove_temp_dir > publish: > status: SUCCESS > message: <% task().result %> > on-success: send_message > on-error: send_message > publish-on-error: > status: FAILED > message: <% task().result %> > > # Sending a 
message to show that the backup finished > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.undercloud_backup.v1.launch > payload: > status: <% $.get('status', 'SUCCESS') %> > execution: <% execution() %> > message: <% $.get('message', '') %> > on-success: > - fail: <% $.get('status') = "FAILED" %> >' >2018-06-26 11:15:23,155 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 4669 >2018-06-26 11:15:23,156 DEBUG: RESP: [201] Content-Length: 4669 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:23 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.undercloud_backup.v1\ndescription: TripleO Undercloud backup workflows\n\nworkflows:\n\n backup:\n description: This workflow will launch the Undercloud backup\n tags:\n - tripleo-common-managed\n input:\n - sources_path: '/home/stack/'\n - queue_name: tripleo\n tasks:\n # Action to know if there is enough available space\n # to run the Undercloud backup\n get_free_space:\n action: tripleo.undercloud.get_free_space\n publish:\n status: SUCCESS\n message: <% task().result %>\n free_space: <% task().result %>\n on-success: create_backup_dir\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # We create a temp directory to store the Undercloud\n # backup\n create_backup_dir:\n action: tripleo.undercloud.create_backup_dir\n publish:\n status: SUCCESS\n message: <% task().result %>\n backup_path: <% task().result %>\n on-success: get_database_credentials\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # The Undercloud database password for the root\n # user is stored in a Mistral environment, we\n # need the password in order to run the database dump\n get_database_credentials:\n action: mistral.environments_get name='tripleo.undercloud-config'\n publish:\n status: SUCCESS\n 
message: <% task().result %>\n undercloud_db_password: <% task(get_database_credentials).result.variables.undercloud_db_password %>\n on-success: create_database_backup\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # Run the DB dump of all the databases and store the result\n # in the temporary folder\n create_database_backup:\n input:\n path: <% $.backup_path.path %>\n dbuser: root\n dbpassword: <% $.undercloud_db_password %>\n action: tripleo.undercloud.create_database_backup\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: create_fs_backup\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # This action will run the fs backup\n create_fs_backup:\n input:\n sources_path: <% $.sources_path %>\n path: <% $.backup_path.path %>\n action: tripleo.undercloud.create_file_system_backup\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: upload_backup\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # This action will push the backup to swift\n upload_backup:\n input:\n backup_path: <% $.backup_path.path %>\n action: tripleo.undercloud.upload_backup_to_swift\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: cleanup_backup\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # This action will remove the backup temp folder\n cleanup_backup:\n input:\n path: <% $.backup_path.path %>\n action: tripleo.undercloud.remove_temp_dir\n publish:\n status: SUCCESS\n message: <% task().result %>\n on-success: send_message\n on-error: send_message\n publish-on-error:\n status: FAILED\n message: <% task().result %>\n\n # Sending a message to show that the backup finished\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: 
tripleo.undercloud_backup.v1.launch\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n execution: <% execution() %>\n message: <% $.get('message', '') %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.undercloud_backup.v1", "tags": [], "created_at": "2018-06-26 05:45:23", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "c6ef6156-6032-4482-ad29-870981e18422"} > >2018-06-26 11:15:23,156 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 11:15:23,157 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '--- >version: '2.0' >name: tripleo.skydive_ansible.v1 >description: TripleO manages Skydive with skydive-ansible > >workflows: > skydive_install: > # allows for additional extra_vars via workflow input > input: > - ansible_playbook_verbosity: 0 > - ansible_extra_env_variables: > ANSIBLE_ROLES_PATH: /usr/share/skydive-ansible/roles/ > ANSIBLE_RETRY_FILES_ENABLED: 'False' > ANSIBLE_LOG_PATH: /var/log/mistral/skydive-install-workflow.log > ANSIBLE_HOST_KEY_CHECKING: 'False' > - skydive_ansible_extra_vars: {} > - skydive_ansible_playbook: /usr/share/skydive-ansible/playbook.yml.sample > tags: > - tripleo-common-managed > tasks: > set_blacklisted_ips: > publish: > blacklisted_ips: <% env().get('blacklisted_ip_addresses', []) %> > on-success: set_ip_lists > set_ip_lists: > publish: > agent_ips: <% let(root => $) -> env().get('service_ips', {}).get('skydive_agent_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > analyzer_ips: <% let(root => $) -> env().get('service_ips', {}).get('skydive_analyzer_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %> > on-success: enable_ssh_admin > enable_ssh_admin: > workflow: tripleo.access.v1.enable_ssh_admin > input: > ssh_servers: <% 
($.agent_ips + $.analyzer_ips).toSet() %> > on-success: get_private_key > get_private_key: > action: tripleo.validations.get_privkey > publish: > private_key: <% task().result %> > on-success: set_fork_count > set_fork_count: > publish: # unique list of all IPs: make each list a set, take unions and count > fork_count: <% min($.agent_ips.toSet().union($.analyzer_ips.toSet()).count(), 100) %> # don't use >100 forks > on-success: set_role_vars > set_role_vars: > publish: > # NOTE(sbaubeau): collect role settings from all tht roles > agent_vars: <% env().get('role_merged_configs', {}).values().select($.get('skydive_agent_ansible_vars', {})).aggregate($1 + $2) %> > analyzer_vars: <% env().get('role_merged_configs', {}).values().select($.get('skydive_analyzer_ansible_vars', {})).aggregate($1 + $2) %> > on-success: build_extra_vars > build_extra_vars: > publish: > # NOTE(sbaubeau): merge vars from all ansible roles > extra_vars: <% $.agent_vars + $.analyzer_vars + $.skydive_ansible_extra_vars %> > on-success: skydive_install > skydive_install: > action: tripleo.ansible-playbook > input: > inventory: > agents: > hosts: <% $.agent_ips.toDict($, {}) %> > analyzers: > hosts: <% $.analyzer_ips.toDict($, {}) %> > playbook: <% $.skydive_ansible_playbook %> > remote_user: tripleo-admin > become: true > become_user: root > verbosity: <% $.ansible_playbook_verbosity %> > forks: <% $.fork_count %> > ssh_private_key: <% $.private_key %> > extra_env_variables: <% $.ansible_extra_env_variables %> > extra_vars: <% $.extra_vars %> > publish: > output: <% task().result %> >' >2018-06-26 11:15:23,318 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 3507 >2018-06-26 11:15:23,319 DEBUG: RESP: [201] Content-Length: 3507 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:23 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.skydive_ansible.v1\ndescription: TripleO manages Skydive with skydive-ansible\n\nworkflows:\n 
skydive_install:\n # allows for additional extra_vars via workflow input\n input:\n - ansible_playbook_verbosity: 0\n - ansible_extra_env_variables:\n ANSIBLE_ROLES_PATH: /usr/share/skydive-ansible/roles/\n ANSIBLE_RETRY_FILES_ENABLED: 'False'\n ANSIBLE_LOG_PATH: /var/log/mistral/skydive-install-workflow.log\n ANSIBLE_HOST_KEY_CHECKING: 'False'\n - skydive_ansible_extra_vars: {}\n - skydive_ansible_playbook: /usr/share/skydive-ansible/playbook.yml.sample\n tags:\n - tripleo-common-managed\n tasks:\n set_blacklisted_ips:\n publish:\n blacklisted_ips: <% env().get('blacklisted_ip_addresses', []) %>\n on-success: set_ip_lists\n set_ip_lists:\n publish:\n agent_ips: <% let(root => $) -> env().get('service_ips', {}).get('skydive_agent_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n analyzer_ips: <% let(root => $) -> env().get('service_ips', {}).get('skydive_analyzer_ctlplane_node_ips', []).where(not ($ in $root.blacklisted_ips)) %>\n on-success: enable_ssh_admin\n enable_ssh_admin:\n workflow: tripleo.access.v1.enable_ssh_admin\n input:\n ssh_servers: <% ($.agent_ips + $.analyzer_ips).toSet() %>\n on-success: get_private_key\n get_private_key:\n action: tripleo.validations.get_privkey\n publish:\n private_key: <% task().result %>\n on-success: set_fork_count\n set_fork_count:\n publish: # unique list of all IPs: make each list a set, take unions and count\n fork_count: <% min($.agent_ips.toSet().union($.analyzer_ips.toSet()).count(), 100) %> # don't use >100 forks\n on-success: set_role_vars\n set_role_vars:\n publish:\n # NOTE(sbaubeau): collect role settings from all tht roles\n agent_vars: <% env().get('role_merged_configs', {}).values().select($.get('skydive_agent_ansible_vars', {})).aggregate($1 + $2) %>\n analyzer_vars: <% env().get('role_merged_configs', {}).values().select($.get('skydive_analyzer_ansible_vars', {})).aggregate($1 + $2) %>\n on-success: build_extra_vars\n build_extra_vars:\n publish:\n # NOTE(sbaubeau): merge vars from all 
ansible roles\n extra_vars: <% $.agent_vars + $.analyzer_vars + $.skydive_ansible_extra_vars %>\n on-success: skydive_install\n skydive_install:\n action: tripleo.ansible-playbook\n input:\n inventory:\n agents:\n hosts: <% $.agent_ips.toDict($, {}) %>\n analyzers:\n hosts: <% $.analyzer_ips.toDict($, {}) %>\n playbook: <% $.skydive_ansible_playbook %>\n remote_user: tripleo-admin\n become: true\n become_user: root\n verbosity: <% $.ansible_playbook_verbosity %>\n forks: <% $.fork_count %>\n ssh_private_key: <% $.private_key %>\n extra_env_variables: <% $.ansible_extra_env_variables %>\n extra_vars: <% $.extra_vars %>\n publish:\n output: <% task().result %>\n", "name": "tripleo.skydive_ansible.v1", "tags": [], "created_at": "2018-06-26 05:45:23", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "c31d8525-c544-444a-85bc-89d001c1db21"} > >2018-06-26 11:15:23,319 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 11:15:23,320 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '--- >version: '2.0' >name: tripleo.derive_params.v1 >description: TripleO Workflows to derive deployment parameters from the introspected data > >workflows: > > derive_parameters: > description: The main workflow for deriving parameters from the introspected data > > input: > - plan: overcloud > - queue_name: tripleo > - user_inputs: {} > > tags: > - tripleo-common-managed > > tasks: > get_flattened_parameters: > action: tripleo.parameters.get_flatten container=<% $.plan %> > publish: > environment_parameters: <% task().result.environment_parameters %> > heat_resource_tree: <% task().result.heat_resource_tree %> > on-success: > - get_roles: <% $.environment_parameters and $.heat_resource_tree %> > - set_status_failed_get_flattened_parameters: <% (not 
$.environment_parameters) or (not $.heat_resource_tree) %> > on-error: set_status_failed_get_flattened_parameters > > get_roles: > action: tripleo.role.list container=<% $.plan %> > publish: > role_name_list: <% task().result %> > on-success: > - get_valid_roles: <% $.role_name_list %> > - set_status_failed_get_roles: <% not $.role_name_list %> > on-error: set_status_failed_on_error_get_roles > > # Obtain only the roles which has count > 0, by checking <RoleName>Count parameter, like ComputeCount > get_valid_roles: > publish: > valid_role_name_list: <% let(hr => $.heat_resource_tree.parameters) -> $.role_name_list.where(int($hr.get(concat($, 'Count'), {}).get('default', 0)) > 0) %> > on-success: > - for_each_role: <% $.valid_role_name_list %> > - set_status_failed_get_valid_roles: <% not $.valid_role_name_list %> > > # Execute the basic preparation workflow for each role to get introspection data > for_each_role: > with-items: role_name in <% $.valid_role_name_list %> > concurrency: 1 > workflow: _derive_parameters_per_role > input: > plan: <% $.plan %> > role_name: <% $.role_name %> > environment_parameters: <% $.environment_parameters %> > heat_resource_tree: <% $.heat_resource_tree %> > user_inputs: <% $.user_inputs %> > publish: > # Gets all the roles derived parameters as dictionary > result: <% task().result.select($.get('derived_parameters', {})).sum() %> > on-success: reset_derive_parameters_in_plan > on-error: set_status_failed_for_each_role > > reset_derive_parameters_in_plan: > action: tripleo.parameters.reset > input: > container: <% $.plan %> > key: 'derived_parameters' > on-success: > # Add the derived parameters to the deployment plan only when $.result > # (the derived parameters) is non-empty. Otherwise, we're done. 
> - update_derive_parameters_in_plan: <% $.result %> > - send_message: <% not $.result %> > on-error: set_status_failed_reset_derive_parameters_in_plan > > update_derive_parameters_in_plan: > action: tripleo.parameters.update > input: > container: <% $.plan %> > key: 'derived_parameters' > parameters: <% $.get('result', {}) %> > on-success: send_message > on-error: set_status_failed_update_derive_parameters_in_plan > > set_status_failed_get_flattened_parameters: > on-success: send_message > publish: > status: FAILED > message: <% task(get_flattened_parameters).result %> > > set_status_failed_get_roles: > on-success: send_message > publish: > status: FAILED > message: "Unable to determine the list of roles in the deployment plan" > > set_status_failed_on_error_get_roles: > on-success: send_message > publish: > status: FAILED > message: <% task(get_roles).result %> > > set_status_failed_get_valid_roles: > on-success: send_message > publish: > status: FAILED > message: 'Unable to determine the list of valid roles in the deployment plan.' > > set_status_failed_for_each_role: > on-success: update_message_format > publish: > status: FAILED > # gets the status and message for all roles from task result. > message: <% task(for_each_role).result.select(dict('role_name' => $.role_name, 'status' => $.get('status', 'SUCCESS'), 'message' => $.get('message', ''))) %> > > update_message_format: > on-success: send_message > publish: > # updates the message format(Role 'role name': message) for each roles which are failed and joins the message list as string with ', ' separator. 
> message: <% $.message.where($.get('status', 'SUCCESS') != 'SUCCESS').select(concat("Role '{}':".format($.role_name), " ", $.get('message', '(error unknown)'))).join(', ') %> > > set_status_failed_reset_derive_parameters_in_plan: > on-success: send_message > publish: > status: FAILED > message: <% task(reset_derive_parameters_in_plan).result %> > > set_status_failed_update_derive_parameters_in_plan: > on-success: send_message > publish: > status: FAILED > message: <% task(update_derive_parameters_in_plan).result %> > > send_message: > action: zaqar.queue_post > retry: count=5 delay=1 > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.derive_params.v1.derive_parameters > payload: > status: <% $.get('status', 'SUCCESS') %> > message: <% $.get('message', '') %> > result: <% $.get('result', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = 'FAILED' %> > > > _derive_parameters_per_role: > description: > > Workflow which runs per role to get the introspection data on the first matching node assigned to role. > Once introspection data is fetched, this worklow will trigger the actual derive parameters workflow > input: > - plan > - role_name > - environment_parameters > - heat_resource_tree > - user_inputs > > output: > derived_parameters: <% $.get('derived_parameters', {}) %> > # Need role_name in output parameter to display the status for all roles in main workflow when any role fails here. > role_name: <% $.role_name %> > > tags: > - tripleo-common-managed > > tasks: > get_role_info: > workflow: _get_role_info > input: > role_name: <% $.role_name %> > heat_resource_tree: <% $.heat_resource_tree %> > publish: > role_features: <% task().result.get('role_features', []) %> > role_services: <% task().result.get('role_services', []) %> > on-success: > # Continue only if there are features associated with this role. Otherwise, we're done. 
> - get_flavor_name: <% $.role_features %> > on-error: set_status_failed_get_role_info > > # Getting introspection data workflow, which will take care of > # 1) profile and flavor based mapping > # 2) Nova placement api based mapping > # Currently we have implemented profile and flavor based mapping > # TODO-Nova placement api based mapping is pending, we will enchance it later. > get_flavor_name: > publish: > flavor_name: <% let(param_name => concat('Overcloud', $.role_name, 'Flavor').replace('OvercloudControllerFlavor', 'OvercloudControlFlavor')) -> $.heat_resource_tree.parameters.get($param_name, {}).get('default', '') %> > on-success: > - get_profile_name: <% $.flavor_name %> > - set_status_failed_get_flavor_name: <% not $.flavor_name %> > > get_profile_name: > action: tripleo.parameters.get_profile_of_flavor flavor_name=<% $.flavor_name %> > publish: > profile_name: <% task().result %> > on-success: get_profile_node > on-error: set_status_failed_get_profile_name > > get_profile_node: > workflow: tripleo.baremetal.v1.nodes_with_profile > input: > profile: <% $.profile_name %> > publish: > profile_node_uuid: <% task().result.matching_nodes.first('') %> > on-success: > - get_introspection_data: <% $.profile_node_uuid %> > - set_status_failed_no_matching_node_get_profile_node: <% not $.profile_node_uuid %> > on-error: set_status_failed_on_error_get_profile_node > > get_introspection_data: > action: baremetal_introspection.get_data uuid=<% $.profile_node_uuid %> > publish: > hw_data: <% task().result %> > # Establish an empty dictionary of derived_parameters prior to > # invoking the individual "feature" algorithms > derived_parameters: <% dict() %> > on-success: handle_dpdk_feature > on-error: set_status_failed_get_introspection_data > > handle_dpdk_feature: > on-success: > - get_dpdk_derive_params: <% $.role_features.contains('DPDK') %> > - handle_sriov_feature: <% not $.role_features.contains('DPDK') %> > > get_dpdk_derive_params: > workflow: 
tripleo.derive_params_formulas.v1.dpdk_derive_params > input: > plan: <% $.plan %> > role_name: <% $.role_name %> > hw_data: <% $.hw_data %> > user_inputs: <% $.user_inputs %> > publish: > derived_parameters: <% task().result.get('derived_parameters', {}) %> > on-success: handle_sriov_feature > on-error: set_status_failed_get_dpdk_derive_params > > handle_sriov_feature: > on-success: > - get_sriov_derive_params: <% $.role_features.contains('SRIOV') %> > - handle_host_feature: <% not $.role_features.contains('SRIOV') %> > > get_sriov_derive_params: > workflow: tripleo.derive_params_formulas.v1.sriov_derive_params > input: > role_name: <% $.role_name %> > hw_data: <% $.hw_data %> > derived_parameters: <% $.derived_parameters %> > publish: > derived_parameters: <% task().result.get('derived_parameters', {}) %> > on-success: handle_host_feature > on-error: set_status_failed_get_sriov_derive_params > > handle_host_feature: > on-success: > - get_host_derive_params: <% $.role_features.contains('HOST') %> > - handle_hci_feature: <% not $.role_features.contains('HOST') %> > > get_host_derive_params: > workflow: tripleo.derive_params_formulas.v1.host_derive_params > input: > role_name: <% $.role_name %> > hw_data: <% $.hw_data %> > user_inputs: <% $.user_inputs %> > derived_parameters: <% $.derived_parameters %> > publish: > derived_parameters: <% task().result.get('derived_parameters', {}) %> > on-success: handle_hci_feature > on-error: set_status_failed_get_host_derive_params > > handle_hci_feature: > on-success: > - get_hci_derive_params: <% $.role_features.contains('HCI') %> > > get_hci_derive_params: > workflow: tripleo.derive_params_formulas.v1.hci_derive_params > input: > role_name: <% $.role_name %> > environment_parameters: <% $.environment_parameters %> > heat_resource_tree: <% $.heat_resource_tree %> > introspection_data: <% $.hw_data %> > user_inputs: <% $.user_inputs %> > derived_parameters: <% $.derived_parameters %> > publish: > derived_parameters: <% 
task().result.get('derived_parameters', {}) %> > on-error: set_status_failed_get_hci_derive_params > # Done (no more derived parameter features) > > set_status_failed_get_role_info: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% task(get_role_info).result.get('message', '') %> > on-success: fail > > set_status_failed_get_flavor_name: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% "Unable to determine flavor for role '{0}'".format($.role_name) %> > on-success: fail > > set_status_failed_get_profile_name: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% task(get_profile_name).result %> > on-success: fail > > set_status_failed_no_matching_node_get_profile_node: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% "Unable to determine matching node for profile '{0}'".format($.profile_name) %> > on-success: fail > > set_status_failed_on_error_get_profile_node: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% task(get_profile_node).result %> > on-success: fail > > set_status_failed_get_introspection_data: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% task(get_introspection_data).result %> > on-success: fail > > set_status_failed_get_dpdk_derive_params: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% task(get_dpdk_derive_params).result %> > on-success: fail > > set_status_failed_get_sriov_derive_params: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% task(get_sriov_derive_params).result %> > on-success: fail > > set_status_failed_get_host_derive_params: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% task(get_host_derive_params).result %> > on-success: fail > > set_status_failed_get_hci_derive_params: > publish: > role_name: <% $.role_name %> > status: FAILED > message: <% task(get_hci_derive_params).result %> > on-success: fail > > > _get_role_info: > 
description: > > Workflow that determines the list of derived parameter features (DPDK, > HCI, etc.) for a role based on the services assigned to the role. > > input: > - role_name > - heat_resource_tree > > tags: > - tripleo-common-managed > > tasks: > get_resource_chains: > publish: > resource_chains: <% $.heat_resource_tree.resources.values().where($.get('type', '') = 'OS::Heat::ResourceChain') %> > on-success: > - get_role_chain: <% $.resource_chains %> > - set_status_failed_get_resource_chains: <% not $.resource_chains %> > > get_role_chain: > publish: > role_chain: <% let(chain_name => concat($.role_name, 'ServiceChain'))-> $.heat_resource_tree.resources.values().where($.name = $chain_name).first({}) %> > on-success: > - get_service_chain: <% $.role_chain %> > - set_status_failed_get_role_chain: <% not $.role_chain %> > > get_service_chain: > publish: > service_chain: <% let(resources => $.role_chain.resources)-> $.resource_chains.where($resources.contains($.id)).first('') %> > on-success: > - get_role_services: <% $.service_chain %> > - set_status_failed_get_service_chain: <% not $.service_chain %> > > get_role_services: > publish: > role_services: <% let(resources => $.heat_resource_tree.resources)-> $.service_chain.resources.select($resources.get($)) %> > on-success: > - check_features: <% $.role_services %> > - set_status_failed_get_role_services: <% not $.role_services %> > > check_features: > on-success: build_feature_dict > publish: > # The role supports the DPDK feature if the NeutronDatapathType parameter is present > dpdk: <% let(resources => $.heat_resource_tree.resources) -> $.role_services.any($.get('parameters', []).contains('NeutronDatapathType') or $.get('resources', []).select($resources.get($)).any($.get('parameters', []).contains('NeutronDatapathType'))) %> > > # The role supports the DPDK feature in ODL if the OvsEnableDpdk parameter value is true in role parameters. 
> odl_dpdk: <% let(role => $.role_name) -> $.heat_resource_tree.parameters.get(concat($role, 'Parameters'), {}).get('default', {}).get('OvsEnableDpdk', false) %> > > # The role supports the SRIOV feature if it includes NeutronSriovAgent services. > sriov: <% $.role_services.any($.get('type', '').endsWith('::NeutronSriovAgent')) %> > > # The role supports the HCI feature if it includes both NovaCompute and CephOSD services. > hci: <% $.role_services.any($.get('type', '').endsWith('::NovaCompute')) and $.role_services.any($.get('type', '').endsWith('::CephOSD')) %> > > build_feature_dict: > on-success: filter_features > publish: > feature_dict: <% dict(DPDK => ($.dpdk or $.odl_dpdk), SRIOV => $.sriov, HOST => ($.dpdk or $.odl_dpdk or $.sriov), HCI => $.hci) %> > > filter_features: > publish: > # The list of features that are enabled (i.e. are true in the feature_dict). > role_features: <% let(feature_dict => $.feature_dict)-> $feature_dict.keys().where($feature_dict[$]) %> > > set_status_failed_get_resource_chains: > publish: > message: <% 'Unable to locate any resource chains in the heat resource tree' %> > on-success: fail > > set_status_failed_get_role_chain: > publish: > message: <% "Unable to determine the service chain resource for role '{0}'".format($.role_name) %> > on-success: fail > > set_status_failed_get_service_chain: > publish: > message: <% "Unable to determine the service chain for role '{0}'".format($.role_name) %> > on-success: fail > > set_status_failed_get_role_services: > publish: > message: <% "Unable to determine list of services for role '{0}'".format($.role_name) %> > on-success: fail >' >2018-06-26 11:15:24,471 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 18571 >2018-06-26 11:15:24,511 DEBUG: RESP: [201] Content-Length: 18571 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:24 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.derive_params.v1\ndescription: TripleO Workflows 
to derive deployment parameters from the introspected data\n\nworkflows:\n\n derive_parameters:\n description: The main workflow for deriving parameters from the introspected data\n\n input:\n - plan: overcloud\n - queue_name: tripleo\n - user_inputs: {}\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_flattened_parameters:\n action: tripleo.parameters.get_flatten container=<% $.plan %>\n publish:\n environment_parameters: <% task().result.environment_parameters %>\n heat_resource_tree: <% task().result.heat_resource_tree %>\n on-success:\n - get_roles: <% $.environment_parameters and $.heat_resource_tree %>\n - set_status_failed_get_flattened_parameters: <% (not $.environment_parameters) or (not $.heat_resource_tree) %>\n on-error: set_status_failed_get_flattened_parameters\n\n get_roles:\n action: tripleo.role.list container=<% $.plan %>\n publish:\n role_name_list: <% task().result %>\n on-success:\n - get_valid_roles: <% $.role_name_list %>\n - set_status_failed_get_roles: <% not $.role_name_list %>\n on-error: set_status_failed_on_error_get_roles\n\n # Obtain only the roles which has count > 0, by checking <RoleName>Count parameter, like ComputeCount\n get_valid_roles:\n publish:\n valid_role_name_list: <% let(hr => $.heat_resource_tree.parameters) -> $.role_name_list.where(int($hr.get(concat($, 'Count'), {}).get('default', 0)) > 0) %>\n on-success:\n - for_each_role: <% $.valid_role_name_list %>\n - set_status_failed_get_valid_roles: <% not $.valid_role_name_list %>\n\n # Execute the basic preparation workflow for each role to get introspection data\n for_each_role:\n with-items: role_name in <% $.valid_role_name_list %>\n concurrency: 1\n workflow: _derive_parameters_per_role\n input:\n plan: <% $.plan %>\n role_name: <% $.role_name %>\n environment_parameters: <% $.environment_parameters %>\n heat_resource_tree: <% $.heat_resource_tree %>\n user_inputs: <% $.user_inputs %>\n publish:\n # Gets all the roles derived parameters as dictionary\n result: <% 
task().result.select($.get('derived_parameters', {})).sum() %>\n on-success: reset_derive_parameters_in_plan\n on-error: set_status_failed_for_each_role\n\n reset_derive_parameters_in_plan:\n action: tripleo.parameters.reset\n input:\n container: <% $.plan %>\n key: 'derived_parameters'\n on-success:\n # Add the derived parameters to the deployment plan only when $.result\n # (the derived parameters) is non-empty. Otherwise, we're done.\n - update_derive_parameters_in_plan: <% $.result %>\n - send_message: <% not $.result %>\n on-error: set_status_failed_reset_derive_parameters_in_plan\n\n update_derive_parameters_in_plan:\n action: tripleo.parameters.update\n input:\n container: <% $.plan %>\n key: 'derived_parameters'\n parameters: <% $.get('result', {}) %>\n on-success: send_message\n on-error: set_status_failed_update_derive_parameters_in_plan\n\n set_status_failed_get_flattened_parameters:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_flattened_parameters).result %>\n\n set_status_failed_get_roles:\n on-success: send_message\n publish:\n status: FAILED\n message: \"Unable to determine the list of roles in the deployment plan\"\n\n set_status_failed_on_error_get_roles:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(get_roles).result %>\n\n set_status_failed_get_valid_roles:\n on-success: send_message\n publish:\n status: FAILED\n message: 'Unable to determine the list of valid roles in the deployment plan.'\n\n set_status_failed_for_each_role:\n on-success: update_message_format\n publish:\n status: FAILED\n # gets the status and message for all roles from task result.\n message: <% task(for_each_role).result.select(dict('role_name' => $.role_name, 'status' => $.get('status', 'SUCCESS'), 'message' => $.get('message', ''))) %>\n\n update_message_format:\n on-success: send_message\n publish:\n # updates the message format(Role 'role name': message) for each roles which are failed and joins the message 
list as string with ', ' separator.\n message: <% $.message.where($.get('status', 'SUCCESS') != 'SUCCESS').select(concat(\"Role '{}':\".format($.role_name), \" \", $.get('message', '(error unknown)'))).join(', ') %>\n\n set_status_failed_reset_derive_parameters_in_plan:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(reset_derive_parameters_in_plan).result %>\n\n set_status_failed_update_derive_parameters_in_plan:\n on-success: send_message\n publish:\n status: FAILED\n message: <% task(update_derive_parameters_in_plan).result %>\n\n send_message:\n action: zaqar.queue_post\n retry: count=5 delay=1\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.derive_params.v1.derive_parameters\n payload:\n status: <% $.get('status', 'SUCCESS') %>\n message: <% $.get('message', '') %>\n result: <% $.get('result', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = 'FAILED' %>\n\n\n _derive_parameters_per_role:\n description: >\n Workflow which runs per role to get the introspection data on the first matching node assigned to role.\n Once introspection data is fetched, this worklow will trigger the actual derive parameters workflow\n input:\n - plan\n - role_name\n - environment_parameters\n - heat_resource_tree\n - user_inputs\n\n output:\n derived_parameters: <% $.get('derived_parameters', {}) %>\n # Need role_name in output parameter to display the status for all roles in main workflow when any role fails here.\n role_name: <% $.role_name %>\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_role_info:\n workflow: _get_role_info\n input:\n role_name: <% $.role_name %>\n heat_resource_tree: <% $.heat_resource_tree %>\n publish:\n role_features: <% task().result.get('role_features', []) %>\n role_services: <% task().result.get('role_services', []) %>\n on-success:\n # Continue only if there are features associated with this role. 
Otherwise, we're done.\n - get_flavor_name: <% $.role_features %>\n on-error: set_status_failed_get_role_info\n\n # Getting introspection data workflow, which will take care of\n # 1) profile and flavor based mapping\n # 2) Nova placement api based mapping\n # Currently we have implemented profile and flavor based mapping\n # TODO-Nova placement api based mapping is pending, we will enchance it later.\n get_flavor_name:\n publish:\n flavor_name: <% let(param_name => concat('Overcloud', $.role_name, 'Flavor').replace('OvercloudControllerFlavor', 'OvercloudControlFlavor')) -> $.heat_resource_tree.parameters.get($param_name, {}).get('default', '') %>\n on-success:\n - get_profile_name: <% $.flavor_name %>\n - set_status_failed_get_flavor_name: <% not $.flavor_name %>\n\n get_profile_name:\n action: tripleo.parameters.get_profile_of_flavor flavor_name=<% $.flavor_name %>\n publish:\n profile_name: <% task().result %>\n on-success: get_profile_node\n on-error: set_status_failed_get_profile_name\n\n get_profile_node:\n workflow: tripleo.baremetal.v1.nodes_with_profile\n input:\n profile: <% $.profile_name %>\n publish:\n profile_node_uuid: <% task().result.matching_nodes.first('') %>\n on-success:\n - get_introspection_data: <% $.profile_node_uuid %>\n - set_status_failed_no_matching_node_get_profile_node: <% not $.profile_node_uuid %>\n on-error: set_status_failed_on_error_get_profile_node\n\n get_introspection_data:\n action: baremetal_introspection.get_data uuid=<% $.profile_node_uuid %>\n publish:\n hw_data: <% task().result %>\n # Establish an empty dictionary of derived_parameters prior to\n # invoking the individual \"feature\" algorithms\n derived_parameters: <% dict() %>\n on-success: handle_dpdk_feature\n on-error: set_status_failed_get_introspection_data\n\n handle_dpdk_feature:\n on-success:\n - get_dpdk_derive_params: <% $.role_features.contains('DPDK') %>\n - handle_sriov_feature: <% not $.role_features.contains('DPDK') %>\n\n get_dpdk_derive_params:\n 
workflow: tripleo.derive_params_formulas.v1.dpdk_derive_params\n input:\n plan: <% $.plan %>\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n user_inputs: <% $.user_inputs %>\n publish:\n derived_parameters: <% task().result.get('derived_parameters', {}) %>\n on-success: handle_sriov_feature\n on-error: set_status_failed_get_dpdk_derive_params\n\n handle_sriov_feature:\n on-success:\n - get_sriov_derive_params: <% $.role_features.contains('SRIOV') %>\n - handle_host_feature: <% not $.role_features.contains('SRIOV') %>\n\n get_sriov_derive_params:\n workflow: tripleo.derive_params_formulas.v1.sriov_derive_params\n input:\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n derived_parameters: <% $.derived_parameters %>\n publish:\n derived_parameters: <% task().result.get('derived_parameters', {}) %>\n on-success: handle_host_feature\n on-error: set_status_failed_get_sriov_derive_params\n\n handle_host_feature:\n on-success:\n - get_host_derive_params: <% $.role_features.contains('HOST') %>\n - handle_hci_feature: <% not $.role_features.contains('HOST') %>\n\n get_host_derive_params:\n workflow: tripleo.derive_params_formulas.v1.host_derive_params\n input:\n role_name: <% $.role_name %>\n hw_data: <% $.hw_data %>\n user_inputs: <% $.user_inputs %>\n derived_parameters: <% $.derived_parameters %>\n publish:\n derived_parameters: <% task().result.get('derived_parameters', {}) %>\n on-success: handle_hci_feature\n on-error: set_status_failed_get_host_derive_params\n\n handle_hci_feature:\n on-success:\n - get_hci_derive_params: <% $.role_features.contains('HCI') %>\n\n get_hci_derive_params:\n workflow: tripleo.derive_params_formulas.v1.hci_derive_params\n input:\n role_name: <% $.role_name %>\n environment_parameters: <% $.environment_parameters %>\n heat_resource_tree: <% $.heat_resource_tree %>\n introspection_data: <% $.hw_data %>\n user_inputs: <% $.user_inputs %>\n derived_parameters: <% $.derived_parameters %>\n publish:\n derived_parameters: 
<% task().result.get('derived_parameters', {}) %>\n on-error: set_status_failed_get_hci_derive_params\n # Done (no more derived parameter features)\n\n set_status_failed_get_role_info:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_role_info).result.get('message', '') %>\n on-success: fail\n\n set_status_failed_get_flavor_name:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% \"Unable to determine flavor for role '{0}'\".format($.role_name) %>\n on-success: fail\n\n set_status_failed_get_profile_name:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_profile_name).result %>\n on-success: fail\n\n set_status_failed_no_matching_node_get_profile_node:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% \"Unable to determine matching node for profile '{0}'\".format($.profile_name) %>\n on-success: fail\n\n set_status_failed_on_error_get_profile_node:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_profile_node).result %>\n on-success: fail\n\n set_status_failed_get_introspection_data:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_introspection_data).result %>\n on-success: fail\n\n set_status_failed_get_dpdk_derive_params:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_dpdk_derive_params).result %>\n on-success: fail\n\n set_status_failed_get_sriov_derive_params:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_sriov_derive_params).result %>\n on-success: fail\n\n set_status_failed_get_host_derive_params:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_host_derive_params).result %>\n on-success: fail\n\n set_status_failed_get_hci_derive_params:\n publish:\n role_name: <% $.role_name %>\n status: FAILED\n message: <% task(get_hci_derive_params).result %>\n on-success: fail\n\n\n 
_get_role_info:\n description: >\n Workflow that determines the list of derived parameter features (DPDK,\n HCI, etc.) for a role based on the services assigned to the role.\n\n input:\n - role_name\n - heat_resource_tree\n\n tags:\n - tripleo-common-managed\n\n tasks:\n get_resource_chains:\n publish:\n resource_chains: <% $.heat_resource_tree.resources.values().where($.get('type', '') = 'OS::Heat::ResourceChain') %>\n on-success:\n - get_role_chain: <% $.resource_chains %>\n - set_status_failed_get_resource_chains: <% not $.resource_chains %>\n\n get_role_chain:\n publish:\n role_chain: <% let(chain_name => concat($.role_name, 'ServiceChain'))-> $.heat_resource_tree.resources.values().where($.name = $chain_name).first({}) %>\n on-success:\n - get_service_chain: <% $.role_chain %>\n - set_status_failed_get_role_chain: <% not $.role_chain %>\n\n get_service_chain:\n publish:\n service_chain: <% let(resources => $.role_chain.resources)-> $.resource_chains.where($resources.contains($.id)).first('') %>\n on-success:\n - get_role_services: <% $.service_chain %>\n - set_status_failed_get_service_chain: <% not $.service_chain %>\n\n get_role_services:\n publish:\n role_services: <% let(resources => $.heat_resource_tree.resources)-> $.service_chain.resources.select($resources.get($)) %>\n on-success:\n - check_features: <% $.role_services %>\n - set_status_failed_get_role_services: <% not $.role_services %>\n\n check_features:\n on-success: build_feature_dict\n publish:\n # The role supports the DPDK feature if the NeutronDatapathType parameter is present\n dpdk: <% let(resources => $.heat_resource_tree.resources) -> $.role_services.any($.get('parameters', []).contains('NeutronDatapathType') or $.get('resources', []).select($resources.get($)).any($.get('parameters', []).contains('NeutronDatapathType'))) %>\n\n # The role supports the DPDK feature in ODL if the OvsEnableDpdk parameter value is true in role parameters.\n odl_dpdk: <% let(role => $.role_name) -> 
$.heat_resource_tree.parameters.get(concat($role, 'Parameters'), {}).get('default', {}).get('OvsEnableDpdk', false) %>\n\n # The role supports the SRIOV feature if it includes NeutronSriovAgent services.\n sriov: <% $.role_services.any($.get('type', '').endsWith('::NeutronSriovAgent')) %>\n\n # The role supports the HCI feature if it includes both NovaCompute and CephOSD services.\n hci: <% $.role_services.any($.get('type', '').endsWith('::NovaCompute')) and $.role_services.any($.get('type', '').endsWith('::CephOSD')) %>\n\n build_feature_dict:\n on-success: filter_features\n publish:\n feature_dict: <% dict(DPDK => ($.dpdk or $.odl_dpdk), SRIOV => $.sriov, HOST => ($.dpdk or $.odl_dpdk or $.sriov), HCI => $.hci) %>\n\n filter_features:\n publish:\n # The list of features that are enabled (i.e. are true in the feature_dict).\n role_features: <% let(feature_dict => $.feature_dict)-> $feature_dict.keys().where($feature_dict[$]) %>\n\n set_status_failed_get_resource_chains:\n publish:\n message: <% 'Unable to locate any resource chains in the heat resource tree' %>\n on-success: fail\n\n set_status_failed_get_role_chain:\n publish:\n message: <% \"Unable to determine the service chain resource for role '{0}'\".format($.role_name) %>\n on-success: fail\n\n set_status_failed_get_service_chain:\n publish:\n message: <% \"Unable to determine the service chain for role '{0}'\".format($.role_name) %>\n on-success: fail\n\n set_status_failed_get_role_services:\n publish:\n message: <% \"Unable to determine list of services for role '{0}'\".format($.role_name) %>\n on-success: fail\n", "name": "tripleo.derive_params.v1", "tags": [], "created_at": "2018-06-26 05:45:24", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "13dd92fa-ae2a-45db-8a96-9111250eec22"} > >2018-06-26 11:15:24,511 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 11:15:24,512 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/workbooks -H "User-Agent: -c 
keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: text/plain" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '--- >version: '2.0' >name: tripleo.swift_rings_backup.v1 >description: TripleO Swift Rings backup container Deployment Workflow v1 > >workflows: > > create_swift_rings_backup_container_plan: > description: > > This plan ensures existence of container for Swift Rings backup. > input: > - container > - queue_name: tripleo > tags: > - tripleo-common-managed > tasks: > > swift_rings_container: > publish: > swift_rings_container: "<% $.container %>-swift-rings" > swift_rings_tar: "swift-rings.tar.gz" > on-complete: check_container > > check_container: > action: swift.head_container container=<% $.swift_rings_container %> > on-success: get_tempurl > on-error: create_container > > create_container: > action: swift.put_container container=<% $.swift_rings_container %> > on-error: set_create_container_failed > on-success: get_tempurl > > get_tempurl: > action: tripleo.swift.tempurl > on-success: set_get_tempurl > input: > container: <% $.swift_rings_container %> > obj: <% $.swift_rings_tar %> > > set_get_tempurl: > action: tripleo.parameters.update > input: > parameters: > SwiftRingGetTempurl: <% task(get_tempurl).result %> > container: <% $.container %> > on-success: put_tempurl > > put_tempurl: > action: tripleo.swift.tempurl > on-success: set_put_tempurl > input: > container: <% $.swift_rings_container %> > obj: <% $.swift_rings_tar %> > method: "PUT" > > set_put_tempurl: > action: tripleo.parameters.update > input: > parameters: > SwiftRingPutTempurl: <% task(put_tempurl).result %> > container: <% $.container %> > on-success: set_status_success > on-error: set_put_tempurl_failed > > set_status_success: > on-success: notify_zaqar > publish: > status: SUCCESS > message: <% task(set_put_tempurl).result %> > > set_put_tempurl_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% 
task(set_put_tempurl).result %> > > set_create_container_failed: > on-success: notify_zaqar > publish: > status: FAILED > message: <% task(create_container).result %> > > notify_zaqar: > action: zaqar.queue_post > input: > queue_name: <% $.queue_name %> > messages: > body: > type: tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan > payload: > status: <% $.status %> > message: <% $.get('message', '') %> > execution: <% execution() %> > on-success: > - fail: <% $.get('status') = "FAILED" %> >' >2018-06-26 11:15:24,735 DEBUG: http://192.0.3.1:8989 "POST /v2/workbooks HTTP/1.1" 201 3154 >2018-06-26 11:15:24,735 DEBUG: RESP: [201] Content-Length: 3154 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:24 GMT Connection: keep-alive >RESP BODY: {"definition": "---\nversion: '2.0'\nname: tripleo.swift_rings_backup.v1\ndescription: TripleO Swift Rings backup container Deployment Workflow v1\n\nworkflows:\n\n create_swift_rings_backup_container_plan:\n description: >\n This plan ensures existence of container for Swift Rings backup.\n input:\n - container\n - queue_name: tripleo\n tags:\n - tripleo-common-managed\n tasks:\n\n swift_rings_container:\n publish:\n swift_rings_container: \"<% $.container %>-swift-rings\"\n swift_rings_tar: \"swift-rings.tar.gz\"\n on-complete: check_container\n\n check_container:\n action: swift.head_container container=<% $.swift_rings_container %>\n on-success: get_tempurl\n on-error: create_container\n\n create_container:\n action: swift.put_container container=<% $.swift_rings_container %>\n on-error: set_create_container_failed\n on-success: get_tempurl\n\n get_tempurl:\n action: tripleo.swift.tempurl\n on-success: set_get_tempurl\n input:\n container: <% $.swift_rings_container %>\n obj: <% $.swift_rings_tar %>\n\n set_get_tempurl:\n action: tripleo.parameters.update\n input:\n parameters:\n SwiftRingGetTempurl: <% task(get_tempurl).result %>\n container: <% $.container %>\n on-success: put_tempurl\n\n 
put_tempurl:\n action: tripleo.swift.tempurl\n on-success: set_put_tempurl\n input:\n container: <% $.swift_rings_container %>\n obj: <% $.swift_rings_tar %>\n method: \"PUT\"\n\n set_put_tempurl:\n action: tripleo.parameters.update\n input:\n parameters:\n SwiftRingPutTempurl: <% task(put_tempurl).result %>\n container: <% $.container %>\n on-success: set_status_success\n on-error: set_put_tempurl_failed\n\n set_status_success:\n on-success: notify_zaqar\n publish:\n status: SUCCESS\n message: <% task(set_put_tempurl).result %>\n\n set_put_tempurl_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(set_put_tempurl).result %>\n\n set_create_container_failed:\n on-success: notify_zaqar\n publish:\n status: FAILED\n message: <% task(create_container).result %>\n\n notify_zaqar:\n action: zaqar.queue_post\n input:\n queue_name: <% $.queue_name %>\n messages:\n body:\n type: tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan\n payload:\n status: <% $.status %>\n message: <% $.get('message', '') %>\n execution: <% execution() %>\n on-success:\n - fail: <% $.get('status') = \"FAILED\" %>\n", "name": "tripleo.swift_rings_backup.v1", "tags": [], "created_at": "2018-06-26 05:45:24", "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "ea6ef48c-7780-4d7c-bfba-3e3726914308"} > >2018-06-26 11:15:24,735 DEBUG: HTTP POST http://192.0.3.1:8989/v2/workbooks 201 >2018-06-26 11:15:24,736 INFO: Mistral workbooks configured successfully >2018-06-26 11:15:24,738 DEBUG: Starting new HTTP connection (1): 192.0.3.1 >2018-06-26 11:15:25,343 DEBUG: http://192.0.3.1:8080 "GET /v1/AUTH_13835fbb8e0947a9b3fa174b9a22cdb9?format=json HTTP/1.1" 200 196 >2018-06-26 11:15:25,344 DEBUG: REQ: curl -i http://192.0.3.1:8080/v1/AUTH_13835fbb8e0947a9b3fa174b9a22cdb9?format=json -X GET -H "Accept-Encoding: gzip" -H "X-Auth-Token: gAAAAABbMdLK-sNP..." 
>2018-06-26 11:15:25,344 DEBUG: RESP STATUS: 200 OK >2018-06-26 11:15:25,344 DEBUG: RESP HEADERS: {u'Content-Length': u'196', u'X-Account-Object-Count': u'959', u'x-account-project-domain-id': u'default', u'X-Openstack-Request-Id': u'txb5773f76a1714300a1318-005b31d2f4', u'X-Account-Storage-Policy-Policy-0-Bytes-Used': u'4025640', u'X-Account-Storage-Policy-Policy-0-Container-Count': u'2', u'X-Timestamp': u'1529987209.43296', u'X-Account-Storage-Policy-Policy-0-Object-Count': u'959', u'X-Trans-Id': u'txb5773f76a1714300a1318-005b31d2f4', u'Date': u'Tue, 26 Jun 2018 05:45:25 GMT', u'X-Account-Bytes-Used': u'4025640', u'X-Account-Container-Count': u'2', u'Content-Type': u'application/json; charset=utf-8', u'Accept-Ranges': u'bytes'} >2018-06-26 11:15:25,344 DEBUG: RESP BODY: [{"count": 0, "last_modified": "2018-06-26T04:27:22.536140", "bytes": 0, "name": "__cache__"}, {"count": 959, "last_modified": "2018-06-26T04:26:49.440730", "bytes": 4025640, "name": "overcloud"}] >2018-06-26 11:15:25,345 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:8989/v2/environments/tripleo.undercloud-config -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" >2018-06-26 11:15:25,356 DEBUG: http://192.0.3.1:8989 "GET /v2/environments/tripleo.undercloud-config HTTP/1.1" 200 411 >2018-06-26 11:15:25,356 DEBUG: RESP: [200] Content-Length: 411 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:25 GMT Connection: keep-alive >RESP BODY: {"created_at": "2018-06-26 04:26:48", "description": "Undercloud configuration parameters", "variables": "{\"undercloud_ceilometer_snmpd_password\": \"d8501c1a349fb1a4a0c122355ba3dacf0d9ad352\", \"undercloud_db_password\": \"password\"}", "updated_at": null, "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "732729ed-52f8-4511-8f0d-411c70f2df3f", "name": "tripleo.undercloud-config"} > >2018-06-26 11:15:25,356 DEBUG: HTTP GET 
http://192.0.3.1:8989/v2/environments/tripleo.undercloud-config 200 >2018-06-26 11:15:25,357 DEBUG: REQ: curl -g -i -X PUT http://192.0.3.1:8989/v2/environments -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: application/json" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '{"variables": "{\"undercloud_ceilometer_snmpd_password\": \"d8501c1a349fb1a4a0c122355ba3dacf0d9ad352\", \"undercloud_db_password\": \"password\"}", "name": "tripleo.undercloud-config", "description": "Undercloud configuration parameters"}' >2018-06-26 11:15:25,365 DEBUG: http://192.0.3.1:8989 "PUT /v2/environments HTTP/1.1" 200 411 >2018-06-26 11:15:25,366 DEBUG: RESP: [200] Content-Length: 411 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:25 GMT Connection: keep-alive >RESP BODY: {"created_at": "2018-06-26 04:26:48", "description": "Undercloud configuration parameters", "variables": "{\"undercloud_ceilometer_snmpd_password\": \"d8501c1a349fb1a4a0c122355ba3dacf0d9ad352\", \"undercloud_db_password\": \"password\"}", "updated_at": null, "scope": "private", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "732729ed-52f8-4511-8f0d-411c70f2df3f", "name": "tripleo.undercloud-config"} > >2018-06-26 11:15:25,366 DEBUG: HTTP PUT http://192.0.3.1:8989/v2/environments 200 >2018-06-26 11:15:25,366 INFO: Not creating default plan "overcloud" because it already exists. 
>2018-06-26 11:15:25,366 INFO: Configuring an hourly cron trigger for tripleo-ui logging >2018-06-26 11:15:25,366 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/cron_triggers -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: application/json" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '{"pattern": "0 * * * *", "workflow_name": "tripleo.plan_management.v1.publish_ui_logs_to_swift", "first_execution_time": null, "name": "publish-ui-logs-hourly", "remaining_executions": null}' >2018-06-26 11:15:26,269 DEBUG: http://192.0.3.1:8989 "POST /v2/cron_triggers HTTP/1.1" 201 493 >2018-06-26 11:15:26,270 DEBUG: RESP: [201] Content-Length: 493 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:26 GMT Connection: keep-alive >RESP BODY: {"created_at": "2018-06-26 05:45:26", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "name": "publish-ui-logs-hourly", "pattern": "0 * * * *", "workflow_name": "tripleo.plan_management.v1.publish_ui_logs_to_swift", "workflow_input": "{}", "workflow_id": "187d5a01-2718-4392-a56f-5938ebdc46b6", "first_execution_time": null, "remaining_executions": null, "scope": "private", "workflow_params": "{}", "id": "e43b1fcd-5060-4aef-86a6-c6976f09b246", "next_execution_time": "2018-06-26 06:00:00"} > >2018-06-26 11:15:26,270 DEBUG: HTTP POST http://192.0.3.1:8989/v2/cron_triggers 201 >2018-06-26 11:15:26,271 DEBUG: REQ: curl -g -i -X POST http://192.0.3.1:8989/v2/executions -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "content-type: application/json" -H "X-Auth-Token: {SHA1}a74d7aca9cca1d76bbe83b789289f40e598942c8" -d '{"workflow_name": "tripleo.validations.v1.copy_ssh_key", "description": ""}' >2018-06-26 11:15:26,549 DEBUG: http://192.0.3.1:8989 "POST /v2/executions HTTP/1.1" 201 550 >2018-06-26 11:15:26,549 DEBUG: RESP: [201] Content-Length: 550 Content-Type: application/json Date: Tue, 26 Jun 2018 05:45:26 GMT Connection: 
keep-alive >RESP BODY: {"root_execution_id": null, "state_info": null, "description": "", "state": "RUNNING", "workflow_name": "tripleo.validations.v1.copy_ssh_key", "task_execution_id": null, "updated_at": "2018-06-26 05:45:26", "workflow_id": "0661bc51-2415-4700-9592-3a71c5ed1131", "params": "{\"namespace\": \"\"}", "workflow_namespace": "", "output": "{}", "input": "{\"overcloud_admin\": \"heat-admin\", \"queue_name\": \"tripleo\"}", "created_at": "2018-06-26 05:45:26", "project_id": "13835fbb8e0947a9b3fa174b9a22cdb9", "id": "8682bafd-1c48-421a-8cb3-9bf9aef86e18"} > >2018-06-26 11:15:26,550 DEBUG: HTTP POST http://192.0.3.1:8989/v2/executions 201 >2018-06-26 11:15:26,644 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:5000/ -H "Accept: application/json" -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" >2018-06-26 11:15:26,646 DEBUG: Starting new HTTP connection (1): 192.0.3.1 >2018-06-26 11:15:26,649 DEBUG: http://192.0.3.1:5000 "GET / HTTP/1.1" 300 593 >2018-06-26 11:15:26,650 DEBUG: RESP: [300] Date: Tue, 26 Jun 2018 05:45:26 GMT Server: Apache Vary: X-Auth-Token Content-Length: 593 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.0.3.1:5000/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.0.3.1:5000/v2.0/", "rel": "self"}, {"href": "https://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}} > >2018-06-26 11:15:26,651 DEBUG: Making authentication request to http://192.0.3.1:5000/v3/auth/tokens >2018-06-26 11:15:27,079 DEBUG: http://192.0.3.1:5000 "POST 
/v3/auth/tokens HTTP/1.1" 201 7993 >2018-06-26 11:15:27,080 DEBUG: {"token": {"is_domain": false, "methods": ["password"], "roles": [{"id": "a19af673dce44d89bec07da60746e8e4", "name": "admin"}], "expires_at": "2018-06-26T09:45:27.000000Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "13835fbb8e0947a9b3fa174b9a22cdb9", "name": "admin"}, "catalog": [{"endpoints": [{"url": "http://192.0.3.1:5050", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "ab5c482d7d7a4a2dbe585fd722a6ca73"}, {"url": "http://192.0.3.1:5050", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "bb4e26d4adcd460eb44821e899be9ebb"}, {"url": "http://192.0.3.1:5050", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "dcf6a9debd8f4934aa384251e7613cb5"}], "type": "baremetal-introspection", "id": "084902dec7484ca0b731c2f39c33ab52", "name": "ironic-inspector"}, {"endpoints": [{"url": "ws://192.0.3.1:9000", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "418298d93a3544ddb99bd2015af10e45"}, {"url": "ws://192.0.3.1:9000", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "4413828ebe134d8bbad9babe9f81e7c5"}, {"url": "ws://192.0.3.1:9000", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "81fac1a734154da88c398e772f6e7cb3"}], "type": "messaging-websocket", "id": "0a6a1173fb884a5a82322e44a1fc0eea", "name": "zaqar-websocket"}, {"endpoints": [{"url": "http://192.0.3.1:8004/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "4a1d37b9994a45d4a6b041013673c2e9"}, {"url": "http://192.0.3.1:8004/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "8485f45bf105494a81c4d8ffcdbffc7d"}, {"url": "http://192.0.3.1:8004/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "internal", "region": "regionOne", 
"region_id": "regionOne", "id": "fe9568bd34c94bba8d04dad0fda5435e"}], "type": "orchestration", "id": "115d8bc598754862b67fc9b7c3dcabc1", "name": "heat"}, {"endpoints": [{"url": "http://192.0.3.1:8080/v1/AUTH_13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "50904c3c2052433ca4e85e1f870a96ee"}, {"url": "http://192.0.3.1:8080/v1/AUTH_13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "826f9ad5da574268a3a9864df3423b8d"}, {"url": "http://192.0.3.1:8080", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "9bcb806ddd8f45c381a39fcb1612ef0a"}], "type": "object-store", "id": "158a9ec0b8e8442a91d539c94f7f3e0d", "name": "swift"}, {"endpoints": [{"url": "http://192.0.3.1:9696", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "8f27927fd8ea4ce29ff057a4f87484c6"}, {"url": "http://192.0.3.1:9696", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "e2f7d421188c484c8560cfc98ba36498"}, {"url": "http://192.0.3.1:9696", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "ef58d0445d78427c991ddf1935bdecca"}], "type": "network", "id": "4413143a83434a35aacc03625951c5e6", "name": "neutron"}, {"endpoints": [{"url": "http://192.0.3.1:8989/v2", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "60120820741f409a86c4fc04675e87f5"}, {"url": "http://192.0.3.1:8989/v2", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "7f57a70539474749a8732e237cd3d047"}, {"url": "http://192.0.3.1:8989/v2", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "838632e4dad7499683622be1425ae9f9"}], "type": "workflowv2", "id": "4fd514dc06964316ac0a0ce00ec69ac3", "name": "mistral"}, {"endpoints": [{"url": "http://192.0.3.1:8000/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "public", "region": 
"regionOne", "region_id": "regionOne", "id": "29f6d67693b2422da3797af84fa584d0"}, {"url": "http://192.0.3.1:8000/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "9d974513a36f4a1cb4c1a909492870f2"}, {"url": "http://192.0.3.1:8000/v1/13835fbb8e0947a9b3fa174b9a22cdb9", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "fbb25e17c719472eb5d34cad0238d098"}], "type": "cloudformation", "id": "56cff4af5f114405a3c2f0fc77a22eb3", "name": "heat-cfn"}, {"endpoints": [{"url": "http://192.0.3.1:8888", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "5e779a349b1742aabeebb6722260c17d"}, {"url": "http://192.0.3.1:8888", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "87f59b4dfb0445bca44bf310b77be097"}, {"url": "http://192.0.3.1:8888", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "980bf5c9b80b4111b5ba19dcc5274866"}], "type": "messaging", "id": "6051d4397a684f3daf43f2ec39727c26", "name": "zaqar"}, {"endpoints": [{"url": "http://192.0.3.1:8774/v2.1", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "217c1916df124498a130051b0d2929b3"}, {"url": "http://192.0.3.1:8774/v2.1", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "6e0f74f28b824f979fb5f5cc30bd3c3f"}, {"url": "http://192.0.3.1:8774/v2.1", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "ef43d40f16b24c758abce9b806f3ab04"}], "type": "compute", "id": "6670f1f004934179b4e2d17ac8ac4559", "name": "nova"}, {"endpoints": [{"url": "http://192.0.3.1:9292", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "61c209b4b8f644d191bae26716309f26"}, {"url": "http://192.0.3.1:9292", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "9447a8abbe6b4a6b86bb0299666ba978"}, {"url": "http://192.0.3.1:9292", "interface": 
"admin", "region": "regionOne", "region_id": "regionOne", "id": "dd5cb9ddfe5e496a9ae10f8dc30e3596"}], "type": "image", "id": "8d4ca6bed6b14c2e9ef1634a7f86a1bf", "name": "glance"}, {"endpoints": [{"url": "http://192.0.3.1:6385", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "68862b76576e4797ae9b44e7e920a69d"}, {"url": "http://192.0.3.1:6385", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "9b6360b588564179a2ced0f5fd842e36"}, {"url": "http://192.0.3.1:6385", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "ba8e82ab1d98411f853796bbb04778d4"}], "type": "baremetal", "id": "9f9e76a976564a1e8f0941929009e0ab", "name": "ironic"}, {"endpoints": [{"url": "http://192.0.3.1:8778/placement", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "00bb90f687b4403c8d2d4e5015504ae4"}, {"url": "http://192.0.3.1:8778/placement", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "227bf279774b40a8b6391b570de22a80"}, {"url": "http://192.0.3.1:8778/placement", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "ceaf819496d74a0496c09c9b7c9c0cd4"}], "type": "placement", "id": "ac1c0292ca3a42a1ad0ca09c9a2f2db5", "name": "placement"}, {"endpoints": [{"url": "http://192.0.3.1:5000", "interface": "public", "region": "regionOne", "region_id": "regionOne", "id": "0716550d71d94a76bb684b55a29bda59"}, {"url": "http://192.0.3.1:35357", "interface": "admin", "region": "regionOne", "region_id": "regionOne", "id": "1d6b1d8c41204fe7a2099501c32b0288"}, {"url": "http://192.0.3.1:5000", "interface": "internal", "region": "regionOne", "region_id": "regionOne", "id": "e375868d7ee04e089d76ac8e49a498e3"}], "type": "identity", "id": "ce6de0f0b70b4955921edafe97432e27", "name": "keystone"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "admin", "id": "6e71dffd643e4c24a0efff2673fdac32"}, 
"audit_ids": ["mucDEUwRQfKpaiVtSf_ZtQ"], "issued_at": "2018-06-26T05:45:27.000000Z"}} >2018-06-26 11:15:27,080 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:5000/ -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}068e020e3e02f80bbc94c1c1c1f9a74cc6ba367f" >2018-06-26 11:15:27,083 DEBUG: http://192.0.3.1:5000 "GET / HTTP/1.1" 300 593 >2018-06-26 11:15:27,084 DEBUG: RESP: [300] Date: Tue, 26 Jun 2018 05:45:27 GMT Server: Apache Vary: X-Auth-Token Content-Length: 593 Keep-Alive: timeout=15, max=98 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.0.3.1:5000/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.0.3.1:5000/v2.0/", "rel": "self"}, {"href": "https://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}} > >2018-06-26 11:15:27,086 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:35357 -H "Accept: application/json" -H "User-Agent: -c keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" >2018-06-26 11:15:27,087 DEBUG: Starting new HTTP connection (1): 192.0.3.1 >2018-06-26 11:15:27,090 DEBUG: http://192.0.3.1:35357 "GET / HTTP/1.1" 300 595 >2018-06-26 11:15:27,091 DEBUG: RESP: [300] Date: Tue, 26 Jun 2018 05:45:27 GMT Server: Apache Vary: X-Auth-Token Content-Length: 595 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], 
"id": "v3.10", "links": [{"href": "http://192.0.3.1:35357/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.0.3.1:35357/v2.0/", "rel": "self"}, {"href": "https://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}} > >2018-06-26 11:15:27,091 DEBUG: REQ: curl -g -i -X GET http://192.0.3.1:35357/v3/roles? -H "User-Agent: python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}068e020e3e02f80bbc94c1c1c1f9a74cc6ba367f" >2018-06-26 11:15:27,166 DEBUG: http://192.0.3.1:35357 "GET /v3/roles HTTP/1.1" 200 285 >2018-06-26 11:15:27,167 DEBUG: RESP: [200] Date: Tue, 26 Jun 2018 05:45:27 GMT Server: Apache Vary: X-Auth-Token,Accept-Encoding x-openstack-request-id: req-5c2d2f6a-f02c-4e4d-b099-e3d4f104429a Content-Encoding: gzip Content-Length: 285 Keep-Alive: timeout=15, max=99 Connection: Keep-Alive Content-Type: application/json >RESP BODY: {"links": {"self": "http://192.0.3.1:5000/v3/roles", "previous": null, "next": null}, "roles": [{"domain_id": null, "id": "71c36dfad40d41359611a7dec98fc268", "links": {"self": "http://192.0.3.1:5000/v3/roles/71c36dfad40d41359611a7dec98fc268"}, "name": "swiftoperator"}, {"domain_id": null, "id": "9817757d3dd94d7e90059b16802adb87", "links": {"self": "http://192.0.3.1:5000/v3/roles/9817757d3dd94d7e90059b16802adb87"}, "name": "heat_stack_user"}, {"domain_id": null, "id": "a19af673dce44d89bec07da60746e8e4", "links": {"self": "http://192.0.3.1:5000/v3/roles/a19af673dce44d89bec07da60746e8e4"}, "name": "admin"}, {"domain_id": null, "id": "a1c3b65795594c068897d86e9479642c", "links": {"self": "http://192.0.3.1:5000/v3/roles/a1c3b65795594c068897d86e9479642c"}, "name": "ResellerAdmin"}]} > >2018-06-26 11:15:27,167 DEBUG: GET call to identity for http://192.0.3.1:35357/v3/roles used request id req-5c2d2f6a-f02c-4e4d-b099-e3d4f104429a 
>2018-06-26 11:15:27,184 INFO: >############################################################################# >Undercloud install complete. > >The file containing this installation's passwords is at >/home/sudheer/undercloud-passwords.conf. > >There is also a stackrc file at /home/sudheer/stackrc. > >These files are needed to interact with the OpenStack services, and should be >secured. > >############################################################################# >