Red Hat Bugzilla – Attachment 1459086 Details for Bug 1601382: Unreachable overcloud nodes during "run nodes-uuid" task
ansible.log (text/plain), 2.16 MB, created by Filip Hubík on 2018-07-16 08:59:19 UTC

Description: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ansible.log
Filename: ansible.log
MIME Type: text/plain
Creator: Filip Hubík
Created: 2018-07-16 08:59:19 UTC
Size: 2.16 MB
2018-07-13 20:46:36,158 p=5867 u=mistral | Using /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ansible.cfg as config file
2018-07-13 20:46:36,206 p=5867 u=mistral | [WARNING]: Could not match supplied host pattern, ignoring:

2018-07-13 20:46:36,873 p=5867 u=mistral | PLAY [Gather facts from undercloud] ********************************************
2018-07-13 20:46:36,885 p=5867 u=mistral | TASK [Gathering Facts] *********************************************************
2018-07-13 20:46:36,885 p=5867 u=mistral | Friday 13 July 2018 20:46:36 -0400 (0:00:00.073) 0:00:00.073 ***********
2018-07-13 20:46:37,657 p=5867 u=mistral | ok: [undercloud]
2018-07-13 20:46:37,685 p=5867 u=mistral | PLAY [Gather facts from overcloud] *********************************************
2018-07-13 20:46:37,693 p=5867 u=mistral | TASK [Gathering Facts] *********************************************************
2018-07-13 20:46:37,693 p=5867 u=mistral | Friday 13 July 2018 20:46:37 -0400 (0:00:00.807) 0:00:00.881 ***********
2018-07-13 20:46:40,827 p=5867 u=mistral | ok: [compute-0]
2018-07-13 20:46:40,859 p=5867 u=mistral | ok: [ceph-0]
2018-07-13 20:46:40,934 p=5867 u=mistral | ok: [controller-0]
2018-07-13 20:46:40,959 p=5867 u=mistral | PLAY [Load global variables] ***************************************************
2018-07-13 20:46:40,980 p=5867 u=mistral | TASK [include_vars] ************************************************************
2018-07-13 20:46:40,980 p=5867 u=mistral | Friday 13 July 2018 20:46:40 -0400 (0:00:03.286) 0:00:04.168 ***********
2018-07-13 20:46:41,040 p=5867 u=mistral | ok: [compute-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.21,ceph-0.localdomain,ceph-0,172.17.3.21,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.19,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.12,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.12,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.12,ceph-0.external.localdomain,ceph-0.external,192.168.24.12,ceph-0.management.localdomain,ceph-0.management,192.168.24.12,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": "172.17.1.12,compute-0.localdomain,compute-0,172.17.3.17,compute-0.storage.localdomain,compute-0.storage,192.168.24.8,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.12,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.19,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.8,compute-0.external.localdomain,compute-0.external,192.168.24.8,compute-0.management.localdomain,compute-0.management,192.168.24.8,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.19,controller-0.localdomain,controller-0,172.17.3.20,controller-0.storage.localdomain,controller-0.storage,172.17.4.18,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.19,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.15,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.106,controller-0.external.localdomain,controller-0.external,192.168.24.7,controller-0.management.localdomain,controller-0.management,192.168.24.7,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/global_vars.yaml"], "changed": false}
2018-07-13 20:46:41,066 p=5867 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.21,ceph-0.localdomain,ceph-0,172.17.3.21,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.19,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.12,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.12,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.12,ceph-0.external.localdomain,ceph-0.external,192.168.24.12,ceph-0.management.localdomain,ceph-0.management,192.168.24.12,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": "172.17.1.12,compute-0.localdomain,compute-0,172.17.3.17,compute-0.storage.localdomain,compute-0.storage,192.168.24.8,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.12,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.19,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.8,compute-0.external.localdomain,compute-0.external,192.168.24.8,compute-0.management.localdomain,compute-0.management,192.168.24.8,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.19,controller-0.localdomain,controller-0,172.17.3.20,controller-0.storage.localdomain,controller-0.storage,172.17.4.18,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.19,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.15,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.106,controller-0.external.localdomain,controller-0.external,192.168.24.7,controller-0.management.localdomain,controller-0.management,192.168.24.7,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/global_vars.yaml"], "changed": false}
2018-07-13 20:46:41,072 p=5867 u=mistral | ok: [controller-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.21,ceph-0.localdomain,ceph-0,172.17.3.21,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.19,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.12,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.12,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.12,ceph-0.external.localdomain,ceph-0.external,192.168.24.12,ceph-0.management.localdomain,ceph-0.management,192.168.24.12,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": "172.17.1.12,compute-0.localdomain,compute-0,172.17.3.17,compute-0.storage.localdomain,compute-0.storage,192.168.24.8,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.12,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.19,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.8,compute-0.external.localdomain,compute-0.external,192.168.24.8,compute-0.management.localdomain,compute-0.management,192.168.24.8,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.19,controller-0.localdomain,controller-0,172.17.3.20,controller-0.storage.localdomain,controller-0.storage,172.17.4.18,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.19,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.15,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.106,controller-0.external.localdomain,controller-0.external,192.168.24.7,controller-0.management.localdomain,controller-0.management,192.168.24.7,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/global_vars.yaml"], "changed": false}
2018-07-13 20:46:41,085 p=5867 u=mistral | ok: [undercloud] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.21,ceph-0.localdomain,ceph-0,172.17.3.21,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.19,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.12,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.12,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.12,ceph-0.external.localdomain,ceph-0.external,192.168.24.12,ceph-0.management.localdomain,ceph-0.management,192.168.24.12,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": "172.17.1.12,compute-0.localdomain,compute-0,172.17.3.17,compute-0.storage.localdomain,compute-0.storage,192.168.24.8,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.12,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.19,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.8,compute-0.external.localdomain,compute-0.external,192.168.24.8,compute-0.management.localdomain,compute-0.management,192.168.24.8,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.19,controller-0.localdomain,controller-0,172.17.3.20,controller-0.storage.localdomain,controller-0.storage,172.17.4.18,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.19,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.15,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.106,controller-0.external.localdomain,controller-0.external,192.168.24.7,controller-0.management.localdomain,controller-0.management,192.168.24.7,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/global_vars.yaml"], "changed": false}
2018-07-13 20:46:41,092 p=5867 u=mistral | PLAY [Common roles for TripleO servers] ****************************************
2018-07-13 20:46:41,113 p=5867 u=mistral | TASK [tripleo-bootstrap : Deploy required packages to bootstrap TripleO] *******
2018-07-13 20:46:41,113 p=5867 u=mistral | Friday 13 July 2018 20:46:41 -0400 (0:00:00.132) 0:00:04.301 ***********
2018-07-13 20:46:42,042 p=5867 u=mistral | ok: [ceph-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.6.1-0.20180709100740.fdd6a5f.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]}
2018-07-13 20:46:42,077 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.6.1-0.20180709100740.fdd6a5f.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]}
2018-07-13 20:46:42,079 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.6.1-0.20180709100740.fdd6a5f.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]}
2018-07-13 20:46:42,099 p=5867 u=mistral | TASK [tripleo-bootstrap : Create /var/lib/heat-config/tripleo-config-download directory for deployment data] ***
2018-07-13 20:46:42,099 p=5867 u=mistral | Friday 13 July 2018 20:46:42 -0400 (0:00:00.985) 0:00:05.287 ***********
2018-07-13 20:46:42,572 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-07-13 20:46:42,587 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-07-13 20:46:42,590 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-07-13 20:46:42,611 p=5867 u=mistral | TASK [tripleo-ssh-known-hosts : Template /etc/ssh/ssh_known_hosts] *************
2018-07-13 20:46:42,612 p=5867 u=mistral | Friday 13 July 2018 20:46:42 -0400 (0:00:00.512) 0:00:05.800 ***********
2018-07-13 20:46:43,700 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "1b8f24dbbdd247aa4b3957a02a1f3c17f3912281", "dest": "/etc/ssh/ssh_known_hosts", "gid": 0, "group": "root", "md5sum": "63a6efde4651645579627a2392d1645d", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1902, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529202.65-234142038780099/source", "state": "file", "uid": 0}
2018-07-13 20:46:43,703 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "1b8f24dbbdd247aa4b3957a02a1f3c17f3912281", "dest": "/etc/ssh/ssh_known_hosts", "gid": 0, "group": "root", "md5sum": "63a6efde4651645579627a2392d1645d", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1902, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529202.7-259951129665679/source", "state": "file", "uid": 0}
2018-07-13 20:46:43,707 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "1b8f24dbbdd247aa4b3957a02a1f3c17f3912281", "dest": "/etc/ssh/ssh_known_hosts", "gid": 0, "group": "root", "md5sum": "63a6efde4651645579627a2392d1645d", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1902, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529202.67-66858553933581/source", "state": "file", "uid": 0}
2018-07-13 20:46:43,714 p=5867 u=mistral | PLAY [Overcloud deploy step tasks for step 0] **********************************
2018-07-13 20:46:43,738 p=5867 u=mistral | TASK [include_role] ************************************************************
2018-07-13 20:46:43,739 p=5867 u=mistral | Friday 13 July 2018 20:46:43 -0400 (0:00:01.127) 0:00:06.927 ***********
2018-07-13 20:46:43,766 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-07-13 20:46:43,792 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-07-13 20:46:43,806 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-07-13 20:46:43,828 p=5867 u=mistral | TASK [include_role] ************************************************************
2018-07-13 20:46:43,829 p=5867 u=mistral | Friday 13 July 2018 20:46:43 -0400 (0:00:00.089) 0:00:07.017 ***********
2018-07-13 20:46:43,857 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-07-13 20:46:43,883 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-07-13 20:46:43,900 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-07-13 20:46:43,924 p=5867 u=mistral | TASK [include_role] ************************************************************
2018-07-13 20:46:43,924 p=5867 u=mistral | Friday 13 July 2018 20:46:43 -0400 (0:00:00.095) 0:00:07.112 ***********
2018-07-13 20:46:43,956 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-07-13 20:46:43,980 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-07-13 20:46:43,993 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-07-13 20:46:44,015 p=5867 u=mistral | TASK [include_role] ************************************************************
2018-07-13 20:46:44,015 p=5867 u=mistral | Friday 13 July 2018 20:46:44 -0400 (0:00:00.090) 0:00:07.203 ***********
2018-07-13 20:46:44,070 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-07-13 20:46:44,071 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-07-13 20:46:44,081 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-07-13 20:46:44,106 p=5867 u=mistral | TASK [include_role] ************************************************************
2018-07-13 20:46:44,106 p=5867 u=mistral | Friday 13 July 2018 20:46:44 -0400 (0:00:00.091) 0:00:07.294 ***********
2018-07-13 20:46:44,137 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-07-13 20:46:44,165 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-07-13 20:46:44,180 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-07-13 20:46:44,185 p=5867 u=mistral | PLAY [Server deployments] ******************************************************
2018-07-13 20:46:44,212 p=5867 u=mistral | TASK [include_tasks] ***********************************************************
2018-07-13 20:46:44,212 p=5867 u=mistral | Friday 13 July 2018 20:46:44 -0400 (0:00:00.105) 0:00:07.400 ***********
2018-07-13 20:46:44,440 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/Controller/deployments.yaml for controller-0
2018-07-13 20:46:44,448 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/Controller/deployments.yaml for controller-0
2018-07-13 20:46:44,455 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/Controller/deployments.yaml for controller-0
2018-07-13 20:46:44,463 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/Controller/deployments.yaml for controller-0
2018-07-13 20:46:44,470 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/Controller/deployments.yaml for controller-0
2018-07-13 20:46:44,478 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/Controller/deployments.yaml for controller-0
2018-07-13 20:46:44,486 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/Controller/deployments.yaml for controller-0
2018-07-13 20:46:44,494 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/Controller/deployments.yaml for controller-0
2018-07-13 20:46:44,516 p=5867 u=mistral | TASK [Lookup deployment UUID] **************************************************
2018-07-13 20:46:44,516 p=5867 u=mistral | Friday 13 July 2018 20:46:44 -0400 (0:00:00.304) 0:00:07.704 ***********
2018-07-13 20:46:44,579 p=5867 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "16e32153-dfd8-4498-bc7a-97ac6bc0909f"}, "changed": false}
2018-07-13 20:46:44,603 p=5867 u=mistral | TASK [Render deployment file for NetworkDeployment] ****************************
2018-07-13 20:46:44,603 p=5867 u=mistral | Friday 13 July 2018 20:46:44 -0400 (0:00:00.086) 0:00:07.791 ***********
2018-07-13 20:46:45,267 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "1745fdf0f9ab68f7bbc5abf1cc55cf5228114e3a", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-16e32153-dfd8-4498-bc7a-97ac6bc0909f", "gid": 0, "group": "root", "md5sum": "59b1c8cba889a2c6489b459ce7e07109", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 10195, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529204.66-41719443091323/source", "state": "file", "uid": 0}
2018-07-13 20:46:45,291 p=5867 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] *********************
2018-07-13 20:46:45,291 p=5867 u=mistral | Friday 13 July 2018 20:46:45 -0400 (0:00:00.687) 0:00:08.479 ***********
2018-07-13 20:46:45,644 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}}
2018-07-13 20:46:45,670 p=5867 u=mistral | TASK [Check previous deployment rc for NetworkDeployment] **********************
2018-07-13 20:46:45,670 p=5867 u=mistral | Friday 13 July 2018 20:46:45 -0400 (0:00:00.379) 0:00:08.858 ***********
2018-07-13 20:46:45,689 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-07-13 20:46:45,711 p=5867 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] ***
2018-07-13 20:46:45,711 p=5867 u=mistral | Friday 13 July 2018 20:46:45 -0400 (0:00:00.040) 0:00:08.899 ***********
2018-07-13 20:46:45,729 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-07-13 20:46:45,754 p=5867 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************
2018-07-13 20:46:45,754 p=5867 u=mistral | Friday 13 July 2018 20:46:45 -0400 (0:00:00.042) 0:00:08.942 ***********
2018-07-13 20:46:45,773 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-07-13 20:46:45,799 p=5867 u=mistral | TASK [Run deployment NetworkDeployment] ****************************************
2018-07-13 20:46:45,800 p=5867 u=mistral | Friday 13 July 2018 20:46:45 -0400 (0:00:00.045) 0:00:08.988 ***********
2018-07-13 20:47:15,168 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/16e32153-dfd8-4498-bc7a-97ac6bc0909f.notify.json)", "delta": "0:00:28.833625", "end": "2018-07-13 20:47:14.736444", "rc": 0, "start": "2018-07-13 20:46:45.902819", "stderr": "[2018-07-13 20:46:45,926] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/16e32153-dfd8-4498-bc7a-97ac6bc0909f.json\n[2018-07-13 20:47:14,289] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.7/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.20/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.18/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.106/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.7/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.20/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.18/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.106/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/07/13 08:46:46 PM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/07/13 08:46:46 PM] [INFO] Ifcfg net config provider created.\\n[2018/07/13 08:46:46 PM] [INFO] Not using any mapping file.\\n[2018/07/13 08:46:46 PM] [INFO] Finding active nics\\n[2018/07/13 08:46:46 PM] [INFO] eth2 is an embedded active nic\\n[2018/07/13 08:46:46 PM] [INFO] eth0 is an embedded active nic\\n[2018/07/13 08:46:46 PM] [INFO] eth1 is an embedded active nic\\n[2018/07/13 08:46:46 PM] [INFO] lo is not an active nic\\n[2018/07/13 08:46:46 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/07/13 08:46:46 PM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/07/13 08:46:46 PM] [INFO] nic3 mapped to: eth2\\n[2018/07/13 08:46:46 PM] [INFO] nic2 mapped to: eth1\\n[2018/07/13 08:46:46 PM] [INFO] nic1 mapped to: eth0\\n[2018/07/13 08:46:46 PM] [INFO] adding interface: eth0\\n[2018/07/13 08:46:46 PM] [INFO] adding custom route for interface: eth0\\n[2018/07/13 08:46:46 PM] [INFO] adding bridge: br-isolated\\n[2018/07/13 08:46:46 PM] [INFO] adding interface: eth1\\n[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan20\\n[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan30\\n[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan40\\n[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan50\\n[2018/07/13 08:46:46 PM] [INFO] adding bridge: br-ex\\n[2018/07/13 08:46:46 PM] [INFO] adding custom route for interface: br-ex\\n[2018/07/13 08:46:46 PM] [INFO] adding interface: eth2\\n[2018/07/13 08:46:46 PM] [INFO] applying network configs...\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan20\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan30\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan40\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan50\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: eth2\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: eth1\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: eth0\\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan50\\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan20\\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan30\\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan40\\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on bridge: br-isolated\\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on bridge: br-ex\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/07/13 08:46:47 PM] [INFO] running ifup on bridge: br-isolated\\n[2018/07/13 08:46:47 PM] [INFO] running ifup on bridge: br-ex\\n[2018/07/13 08:46:51 PM] [INFO] running ifup on interface: eth2\\n[2018/07/13 08:46:51 PM] [INFO] running ifup on interface: eth1\\n[2018/07/13 08:46:52 PM] [INFO] running ifup on interface: eth0\\n[2018/07/13 08:46:56 PM] [INFO] running ifup on interface: vlan50\\n[2018/07/13 08:47:00 PM] [INFO] running ifup on interface: vlan20\\n[2018/07/13 08:47:04 PM] [INFO] running ifup on interface: vlan30\\n[2018/07/13 08:47:08 PM] [INFO] running ifup on interface: vlan40\\n[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan20\\n[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan30\\n[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan40\\n[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-07-13 20:47:14,289] (heat-config) [DEBUG] [2018-07-13 20:46:45,951] (heat-config) [INFO] interface_name=nic1\n[2018-07-13 20:46:45,951] (heat-config) [INFO] bridge_name=br-ex\n[2018-07-13 20:46:45,951] (heat-config) [INFO] deploy_server_id=d78a7938-6926-47b3-9d46-a978a2832924\n[2018-07-13 20:46:45,951] (heat-config) [INFO] deploy_action=CREATE\n[2018-07-13 20:46:45,951] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-j5jh6abemnx3-0-ii4yqcxtgfpd-NetworkDeployment-73vilo3urg5b-TripleOSoftwareDeployment-3vwunttsq7t2/dc82db60-b352-4c98-a7ab-075844b99ab0\n[2018-07-13 20:46:45,951] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-07-13 20:46:45,951] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-07-13 20:46:45,952] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/16e32153-dfd8-4498-bc7a-97ac6bc0909f\n[2018-07-13 20:47:14,284] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS\n\n[2018-07-13 20:47:14,284] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.7/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\":
\"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.20/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.18/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.106/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.7/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.20/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.18/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.106/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": 
\"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/07/13 08:46:46 PM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/07/13 08:46:46 PM] [INFO] Ifcfg net config provider created.\n[2018/07/13 08:46:46 PM] [INFO] Not using any mapping file.\n[2018/07/13 08:46:46 PM] [INFO] Finding active nics\n[2018/07/13 08:46:46 PM] [INFO] eth2 is an embedded active nic\n[2018/07/13 08:46:46 PM] [INFO] eth0 is an embedded active nic\n[2018/07/13 08:46:46 PM] [INFO] eth1 is an embedded active nic\n[2018/07/13 08:46:46 PM] [INFO] lo is not an active nic\n[2018/07/13 08:46:46 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/07/13 08:46:46 PM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/07/13 08:46:46 PM] [INFO] nic3 mapped to: eth2\n[2018/07/13 08:46:46 PM] [INFO] nic2 mapped to: eth1\n[2018/07/13 08:46:46 PM] [INFO] nic1 mapped to: eth0\n[2018/07/13 08:46:46 PM] [INFO] adding interface: eth0\n[2018/07/13 08:46:46 PM] [INFO] adding custom route for interface: eth0\n[2018/07/13 08:46:46 PM] [INFO] adding bridge: br-isolated\n[2018/07/13 08:46:46 PM] [INFO] adding interface: eth1\n[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan20\n[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan30\n[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan40\n[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan50\n[2018/07/13 08:46:46 PM] [INFO] adding bridge: br-ex\n[2018/07/13 08:46:46 PM] [INFO] adding custom route for interface: br-ex\n[2018/07/13 08:46:46 PM] [INFO] adding interface: eth2\n[2018/07/13 08:46:46 PM] [INFO] applying network configs...\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan20\n[2018/07/13 08:46:46 PM] 
[INFO] running ifdown on interface: vlan30\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan40\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan50\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: eth2\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: eth1\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: eth0\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan50\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan20\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan30\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan40\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on bridge: br-isolated\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on bridge: br-ex\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\n[2018/07/13 08:46:47 PM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-br-ex\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/07/13 08:46:47 PM] [INFO] running ifup on bridge: br-isolated\n[2018/07/13 08:46:47 PM] [INFO] running ifup on bridge: br-ex\n[2018/07/13 08:46:51 PM] [INFO] running ifup on interface: eth2\n[2018/07/13 08:46:51 PM] [INFO] running ifup on interface: eth1\n[2018/07/13 08:46:52 PM] [INFO] running ifup on interface: eth0\n[2018/07/13 08:46:56 PM] [INFO] running ifup on interface: vlan50\n[2018/07/13 08:47:00 PM] [INFO] running ifup on interface: vlan20\n[2018/07/13 08:47:04 PM] [INFO] running ifup on interface: vlan30\n[2018/07/13 08:47:08 PM] [INFO] running ifup on interface: vlan40\n[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: 
vlan20\n[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan30\n[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan40\n[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan50\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.3\n++ '[' -n 192.168.24.3 ']'\n++ break\n++ echo 192.168.24.3\n+ local METADATA_IP=192.168.24.3\n+ '[' -n 192.168.24.3 ']'\n+ is_local_ip 192.168.24.3\n+ local IP_TO_CHECK=192.168.24.3\n+ ip -o a\n+ grep 'inet6\\? 
192.168.24.3/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\n+ _ping=ping\n+ [[ 192.168.24.3 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.3\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-07-13 20:47:14,284] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/16e32153-dfd8-4498-bc7a-97ac6bc0909f\n\n[2018-07-13 20:47:14,289] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-07-13 20:47:14,290] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/16e32153-dfd8-4498-bc7a-97ac6bc0909f.json < /var/lib/heat-config/deployed/16e32153-dfd8-4498-bc7a-97ac6bc0909f.notify.json\n[2018-07-13 20:47:14,728] (heat-config) [INFO] \n[2018-07-13 20:47:14,728] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:46:45,926] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/16e32153-dfd8-4498-bc7a-97ac6bc0909f.json", "[2018-07-13 20:47:14,289] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.7/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": 
[{\\\"ip_netmask\\\": \\\"172.17.1.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.20/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.18/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.106/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.7/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.20/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.18/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": 
\\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.106/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/07/13 08:46:46 PM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/07/13 08:46:46 PM] [INFO] Ifcfg net config provider created.\\n[2018/07/13 08:46:46 PM] [INFO] Not using any mapping file.\\n[2018/07/13 08:46:46 PM] [INFO] Finding active nics\\n[2018/07/13 08:46:46 PM] [INFO] eth2 is an embedded active nic\\n[2018/07/13 08:46:46 PM] [INFO] eth0 is an embedded active nic\\n[2018/07/13 08:46:46 PM] [INFO] eth1 is an embedded active nic\\n[2018/07/13 08:46:46 PM] [INFO] lo is not an active nic\\n[2018/07/13 08:46:46 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/07/13 08:46:46 PM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/07/13 08:46:46 PM] [INFO] nic3 mapped to: eth2\\n[2018/07/13 08:46:46 PM] [INFO] nic2 mapped to: eth1\\n[2018/07/13 08:46:46 PM] [INFO] nic1 mapped to: eth0\\n[2018/07/13 08:46:46 PM] [INFO] adding interface: eth0\\n[2018/07/13 08:46:46 PM] [INFO] adding custom route for interface: eth0\\n[2018/07/13 08:46:46 PM] [INFO] adding bridge: br-isolated\\n[2018/07/13 08:46:46 PM] [INFO] adding interface: eth1\\n[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan20\\n[2018/07/13 08:46:46 PM] [INFO] adding vlan: 
vlan30\\n[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan40\\n[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan50\\n[2018/07/13 08:46:46 PM] [INFO] adding bridge: br-ex\\n[2018/07/13 08:46:46 PM] [INFO] adding custom route for interface: br-ex\\n[2018/07/13 08:46:46 PM] [INFO] adding interface: eth2\\n[2018/07/13 08:46:46 PM] [INFO] applying network configs...\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan20\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan30\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan40\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan50\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: eth2\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: eth1\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: eth0\\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan50\\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan20\\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan30\\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan40\\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on bridge: br-isolated\\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on bridge: br-ex\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/07/13 08:46:47 PM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-br-isolated\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/07/13 08:46:47 PM] [INFO] running ifup on bridge: br-isolated\\n[2018/07/13 08:46:47 PM] [INFO] running ifup on bridge: br-ex\\n[2018/07/13 08:46:51 PM] [INFO] running ifup on 
interface: eth2\\n[2018/07/13 08:46:51 PM] [INFO] running ifup on interface: eth1\\n[2018/07/13 08:46:52 PM] [INFO] running ifup on interface: eth0\\n[2018/07/13 08:46:56 PM] [INFO] running ifup on interface: vlan50\\n[2018/07/13 08:47:00 PM] [INFO] running ifup on interface: vlan20\\n[2018/07/13 08:47:04 PM] [INFO] running ifup on interface: vlan30\\n[2018/07/13 08:47:08 PM] [INFO] running ifup on interface: vlan40\\n[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan20\\n[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan30\\n[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan40\\n[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 
192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-07-13 20:47:14,289] (heat-config) [DEBUG] [2018-07-13 20:46:45,951] (heat-config) [INFO] interface_name=nic1", "[2018-07-13 20:46:45,951] (heat-config) [INFO] bridge_name=br-ex", "[2018-07-13 20:46:45,951] (heat-config) [INFO] deploy_server_id=d78a7938-6926-47b3-9d46-a978a2832924", "[2018-07-13 20:46:45,951] (heat-config) [INFO] deploy_action=CREATE", "[2018-07-13 20:46:45,951] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-j5jh6abemnx3-0-ii4yqcxtgfpd-NetworkDeployment-73vilo3urg5b-TripleOSoftwareDeployment-3vwunttsq7t2/dc82db60-b352-4c98-a7ab-075844b99ab0", "[2018-07-13 20:46:45,951] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-07-13 20:46:45,951] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-07-13 20:46:45,952] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/16e32153-dfd8-4498-bc7a-97ac6bc0909f", "[2018-07-13 20:47:14,284] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", "", "[2018-07-13 20:47:14,284] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.7/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", 
\"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.20/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.18/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.106/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.7/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.20/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.18/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.106/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": 
true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/07/13 08:46:46 PM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/07/13 08:46:46 PM] [INFO] Ifcfg net config provider created.", "[2018/07/13 08:46:46 PM] [INFO] Not using any mapping file.", "[2018/07/13 08:46:46 PM] [INFO] Finding active nics", "[2018/07/13 08:46:46 PM] [INFO] eth2 is an embedded active nic", "[2018/07/13 08:46:46 PM] [INFO] eth0 is an embedded active nic", "[2018/07/13 08:46:46 PM] [INFO] eth1 is an embedded active nic", "[2018/07/13 08:46:46 PM] [INFO] lo is not an active nic", "[2018/07/13 08:46:46 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/07/13 08:46:46 PM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/07/13 08:46:46 PM] [INFO] nic3 mapped to: eth2", "[2018/07/13 08:46:46 PM] [INFO] nic2 mapped to: eth1", "[2018/07/13 08:46:46 PM] [INFO] nic1 mapped to: eth0", "[2018/07/13 08:46:46 PM] [INFO] adding interface: eth0", "[2018/07/13 08:46:46 PM] [INFO] adding custom route for interface: eth0", "[2018/07/13 08:46:46 PM] [INFO] adding bridge: br-isolated", "[2018/07/13 08:46:46 PM] [INFO] adding interface: eth1", "[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan20", "[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan30", "[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan40", "[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan50", "[2018/07/13 08:46:46 PM] [INFO] adding bridge: br-ex", "[2018/07/13 08:46:46 PM] [INFO] adding custom route for interface: br-ex", "[2018/07/13 08:46:46 PM] [INFO] 
adding interface: eth2", "[2018/07/13 08:46:46 PM] [INFO] applying network configs...", "[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan20", "[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan30", "[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan40", "[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan50", "[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: eth2", "[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: eth1", "[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: eth0", "[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan50", "[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan20", "[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan30", "[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan40", "[2018/07/13 08:46:47 PM] [INFO] running ifdown on bridge: br-isolated", "[2018/07/13 08:46:47 PM] [INFO] running ifdown on bridge: br-ex", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/07/13 08:46:47 PM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-eth2", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/07/13 08:46:47 PM] [INFO] running ifup on bridge: br-isolated", "[2018/07/13 08:46:47 PM] [INFO] running ifup on bridge: br-ex", "[2018/07/13 08:46:51 PM] [INFO] running ifup on interface: eth2", "[2018/07/13 08:46:51 PM] [INFO] running ifup on interface: eth1", "[2018/07/13 08:46:52 PM] [INFO] running ifup on interface: eth0", "[2018/07/13 08:46:56 PM] [INFO] running ifup on interface: vlan50", "[2018/07/13 08:47:00 PM] [INFO] running 
ifup on interface: vlan20", "[2018/07/13 08:47:04 PM] [INFO] running ifup on interface: vlan30", "[2018/07/13 08:47:08 PM] [INFO] running ifup on interface: vlan40", "[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan20", "[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan30", "[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan40", "[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan50", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.3", "++ '[' -n 192.168.24.3 ']'", "++ break", "++ echo 192.168.24.3", "+ local METADATA_IP=192.168.24.3", "+ '[' -n 192.168.24.3 ']'", "+ is_local_ip 192.168.24.3", "+ local IP_TO_CHECK=192.168.24.3", "+ ip -o a", "+ grep 'inet6\\? 
192.168.24.3/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", "+ _ping=ping", "+ [[ 192.168.24.3 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.3", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-07-13 20:47:14,284] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/16e32153-dfd8-4498-bc7a-97ac6bc0909f", "", "[2018-07-13 20:47:14,289] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-07-13 20:47:14,290] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/16e32153-dfd8-4498-bc7a-97ac6bc0909f.json < /var/lib/heat-config/deployed/16e32153-dfd8-4498-bc7a-97ac6bc0909f.notify.json", "[2018-07-13 20:47:14,728] (heat-config) [INFO] ", "[2018-07-13 20:47:14,728] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:47:15,194 p=5867 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-07-13 20:47:15,194 p=5867 u=mistral | Friday 13 July 2018 20:47:15 -0400 (0:00:29.394) 0:00:38.382 *********** >2018-07-13 20:47:15,257 p=5867 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:46:45,926] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/16e32153-dfd8-4498-bc7a-97ac6bc0909f.json", > "[2018-07-13 20:47:14,289] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.7/24\\\"}], \\\"dns_servers\\\": 
[\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.20/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.18/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.106/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.7/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, 
{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.20/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.18/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.106/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/07/13 08:46:46 PM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/07/13 08:46:46 PM] [INFO] Ifcfg net config provider created.\\n[2018/07/13 08:46:46 PM] [INFO] Not using any mapping file.\\n[2018/07/13 08:46:46 PM] [INFO] Finding active nics\\n[2018/07/13 08:46:46 PM] [INFO] eth2 is an embedded active nic\\n[2018/07/13 08:46:46 PM] [INFO] eth0 is an embedded active nic\\n[2018/07/13 08:46:46 PM] [INFO] eth1 is an embedded active nic\\n[2018/07/13 08:46:46 PM] [INFO] lo is not an active nic\\n[2018/07/13 08:46:46 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/07/13 08:46:46 PM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/07/13 08:46:46 PM] [INFO] nic3 mapped to: eth2\\n[2018/07/13 08:46:46 PM] 
[INFO] nic2 mapped to: eth1\\n[2018/07/13 08:46:46 PM] [INFO] nic1 mapped to: eth0\\n[2018/07/13 08:46:46 PM] [INFO] adding interface: eth0\\n[2018/07/13 08:46:46 PM] [INFO] adding custom route for interface: eth0\\n[2018/07/13 08:46:46 PM] [INFO] adding bridge: br-isolated\\n[2018/07/13 08:46:46 PM] [INFO] adding interface: eth1\\n[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan20\\n[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan30\\n[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan40\\n[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan50\\n[2018/07/13 08:46:46 PM] [INFO] adding bridge: br-ex\\n[2018/07/13 08:46:46 PM] [INFO] adding custom route for interface: br-ex\\n[2018/07/13 08:46:46 PM] [INFO] adding interface: eth2\\n[2018/07/13 08:46:46 PM] [INFO] applying network configs...\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan20\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan30\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan40\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan50\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: eth2\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: eth1\\n[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: eth0\\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan50\\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan20\\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan30\\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan40\\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on bridge: br-isolated\\n[2018/07/13 08:46:47 PM] [INFO] running ifdown on bridge: br-ex\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/07/13 
08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/07/13 08:46:47 PM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/07/13 08:46:47 PM] [INFO] running ifup on bridge: br-isolated\\n[2018/07/13 08:46:47 PM] [INFO] running ifup on bridge: br-ex\\n[2018/07/13 08:46:51 PM] [INFO] running ifup on interface: eth2\\n[2018/07/13 08:46:51 PM] [INFO] running ifup on interface: eth1\\n[2018/07/13 08:46:52 PM] [INFO] running ifup on interface: eth0\\n[2018/07/13 08:46:56 PM] [INFO] running ifup on interface: vlan50\\n[2018/07/13 08:47:00 PM] [INFO] running ifup on interface: vlan20\\n[2018/07/13 08:47:04 PM] [INFO] running ifup on interface: vlan30\\n[2018/07/13 08:47:08 PM] [INFO] running ifup on interface: vlan40\\n[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan20\\n[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan30\\n[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan40\\n[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-07-13 20:47:14,289] (heat-config) [DEBUG] [2018-07-13 20:46:45,951] (heat-config) [INFO] interface_name=nic1", > "[2018-07-13 20:46:45,951] (heat-config) [INFO] bridge_name=br-ex", > "[2018-07-13 20:46:45,951] (heat-config) [INFO] deploy_server_id=d78a7938-6926-47b3-9d46-a978a2832924", > "[2018-07-13 20:46:45,951] (heat-config) [INFO] deploy_action=CREATE", > "[2018-07-13 20:46:45,951] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-j5jh6abemnx3-0-ii4yqcxtgfpd-NetworkDeployment-73vilo3urg5b-TripleOSoftwareDeployment-3vwunttsq7t2/dc82db60-b352-4c98-a7ab-075844b99ab0", > "[2018-07-13 20:46:45,951] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-07-13 20:46:45,951] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-07-13 20:46:45,952] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/16e32153-dfd8-4498-bc7a-97ac6bc0909f", > "[2018-07-13 
20:47:14,284] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", > "", > "[2018-07-13 20:47:14,284] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.7/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.20/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.18/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.106/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.7/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": 
\"172.17.3.20/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.18/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.106/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/07/13 08:46:46 PM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/07/13 08:46:46 PM] [INFO] Ifcfg net config provider created.", > "[2018/07/13 08:46:46 PM] [INFO] Not using any mapping file.", > "[2018/07/13 08:46:46 PM] [INFO] Finding active nics", > "[2018/07/13 08:46:46 PM] [INFO] eth2 is an embedded active nic", > "[2018/07/13 08:46:46 PM] [INFO] eth0 is an embedded active nic", > "[2018/07/13 08:46:46 PM] [INFO] eth1 is an embedded active nic", > "[2018/07/13 08:46:46 PM] [INFO] lo is not an active nic", > "[2018/07/13 08:46:46 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/07/13 08:46:46 PM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/07/13 08:46:46 PM] [INFO] nic3 mapped to: eth2", > "[2018/07/13 08:46:46 PM] [INFO] nic2 mapped to: eth1", > "[2018/07/13 08:46:46 PM] [INFO] nic1 mapped to: eth0", > "[2018/07/13 08:46:46 PM] [INFO] adding interface: eth0", > "[2018/07/13 08:46:46 PM] [INFO] adding custom route for interface: eth0", > "[2018/07/13 08:46:46 PM] [INFO] 
adding bridge: br-isolated", > "[2018/07/13 08:46:46 PM] [INFO] adding interface: eth1", > "[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan20", > "[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan30", > "[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan40", > "[2018/07/13 08:46:46 PM] [INFO] adding vlan: vlan50", > "[2018/07/13 08:46:46 PM] [INFO] adding bridge: br-ex", > "[2018/07/13 08:46:46 PM] [INFO] adding custom route for interface: br-ex", > "[2018/07/13 08:46:46 PM] [INFO] adding interface: eth2", > "[2018/07/13 08:46:46 PM] [INFO] applying network configs...", > "[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan20", > "[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan30", > "[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan40", > "[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: vlan50", > "[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: eth2", > "[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: eth1", > "[2018/07/13 08:46:46 PM] [INFO] running ifdown on interface: eth0", > "[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan50", > "[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan20", > "[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan30", > "[2018/07/13 08:46:47 PM] [INFO] running ifdown on interface: vlan40", > "[2018/07/13 08:46:47 PM] [INFO] running ifdown on bridge: br-isolated", > "[2018/07/13 08:46:47 PM] [INFO] running ifdown on bridge: br-ex", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", 
> "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", > "[2018/07/13 08:46:47 PM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/07/13 08:46:47 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/07/13 08:46:47 PM] [INFO] running ifup on bridge: br-isolated", > "[2018/07/13 08:46:47 PM] [INFO] running ifup on bridge: br-ex", > "[2018/07/13 08:46:51 PM] [INFO] running ifup on interface: eth2", > "[2018/07/13 08:46:51 PM] [INFO] running ifup on interface: eth1", > "[2018/07/13 08:46:52 PM] [INFO] running ifup on interface: eth0", > "[2018/07/13 08:46:56 PM] [INFO] running ifup on interface: vlan50", > "[2018/07/13 08:47:00 PM] [INFO] running ifup on interface: vlan20", > "[2018/07/13 08:47:04 PM] [INFO] running ifup on interface: vlan30", > "[2018/07/13 08:47:08 PM] [INFO] running ifup on interface: vlan40", > "[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan20", > "[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan30", > "[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan40", > "[2018/07/13 08:47:13 PM] [INFO] running ifup on interface: vlan50", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.3", > "++ '[' -n 192.168.24.3 ']'", > "++ break", > "++ echo 192.168.24.3", > "+ local METADATA_IP=192.168.24.3", > "+ '[' -n 192.168.24.3 ']'", > "+ is_local_ip 192.168.24.3", > "+ local IP_TO_CHECK=192.168.24.3", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.3/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", > "+ _ping=ping", > "+ [[ 192.168.24.3 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.3", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-07-13 20:47:14,284] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/16e32153-dfd8-4498-bc7a-97ac6bc0909f", > "", > "[2018-07-13 20:47:14,289] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-07-13 20:47:14,290] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/16e32153-dfd8-4498-bc7a-97ac6bc0909f.json < /var/lib/heat-config/deployed/16e32153-dfd8-4498-bc7a-97ac6bc0909f.notify.json", > "[2018-07-13 20:47:14,728] (heat-config) [INFO] ", > "[2018-07-13 20:47:14,728] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:47:15,285 p=5867 u=mistral | TASK [Check-mode for Run deployment NetworkDeployment] ************************* >2018-07-13 20:47:15,285 p=5867 u=mistral | Friday 13 July 2018 20:47:15 -0400 (0:00:00.091) 0:00:38.473 *********** >2018-07-13 20:47:15,302 
p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:15,326 p=5867 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:47:15,326 p=5867 u=mistral | Friday 13 July 2018 20:47:15 -0400 (0:00:00.040) 0:00:38.514 *********** >2018-07-13 20:47:15,379 p=5867 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "8762c978-b498-4f7f-b0e2-1bcf7326d172"}, "changed": false} >2018-07-13 20:47:15,402 p=5867 u=mistral | TASK [Render deployment file for ControllerUpgradeInitDeployment] ************** >2018-07-13 20:47:15,402 p=5867 u=mistral | Friday 13 July 2018 20:47:15 -0400 (0:00:00.076) 0:00:38.590 *********** >2018-07-13 20:47:16,063 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "e20b53fc117f4bba7e9017f5e2416e2f19970281", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerUpgradeInitDeployment-8762c978-b498-4f7f-b0e2-1bcf7326d172", "gid": 0, "group": "root", "md5sum": "1e9266dc1e8fcadcdaf13ff2979fe8fd", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1183, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529235.45-162672246065137/source", "state": "file", "uid": 0} >2018-07-13 20:47:16,088 p=5867 u=mistral | TASK [Check if deployed file exists for ControllerUpgradeInitDeployment] ******* >2018-07-13 20:47:16,088 p=5867 u=mistral | Friday 13 July 2018 20:47:16 -0400 (0:00:00.686) 0:00:39.276 *********** >2018-07-13 20:47:16,438 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:47:16,463 p=5867 u=mistral | TASK [Check previous deployment rc for ControllerUpgradeInitDeployment] ******** >2018-07-13 20:47:16,464 p=5867 u=mistral | Friday 13 July 2018 20:47:16 -0400 (0:00:00.375) 0:00:39.651 *********** >2018-07-13 20:47:16,482 p=5867 u=mistral | skipping: [controller-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:16,506 p=5867 u=mistral | TASK [Remove deployed file for ControllerUpgradeInitDeployment when previous deployment failed] *** >2018-07-13 20:47:16,507 p=5867 u=mistral | Friday 13 July 2018 20:47:16 -0400 (0:00:00.043) 0:00:39.695 *********** >2018-07-13 20:47:16,525 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:16,548 p=5867 u=mistral | TASK [Force remove deployed file for ControllerUpgradeInitDeployment] ********** >2018-07-13 20:47:16,549 p=5867 u=mistral | Friday 13 July 2018 20:47:16 -0400 (0:00:00.041) 0:00:39.736 *********** >2018-07-13 20:47:16,566 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:16,590 p=5867 u=mistral | TASK [Run deployment ControllerUpgradeInitDeployment] ************************** >2018-07-13 20:47:16,590 p=5867 u=mistral | Friday 13 July 2018 20:47:16 -0400 (0:00:00.041) 0:00:39.778 *********** >2018-07-13 20:47:17,430 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/8762c978-b498-4f7f-b0e2-1bcf7326d172.notify.json)", "delta": "0:00:00.481689", "end": "2018-07-13 20:47:17.015666", "rc": 0, "start": "2018-07-13 20:47:16.533977", "stderr": "[2018-07-13 20:47:16,560] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/8762c978-b498-4f7f-b0e2-1bcf7326d172.json\n[2018-07-13 20:47:16,589] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-07-13 20:47:16,589] (heat-config) [DEBUG] [2018-07-13 20:47:16,581] (heat-config) [INFO] deploy_server_id=d78a7938-6926-47b3-9d46-a978a2832924\n[2018-07-13 20:47:16,581] (heat-config) [INFO] deploy_action=CREATE\n[2018-07-13 
20:47:16,581] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-j5jh6abemnx3-0-ii4yqcxtgfpd-ControllerUpgradeInitDeployment-pwdjqav5uxjr/ed803d05-4248-438b-90b0-2175bf25ec88\n[2018-07-13 20:47:16,581] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-07-13 20:47:16,581] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-07-13 20:47:16,581] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/8762c978-b498-4f7f-b0e2-1bcf7326d172\n[2018-07-13 20:47:16,585] (heat-config) [INFO] \n[2018-07-13 20:47:16,586] (heat-config) [DEBUG] \n[2018-07-13 20:47:16,586] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/8762c978-b498-4f7f-b0e2-1bcf7326d172\n\n[2018-07-13 20:47:16,589] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-07-13 20:47:16,589] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/8762c978-b498-4f7f-b0e2-1bcf7326d172.json < /var/lib/heat-config/deployed/8762c978-b498-4f7f-b0e2-1bcf7326d172.notify.json\n[2018-07-13 20:47:17,008] (heat-config) [INFO] \n[2018-07-13 20:47:17,009] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:47:16,560] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/8762c978-b498-4f7f-b0e2-1bcf7326d172.json", "[2018-07-13 20:47:16,589] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-07-13 20:47:16,589] (heat-config) [DEBUG] [2018-07-13 20:47:16,581] (heat-config) [INFO] deploy_server_id=d78a7938-6926-47b3-9d46-a978a2832924", "[2018-07-13 20:47:16,581] (heat-config) [INFO] deploy_action=CREATE", "[2018-07-13 20:47:16,581] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-j5jh6abemnx3-0-ii4yqcxtgfpd-ControllerUpgradeInitDeployment-pwdjqav5uxjr/ed803d05-4248-438b-90b0-2175bf25ec88", "[2018-07-13 20:47:16,581] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", 
"[2018-07-13 20:47:16,581] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-07-13 20:47:16,581] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/8762c978-b498-4f7f-b0e2-1bcf7326d172", "[2018-07-13 20:47:16,585] (heat-config) [INFO] ", "[2018-07-13 20:47:16,586] (heat-config) [DEBUG] ", "[2018-07-13 20:47:16,586] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/8762c978-b498-4f7f-b0e2-1bcf7326d172", "", "[2018-07-13 20:47:16,589] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-07-13 20:47:16,589] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/8762c978-b498-4f7f-b0e2-1bcf7326d172.json < /var/lib/heat-config/deployed/8762c978-b498-4f7f-b0e2-1bcf7326d172.notify.json", "[2018-07-13 20:47:17,008] (heat-config) [INFO] ", "[2018-07-13 20:47:17,009] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:47:17,455 p=5867 u=mistral | TASK [Output for ControllerUpgradeInitDeployment] ****************************** >2018-07-13 20:47:17,455 p=5867 u=mistral | Friday 13 July 2018 20:47:17 -0400 (0:00:00.865) 0:00:40.643 *********** >2018-07-13 20:47:17,507 p=5867 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:47:16,560] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/8762c978-b498-4f7f-b0e2-1bcf7326d172.json", > "[2018-07-13 20:47:16,589] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-07-13 20:47:16,589] (heat-config) [DEBUG] [2018-07-13 20:47:16,581] (heat-config) [INFO] deploy_server_id=d78a7938-6926-47b3-9d46-a978a2832924", > "[2018-07-13 20:47:16,581] (heat-config) [INFO] deploy_action=CREATE", > "[2018-07-13 20:47:16,581] (heat-config) [INFO] 
deploy_stack_id=overcloud-Controller-j5jh6abemnx3-0-ii4yqcxtgfpd-ControllerUpgradeInitDeployment-pwdjqav5uxjr/ed803d05-4248-438b-90b0-2175bf25ec88", > "[2018-07-13 20:47:16,581] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-07-13 20:47:16,581] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-07-13 20:47:16,581] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/8762c978-b498-4f7f-b0e2-1bcf7326d172", > "[2018-07-13 20:47:16,585] (heat-config) [INFO] ", > "[2018-07-13 20:47:16,586] (heat-config) [DEBUG] ", > "[2018-07-13 20:47:16,586] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/8762c978-b498-4f7f-b0e2-1bcf7326d172", > "", > "[2018-07-13 20:47:16,589] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-07-13 20:47:16,589] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/8762c978-b498-4f7f-b0e2-1bcf7326d172.json < /var/lib/heat-config/deployed/8762c978-b498-4f7f-b0e2-1bcf7326d172.notify.json", > "[2018-07-13 20:47:17,008] (heat-config) [INFO] ", > "[2018-07-13 20:47:17,009] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:47:17,532 p=5867 u=mistral | TASK [Check-mode for Run deployment ControllerUpgradeInitDeployment] *********** >2018-07-13 20:47:17,532 p=5867 u=mistral | Friday 13 July 2018 20:47:17 -0400 (0:00:00.076) 0:00:40.720 *********** >2018-07-13 20:47:17,547 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:17,571 p=5867 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:47:17,571 p=5867 u=mistral | Friday 13 July 2018 20:47:17 -0400 (0:00:00.038) 0:00:40.759 *********** >2018-07-13 20:47:17,927 p=5867 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "34bb0914-1986-4d4f-999b-eee8015d5b77"}, "changed": false} 
>2018-07-13 20:47:17,953 p=5867 u=mistral | TASK [Render deployment file for ControllerDeployment] ************************* >2018-07-13 20:47:17,954 p=5867 u=mistral | Friday 13 July 2018 20:47:17 -0400 (0:00:00.382) 0:00:41.142 *********** >2018-07-13 20:47:18,967 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "88735bcd813a3438a6d7d4804fc8dd8d71ed3982", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerDeployment-34bb0914-1986-4d4f-999b-eee8015d5b77", "gid": 0, "group": "root", "md5sum": "25c5bfc1d02e0580c0f05d2a7f60bf82", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 73362, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529238.34-17684238272313/source", "state": "file", "uid": 0} >2018-07-13 20:47:18,992 p=5867 u=mistral | TASK [Check if deployed file exists for ControllerDeployment] ****************** >2018-07-13 20:47:18,993 p=5867 u=mistral | Friday 13 July 2018 20:47:18 -0400 (0:00:01.038) 0:00:42.180 *********** >2018-07-13 20:47:19,406 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:47:19,429 p=5867 u=mistral | TASK [Check previous deployment rc for ControllerDeployment] ******************* >2018-07-13 20:47:19,430 p=5867 u=mistral | Friday 13 July 2018 20:47:19 -0400 (0:00:00.436) 0:00:42.617 *********** >2018-07-13 20:47:19,449 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:19,473 p=5867 u=mistral | TASK [Remove deployed file for ControllerDeployment when previous deployment failed] *** >2018-07-13 20:47:19,474 p=5867 u=mistral | Friday 13 July 2018 20:47:19 -0400 (0:00:00.044) 0:00:42.662 *********** >2018-07-13 20:47:19,492 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:19,516 p=5867 u=mistral | TASK [Force remove deployed file for 
ControllerDeployment] ********************* >2018-07-13 20:47:19,516 p=5867 u=mistral | Friday 13 July 2018 20:47:19 -0400 (0:00:00.042) 0:00:42.704 *********** >2018-07-13 20:47:19,533 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:19,557 p=5867 u=mistral | TASK [Run deployment ControllerDeployment] ************************************* >2018-07-13 20:47:19,557 p=5867 u=mistral | Friday 13 July 2018 20:47:19 -0400 (0:00:00.041) 0:00:42.745 *********** >2018-07-13 20:47:20,531 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/34bb0914-1986-4d4f-999b-eee8015d5b77.notify.json)", "delta": "0:00:00.618147", "end": "2018-07-13 20:47:20.115687", "rc": 0, "start": "2018-07-13 20:47:19.497540", "stderr": "[2018-07-13 20:47:19,531] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/34bb0914-1986-4d4f-999b-eee8015d5b77.json\n[2018-07-13 20:47:19,656] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-07-13 20:47:19,656] (heat-config) [DEBUG] \n[2018-07-13 20:47:19,656] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-07-13 20:47:19,656] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/34bb0914-1986-4d4f-999b-eee8015d5b77.json < /var/lib/heat-config/deployed/34bb0914-1986-4d4f-999b-eee8015d5b77.notify.json\n[2018-07-13 20:47:20,107] (heat-config) [INFO] \n[2018-07-13 20:47:20,108] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:47:19,531] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/34bb0914-1986-4d4f-999b-eee8015d5b77.json", "[2018-07-13 20:47:19,656] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", 
\"deploy_status_code\": 0}", "[2018-07-13 20:47:19,656] (heat-config) [DEBUG] ", "[2018-07-13 20:47:19,656] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-07-13 20:47:19,656] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/34bb0914-1986-4d4f-999b-eee8015d5b77.json < /var/lib/heat-config/deployed/34bb0914-1986-4d4f-999b-eee8015d5b77.notify.json", "[2018-07-13 20:47:20,107] (heat-config) [INFO] ", "[2018-07-13 20:47:20,108] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:47:20,556 p=5867 u=mistral | TASK [Output for ControllerDeployment] ***************************************** >2018-07-13 20:47:20,556 p=5867 u=mistral | Friday 13 July 2018 20:47:20 -0400 (0:00:00.998) 0:00:43.744 *********** >2018-07-13 20:47:20,659 p=5867 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:47:19,531] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/34bb0914-1986-4d4f-999b-eee8015d5b77.json", > "[2018-07-13 20:47:19,656] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-07-13 20:47:19,656] (heat-config) [DEBUG] ", > "[2018-07-13 20:47:19,656] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-07-13 20:47:19,656] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/34bb0914-1986-4d4f-999b-eee8015d5b77.json < /var/lib/heat-config/deployed/34bb0914-1986-4d4f-999b-eee8015d5b77.notify.json", > "[2018-07-13 20:47:20,107] (heat-config) [INFO] ", > "[2018-07-13 20:47:20,108] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:47:20,684 p=5867 u=mistral | TASK [Check-mode for Run deployment ControllerDeployment] ********************** >2018-07-13 20:47:20,684 p=5867 u=mistral | Friday 13 July 2018 20:47:20 -0400 (0:00:00.128) 0:00:43.872 *********** 
>2018-07-13 20:47:20,700 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:20,723 p=5867 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:47:20,723 p=5867 u=mistral | Friday 13 July 2018 20:47:20 -0400 (0:00:00.038) 0:00:43.911 *********** >2018-07-13 20:47:20,829 p=5867 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "bc49fe03-cd2e-4f01-bb81-9215a1ba3d62"}, "changed": false} >2018-07-13 20:47:20,853 p=5867 u=mistral | TASK [Render deployment file for ControllerHostsDeployment] ******************** >2018-07-13 20:47:20,853 p=5867 u=mistral | Friday 13 July 2018 20:47:20 -0400 (0:00:00.130) 0:00:44.041 *********** >2018-07-13 20:47:21,513 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "97a7d68209994189cb7a432260760e1c2761b231", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerHostsDeployment-bc49fe03-cd2e-4f01-bb81-9215a1ba3d62", "gid": 0, "group": "root", "md5sum": "de639a6442c929cd20c7c8bd7871fbb8", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4429, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529240.91-114053888676027/source", "state": "file", "uid": 0} >2018-07-13 20:47:21,541 p=5867 u=mistral | TASK [Check if deployed file exists for ControllerHostsDeployment] ************* >2018-07-13 20:47:21,541 p=5867 u=mistral | Friday 13 July 2018 20:47:21 -0400 (0:00:00.687) 0:00:44.729 *********** >2018-07-13 20:47:21,951 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:47:21,977 p=5867 u=mistral | TASK [Check previous deployment rc for ControllerHostsDeployment] ************** >2018-07-13 20:47:21,977 p=5867 u=mistral | Friday 13 July 2018 20:47:21 -0400 (0:00:00.436) 0:00:45.165 *********** >2018-07-13 20:47:21,995 p=5867 u=mistral | skipping: [controller-0] 
=> {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:22,019 p=5867 u=mistral | TASK [Remove deployed file for ControllerHostsDeployment when previous deployment failed] *** >2018-07-13 20:47:22,019 p=5867 u=mistral | Friday 13 July 2018 20:47:22 -0400 (0:00:00.042) 0:00:45.207 *********** >2018-07-13 20:47:22,038 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:22,062 p=5867 u=mistral | TASK [Force remove deployed file for ControllerHostsDeployment] **************** >2018-07-13 20:47:22,062 p=5867 u=mistral | Friday 13 July 2018 20:47:22 -0400 (0:00:00.042) 0:00:45.250 *********** >2018-07-13 20:47:22,079 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:22,104 p=5867 u=mistral | TASK [Run deployment ControllerHostsDeployment] ******************************** >2018-07-13 20:47:22,104 p=5867 u=mistral | Friday 13 July 2018 20:47:22 -0400 (0:00:00.042) 0:00:45.292 *********** >2018-07-13 20:47:23,062 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/bc49fe03-cd2e-4f01-bb81-9215a1ba3d62.notify.json)", "delta": "0:00:00.481071", "end": "2018-07-13 20:47:22.581776", "rc": 0, "start": "2018-07-13 20:47:22.100705", "stderr": "[2018-07-13 20:47:22,125] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/bc49fe03-cd2e-4f01-bb81-9215a1ba3d62.json\n[2018-07-13 20:47:22,177] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 
overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain 
controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.10 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}\n[2018-07-13 20:47:22,177] (heat-config) [DEBUG] [2018-07-13 20:47:22,146] (heat-config) [INFO] hosts=192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-07-13 20:47:22,146] (heat-config) [INFO] 
deploy_server_id=d78a7938-6926-47b3-9d46-a978a2832924\n[2018-07-13 20:47:22,146] (heat-config) [INFO] deploy_action=CREATE\n[2018-07-13 20:47:22,146] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-dvo5gnumoprq-0-g743l2hbwftg/cd7ef924-7c60-4145-a22a-198a58a97a32\n[2018-07-13 20:47:22,146] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-07-13 20:47:22,146] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-07-13 20:47:22,147] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/bc49fe03-cd2e-4f01-bb81-9215a1ba3d62\n[2018-07-13 20:47:22,173] (heat-config) [INFO] \n[2018-07-13 20:47:22,173] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain 
compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 
ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n++ hostname -s\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 
ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 
ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n++ hostname -s\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 
'entries=192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n++ hostname -s\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.10 
overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n++ hostname -s\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries /etc/hosts '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 
overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n++ hostname -s\n+ sed -i /controller-0/d /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-07-13 
20:47:22,174] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/bc49fe03-cd2e-4f01-bb81-9215a1ba3d62\n\n[2018-07-13 20:47:22,177] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-07-13 20:47:22,178] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/bc49fe03-cd2e-4f01-bb81-9215a1ba3d62.json < /var/lib/heat-config/deployed/bc49fe03-cd2e-4f01-bb81-9215a1ba3d62.notify.json\n[2018-07-13 20:47:22,574] (heat-config) [INFO] \n[2018-07-13 20:47:22,575] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:47:22,125] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/bc49fe03-cd2e-4f01-bb81-9215a1ba3d62.json", "[2018-07-13 20:47:22,177] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain 
compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 
compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain 
ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 
ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.10 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", "[2018-07-13 20:47:22,177] (heat-config) [DEBUG] [2018-07-13 20:47:22,146] (heat-config) [INFO] hosts=192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-07-13 20:47:22,146] (heat-config) 
[INFO] deploy_server_id=d78a7938-6926-47b3-9d46-a978a2832924", "[2018-07-13 20:47:22,146] (heat-config) [INFO] deploy_action=CREATE", "[2018-07-13 20:47:22,146] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-dvo5gnumoprq-0-g743l2hbwftg/cd7ef924-7c60-4145-a22a-198a58a97a32", "[2018-07-13 20:47:22,146] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-07-13 20:47:22,146] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-07-13 20:47:22,147] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/bc49fe03-cd2e-4f01-bb81-9215a1ba3d62", "[2018-07-13 20:47:22,173] (heat-config) [INFO] ", "[2018-07-13 20:47:22,173] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", 
"192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 
ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain 
ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain 
ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", 
"172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain 
ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", 
"192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain 
ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain 
ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 
ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "++ hostname -s", "+ sed -i /controller-0/d /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "", "[2018-07-13 20:47:22,174] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/bc49fe03-cd2e-4f01-bb81-9215a1ba3d62", "", "[2018-07-13 20:47:22,177] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-07-13 20:47:22,178] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/bc49fe03-cd2e-4f01-bb81-9215a1ba3d62.json < /var/lib/heat-config/deployed/bc49fe03-cd2e-4f01-bb81-9215a1ba3d62.notify.json", "[2018-07-13 20:47:22,574] (heat-config) [INFO] ", "[2018-07-13 20:47:22,575] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:47:23,102 p=5867 u=mistral | TASK [Output for ControllerHostsDeployment] ************************************ >2018-07-13 20:47:23,102 p=5867 u=mistral | Friday 13 July 2018 20:47:23 -0400 (0:00:00.998) 0:00:46.290 *********** >2018-07-13 20:47:23,184 p=5867 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:47:22,125] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/bc49fe03-cd2e-4f01-bb81-9215a1ba3d62.json", > "[2018-07-13 20:47:22,177] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 
overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain 
controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.10 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", > "[2018-07-13 20:47:22,177] (heat-config) [DEBUG] [2018-07-13 20:47:22,146] (heat-config) [INFO] hosts=192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 
ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-07-13 20:47:22,146] (heat-config) [INFO] deploy_server_id=d78a7938-6926-47b3-9d46-a978a2832924", > "[2018-07-13 20:47:22,146] (heat-config) [INFO] deploy_action=CREATE", > "[2018-07-13 20:47:22,146] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-dvo5gnumoprq-0-g743l2hbwftg/cd7ef924-7c60-4145-a22a-198a58a97a32", > "[2018-07-13 20:47:22,146] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-07-13 20:47:22,146] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-07-13 20:47:22,147] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/bc49fe03-cd2e-4f01-bb81-9215a1ba3d62", > "[2018-07-13 20:47:22,173] (heat-config) [INFO] ", > "[2018-07-13 20:47:22,173] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", 
> "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 
compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > 
"192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", 
> "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain 
compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > 
"172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain 
compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain 
compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain 
controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 
controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > 
"192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 
ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 
ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain 
ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-07-13 20:47:22,174] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/bc49fe03-cd2e-4f01-bb81-9215a1ba3d62", > "", > "[2018-07-13 20:47:22,177] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-07-13 20:47:22,178] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/bc49fe03-cd2e-4f01-bb81-9215a1ba3d62.json < /var/lib/heat-config/deployed/bc49fe03-cd2e-4f01-bb81-9215a1ba3d62.notify.json", > "[2018-07-13 20:47:22,574] (heat-config) [INFO] ", > "[2018-07-13 20:47:22,575] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:47:23,225 p=5867 u=mistral | TASK [Check-mode for Run deployment ControllerHostsDeployment] ***************** >2018-07-13 20:47:23,225 p=5867 u=mistral | Friday 13 July 2018 20:47:23 -0400 (0:00:00.122) 0:00:46.413 *********** >2018-07-13 20:47:23,240 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:23,262 p=5867 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:47:23,262 p=5867 u=mistral | Friday 13 July 2018 20:47:23 -0400 (0:00:00.036) 0:00:46.450 *********** >2018-07-13 20:47:23,395 p=5867 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "54d099d5-37e3-48a0-8d73-73bbef521d3f"}, "changed": false} >2018-07-13 20:47:23,420 p=5867 u=mistral | TASK [Render deployment file for ControllerAllNodesDeployment] ***************** >2018-07-13 20:47:23,421 p=5867 u=mistral | Friday 13 July 2018 20:47:23 -0400 (0:00:00.158) 0:00:46.608 *********** >2018-07-13 20:47:24,184 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "0c29cdb0a99bfc73788d9c10b6156aa2286a52d4", 
"dest": "/var/lib/heat-config/tripleo-config-download/ControllerAllNodesDeployment-54d099d5-37e3-48a0-8d73-73bbef521d3f", "gid": 0, "group": "root", "md5sum": "e38036687d44d3695c5afd27ed7fa385", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19032, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529243.57-159857520031342/source", "state": "file", "uid": 0} >2018-07-13 20:47:24,206 p=5867 u=mistral | TASK [Check if deployed file exists for ControllerAllNodesDeployment] ********** >2018-07-13 20:47:24,206 p=5867 u=mistral | Friday 13 July 2018 20:47:24 -0400 (0:00:00.785) 0:00:47.394 *********** >2018-07-13 20:47:24,554 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:47:24,581 p=5867 u=mistral | TASK [Check previous deployment rc for ControllerAllNodesDeployment] *********** >2018-07-13 20:47:24,581 p=5867 u=mistral | Friday 13 July 2018 20:47:24 -0400 (0:00:00.374) 0:00:47.769 *********** >2018-07-13 20:47:24,599 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:24,623 p=5867 u=mistral | TASK [Remove deployed file for ControllerAllNodesDeployment when previous deployment failed] *** >2018-07-13 20:47:24,624 p=5867 u=mistral | Friday 13 July 2018 20:47:24 -0400 (0:00:00.042) 0:00:47.811 *********** >2018-07-13 20:47:24,642 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:24,667 p=5867 u=mistral | TASK [Force remove deployed file for ControllerAllNodesDeployment] ************* >2018-07-13 20:47:24,667 p=5867 u=mistral | Friday 13 July 2018 20:47:24 -0400 (0:00:00.043) 0:00:47.855 *********** >2018-07-13 20:47:24,685 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:24,710 p=5867 u=mistral | TASK [Run deployment 
ControllerAllNodesDeployment] ***************************** >2018-07-13 20:47:24,710 p=5867 u=mistral | Friday 13 July 2018 20:47:24 -0400 (0:00:00.043) 0:00:47.898 *********** >2018-07-13 20:47:25,656 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/54d099d5-37e3-48a0-8d73-73bbef521d3f.notify.json)", "delta": "0:00:00.586159", "end": "2018-07-13 20:47:25.241838", "rc": 0, "start": "2018-07-13 20:47:24.655679", "stderr": "[2018-07-13 20:47:24,684] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/54d099d5-37e3-48a0-8d73-73bbef521d3f.json\n[2018-07-13 20:47:24,807] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-07-13 20:47:24,807] (heat-config) [DEBUG] \n[2018-07-13 20:47:24,807] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-07-13 20:47:24,808] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/54d099d5-37e3-48a0-8d73-73bbef521d3f.json < /var/lib/heat-config/deployed/54d099d5-37e3-48a0-8d73-73bbef521d3f.notify.json\n[2018-07-13 20:47:25,234] (heat-config) [INFO] \n[2018-07-13 20:47:25,234] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:47:24,684] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/54d099d5-37e3-48a0-8d73-73bbef521d3f.json", "[2018-07-13 20:47:24,807] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-07-13 20:47:24,807] (heat-config) [DEBUG] ", "[2018-07-13 20:47:24,807] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-07-13 20:47:24,808] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/54d099d5-37e3-48a0-8d73-73bbef521d3f.json < 
/var/lib/heat-config/deployed/54d099d5-37e3-48a0-8d73-73bbef521d3f.notify.json", "[2018-07-13 20:47:25,234] (heat-config) [INFO] ", "[2018-07-13 20:47:25,234] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:47:25,681 p=5867 u=mistral | TASK [Output for ControllerAllNodesDeployment] ********************************* >2018-07-13 20:47:25,681 p=5867 u=mistral | Friday 13 July 2018 20:47:25 -0400 (0:00:00.970) 0:00:48.869 *********** >2018-07-13 20:47:25,731 p=5867 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:47:24,684] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/54d099d5-37e3-48a0-8d73-73bbef521d3f.json", > "[2018-07-13 20:47:24,807] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-07-13 20:47:24,807] (heat-config) [DEBUG] ", > "[2018-07-13 20:47:24,807] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-07-13 20:47:24,808] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/54d099d5-37e3-48a0-8d73-73bbef521d3f.json < /var/lib/heat-config/deployed/54d099d5-37e3-48a0-8d73-73bbef521d3f.notify.json", > "[2018-07-13 20:47:25,234] (heat-config) [INFO] ", > "[2018-07-13 20:47:25,234] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:47:25,755 p=5867 u=mistral | TASK [Check-mode for Run deployment ControllerAllNodesDeployment] ************** >2018-07-13 20:47:25,755 p=5867 u=mistral | Friday 13 July 2018 20:47:25 -0400 (0:00:00.074) 0:00:48.943 *********** >2018-07-13 20:47:25,770 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:25,794 p=5867 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:47:25,794 p=5867 u=mistral | Friday 13 July 2018 
20:47:25 -0400 (0:00:00.038) 0:00:48.982 *********** >2018-07-13 20:47:25,853 p=5867 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "bb7bdd98-0662-4fd9-8a09-a74e18f1e10f"}, "changed": false} >2018-07-13 20:47:25,876 p=5867 u=mistral | TASK [Render deployment file for ControllerAllNodesValidationDeployment] ******* >2018-07-13 20:47:25,877 p=5867 u=mistral | Friday 13 July 2018 20:47:25 -0400 (0:00:00.082) 0:00:49.064 *********** >2018-07-13 20:47:26,539 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "85e4a6adad15a8d66a93e2f2f5669dc6af7755b6", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerAllNodesValidationDeployment-bb7bdd98-0662-4fd9-8a09-a74e18f1e10f", "gid": 0, "group": "root", "md5sum": "eb57a21905c1427506e3bb4049d1236f", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4940, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529245.94-94967516927940/source", "state": "file", "uid": 0} >2018-07-13 20:47:26,567 p=5867 u=mistral | TASK [Check if deployed file exists for ControllerAllNodesValidationDeployment] *** >2018-07-13 20:47:26,567 p=5867 u=mistral | Friday 13 July 2018 20:47:26 -0400 (0:00:00.690) 0:00:49.755 *********** >2018-07-13 20:47:26,923 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:47:26,948 p=5867 u=mistral | TASK [Check previous deployment rc for ControllerAllNodesValidationDeployment] *** >2018-07-13 20:47:26,948 p=5867 u=mistral | Friday 13 July 2018 20:47:26 -0400 (0:00:00.381) 0:00:50.136 *********** >2018-07-13 20:47:26,966 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:26,989 p=5867 u=mistral | TASK [Remove deployed file for ControllerAllNodesValidationDeployment when previous deployment failed] *** >2018-07-13 20:47:26,989 p=5867 u=mistral | Friday 13 July 2018 20:47:26 -0400 
(0:00:00.040) 0:00:50.177 *********** >2018-07-13 20:47:27,007 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:27,029 p=5867 u=mistral | TASK [Force remove deployed file for ControllerAllNodesValidationDeployment] *** >2018-07-13 20:47:27,029 p=5867 u=mistral | Friday 13 July 2018 20:47:27 -0400 (0:00:00.040) 0:00:50.217 *********** >2018-07-13 20:47:27,047 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:27,069 p=5867 u=mistral | TASK [Run deployment ControllerAllNodesValidationDeployment] ******************* >2018-07-13 20:47:27,069 p=5867 u=mistral | Friday 13 July 2018 20:47:27 -0400 (0:00:00.039) 0:00:50.257 *********** >2018-07-13 20:47:28,628 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/bb7bdd98-0662-4fd9-8a09-a74e18f1e10f.notify.json)", "delta": "0:00:01.196142", "end": "2018-07-13 20:47:28.214313", "rc": 0, "start": "2018-07-13 20:47:27.018171", "stderr": "[2018-07-13 20:47:27,046] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/bb7bdd98-0662-4fd9-8a09-a74e18f1e10f.json\n[2018-07-13 20:47:27,768] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.106 for local network 10.0.0.0/24.\\nPing to 10.0.0.106 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.19 for local network 172.17.1.0/24.\\nPing to 172.17.1.19 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\\nPing to 172.17.2.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.20 for local network 172.17.3.0/24.\\nPing to 172.17.3.20 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.18 for local network 172.17.4.0/24.\\nPing to 172.17.4.18 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.7 for local network 
192.168.24.0/24.\\nPing to 192.168.24.7 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-07-13 20:47:27,769] (heat-config) [DEBUG] [2018-07-13 20:47:27,066] (heat-config) [INFO] ping_test_ips=172.17.3.20 172.17.4.18 172.17.1.19 172.17.2.15 10.0.0.106 192.168.24.7\n[2018-07-13 20:47:27,067] (heat-config) [INFO] validate_fqdn=False\n[2018-07-13 20:47:27,067] (heat-config) [INFO] validate_ntp=True\n[2018-07-13 20:47:27,067] (heat-config) [INFO] deploy_server_id=d78a7938-6926-47b3-9d46-a978a2832924\n[2018-07-13 20:47:27,067] (heat-config) [INFO] deploy_action=CREATE\n[2018-07-13 20:47:27,067] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-jaz5v56iun5j-0-sb7qyltotbxm/8aa26689-a0f3-4f42-aec6-78af08653bce\n[2018-07-13 20:47:27,067] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-07-13 20:47:27,067] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-07-13 20:47:27,067] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/bb7bdd98-0662-4fd9-8a09-a74e18f1e10f\n[2018-07-13 20:47:27,764] (heat-config) [INFO] Trying to ping 10.0.0.106 for local network 10.0.0.0/24.\nPing to 10.0.0.106 succeeded.\nSUCCESS\nTrying to ping 172.17.1.19 for local network 172.17.1.0/24.\nPing to 172.17.1.19 succeeded.\nSUCCESS\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\nPing to 172.17.2.15 succeeded.\nSUCCESS\nTrying to ping 172.17.3.20 for local network 172.17.3.0/24.\nPing to 172.17.3.20 succeeded.\nSUCCESS\nTrying to ping 172.17.4.18 for local network 172.17.4.0/24.\nPing to 172.17.4.18 succeeded.\nSUCCESS\nTrying to ping 192.168.24.7 for local network 192.168.24.0/24.\nPing to 192.168.24.7 succeeded.\nSUCCESS\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\nSUCCESS\n\n[2018-07-13 20:47:27,764] (heat-config) [DEBUG] \n[2018-07-13 20:47:27,765] 
(heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/bb7bdd98-0662-4fd9-8a09-a74e18f1e10f\n\n[2018-07-13 20:47:27,769] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-07-13 20:47:27,769] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/bb7bdd98-0662-4fd9-8a09-a74e18f1e10f.json < /var/lib/heat-config/deployed/bb7bdd98-0662-4fd9-8a09-a74e18f1e10f.notify.json\n[2018-07-13 20:47:28,207] (heat-config) [INFO] \n[2018-07-13 20:47:28,208] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:47:27,046] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/bb7bdd98-0662-4fd9-8a09-a74e18f1e10f.json", "[2018-07-13 20:47:27,768] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.106 for local network 10.0.0.0/24.\\nPing to 10.0.0.106 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.19 for local network 172.17.1.0/24.\\nPing to 172.17.1.19 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\\nPing to 172.17.2.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.20 for local network 172.17.3.0/24.\\nPing to 172.17.3.20 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.18 for local network 172.17.4.0/24.\\nPing to 172.17.4.18 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.7 for local network 192.168.24.0/24.\\nPing to 192.168.24.7 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-07-13 20:47:27,769] (heat-config) [DEBUG] [2018-07-13 20:47:27,066] (heat-config) [INFO] ping_test_ips=172.17.3.20 172.17.4.18 172.17.1.19 172.17.2.15 10.0.0.106 192.168.24.7", "[2018-07-13 20:47:27,067] (heat-config) [INFO] validate_fqdn=False", "[2018-07-13 20:47:27,067] (heat-config) [INFO] validate_ntp=True", "[2018-07-13 20:47:27,067] (heat-config) [INFO] deploy_server_id=d78a7938-6926-47b3-9d46-a978a2832924", 
"[2018-07-13 20:47:27,067] (heat-config) [INFO] deploy_action=CREATE", "[2018-07-13 20:47:27,067] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-jaz5v56iun5j-0-sb7qyltotbxm/8aa26689-a0f3-4f42-aec6-78af08653bce", "[2018-07-13 20:47:27,067] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-07-13 20:47:27,067] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-07-13 20:47:27,067] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/bb7bdd98-0662-4fd9-8a09-a74e18f1e10f", "[2018-07-13 20:47:27,764] (heat-config) [INFO] Trying to ping 10.0.0.106 for local network 10.0.0.0/24.", "Ping to 10.0.0.106 succeeded.", "SUCCESS", "Trying to ping 172.17.1.19 for local network 172.17.1.0/24.", "Ping to 172.17.1.19 succeeded.", "SUCCESS", "Trying to ping 172.17.2.15 for local network 172.17.2.0/24.", "Ping to 172.17.2.15 succeeded.", "SUCCESS", "Trying to ping 172.17.3.20 for local network 172.17.3.0/24.", "Ping to 172.17.3.20 succeeded.", "SUCCESS", "Trying to ping 172.17.4.18 for local network 172.17.4.0/24.", "Ping to 172.17.4.18 succeeded.", "SUCCESS", "Trying to ping 192.168.24.7 for local network 192.168.24.0/24.", "Ping to 192.168.24.7 succeeded.", "SUCCESS", "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", "SUCCESS", "", "[2018-07-13 20:47:27,764] (heat-config) [DEBUG] ", "[2018-07-13 20:47:27,765] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/bb7bdd98-0662-4fd9-8a09-a74e18f1e10f", "", "[2018-07-13 20:47:27,769] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-07-13 20:47:27,769] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/bb7bdd98-0662-4fd9-8a09-a74e18f1e10f.json < /var/lib/heat-config/deployed/bb7bdd98-0662-4fd9-8a09-a74e18f1e10f.notify.json", "[2018-07-13 20:47:28,207] (heat-config) [INFO] ", "[2018-07-13 20:47:28,208] (heat-config) [DEBUG] "], "stdout": "", 
"stdout_lines": []} >2018-07-13 20:47:28,655 p=5867 u=mistral | TASK [Output for ControllerAllNodesValidationDeployment] *********************** >2018-07-13 20:47:28,655 p=5867 u=mistral | Friday 13 July 2018 20:47:28 -0400 (0:00:01.585) 0:00:51.843 *********** >2018-07-13 20:47:28,706 p=5867 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:47:27,046] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/bb7bdd98-0662-4fd9-8a09-a74e18f1e10f.json", > "[2018-07-13 20:47:27,768] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.106 for local network 10.0.0.0/24.\\nPing to 10.0.0.106 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.19 for local network 172.17.1.0/24.\\nPing to 172.17.1.19 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\\nPing to 172.17.2.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.20 for local network 172.17.3.0/24.\\nPing to 172.17.3.20 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.18 for local network 172.17.4.0/24.\\nPing to 172.17.4.18 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.7 for local network 192.168.24.0/24.\\nPing to 192.168.24.7 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-07-13 20:47:27,769] (heat-config) [DEBUG] [2018-07-13 20:47:27,066] (heat-config) [INFO] ping_test_ips=172.17.3.20 172.17.4.18 172.17.1.19 172.17.2.15 10.0.0.106 192.168.24.7", > "[2018-07-13 20:47:27,067] (heat-config) [INFO] validate_fqdn=False", > "[2018-07-13 20:47:27,067] (heat-config) [INFO] validate_ntp=True", > "[2018-07-13 20:47:27,067] (heat-config) [INFO] deploy_server_id=d78a7938-6926-47b3-9d46-a978a2832924", > "[2018-07-13 20:47:27,067] (heat-config) [INFO] deploy_action=CREATE", > "[2018-07-13 20:47:27,067] (heat-config) [INFO] 
deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-jaz5v56iun5j-0-sb7qyltotbxm/8aa26689-a0f3-4f42-aec6-78af08653bce", > "[2018-07-13 20:47:27,067] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-07-13 20:47:27,067] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-07-13 20:47:27,067] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/bb7bdd98-0662-4fd9-8a09-a74e18f1e10f", > "[2018-07-13 20:47:27,764] (heat-config) [INFO] Trying to ping 10.0.0.106 for local network 10.0.0.0/24.", > "Ping to 10.0.0.106 succeeded.", > "SUCCESS", > "Trying to ping 172.17.1.19 for local network 172.17.1.0/24.", > "Ping to 172.17.1.19 succeeded.", > "SUCCESS", > "Trying to ping 172.17.2.15 for local network 172.17.2.0/24.", > "Ping to 172.17.2.15 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.20 for local network 172.17.3.0/24.", > "Ping to 172.17.3.20 succeeded.", > "SUCCESS", > "Trying to ping 172.17.4.18 for local network 172.17.4.0/24.", > "Ping to 172.17.4.18 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.7 for local network 192.168.24.0/24.", > "Ping to 192.168.24.7 succeeded.", > "SUCCESS", > "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", > "SUCCESS", > "", > "[2018-07-13 20:47:27,764] (heat-config) [DEBUG] ", > "[2018-07-13 20:47:27,765] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/bb7bdd98-0662-4fd9-8a09-a74e18f1e10f", > "", > "[2018-07-13 20:47:27,769] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-07-13 20:47:27,769] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/bb7bdd98-0662-4fd9-8a09-a74e18f1e10f.json < /var/lib/heat-config/deployed/bb7bdd98-0662-4fd9-8a09-a74e18f1e10f.notify.json", > "[2018-07-13 20:47:28,207] (heat-config) [INFO] ", > "[2018-07-13 20:47:28,208] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:47:28,732 p=5867 
u=mistral | TASK [Check-mode for Run deployment ControllerAllNodesValidationDeployment] **** >2018-07-13 20:47:28,732 p=5867 u=mistral | Friday 13 July 2018 20:47:28 -0400 (0:00:00.077) 0:00:51.920 *********** >2018-07-13 20:47:28,748 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:28,771 p=5867 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:47:28,772 p=5867 u=mistral | Friday 13 July 2018 20:47:28 -0400 (0:00:00.039) 0:00:51.960 *********** >2018-07-13 20:47:28,824 p=5867 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "d97fea0c-847c-4d6a-93f4-1366d6eee682"}, "changed": false} >2018-07-13 20:47:28,849 p=5867 u=mistral | TASK [Render deployment file for ControllerArtifactsDeploy] ******************** >2018-07-13 20:47:28,849 p=5867 u=mistral | Friday 13 July 2018 20:47:28 -0400 (0:00:00.077) 0:00:52.037 *********** >2018-07-13 20:47:29,506 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "5b94f448646f66e0f8fa51aa5f64b5f2c2214204", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerArtifactsDeploy-d97fea0c-847c-4d6a-93f4-1366d6eee682", "gid": 0, "group": "root", "md5sum": "90907bb18f8237ea4abdf03ded428aca", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2021, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529248.9-46117800787283/source", "state": "file", "uid": 0} >2018-07-13 20:47:29,530 p=5867 u=mistral | TASK [Check if deployed file exists for ControllerArtifactsDeploy] ************* >2018-07-13 20:47:29,531 p=5867 u=mistral | Friday 13 July 2018 20:47:29 -0400 (0:00:00.681) 0:00:52.719 *********** >2018-07-13 20:47:29,889 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:47:29,913 p=5867 u=mistral | TASK [Check previous deployment rc for 
ControllerArtifactsDeploy] ************** >2018-07-13 20:47:29,913 p=5867 u=mistral | Friday 13 July 2018 20:47:29 -0400 (0:00:00.382) 0:00:53.101 *********** >2018-07-13 20:47:29,932 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:29,955 p=5867 u=mistral | TASK [Remove deployed file for ControllerArtifactsDeploy when previous deployment failed] *** >2018-07-13 20:47:29,955 p=5867 u=mistral | Friday 13 July 2018 20:47:29 -0400 (0:00:00.041) 0:00:53.143 *********** >2018-07-13 20:47:29,973 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:29,996 p=5867 u=mistral | TASK [Force remove deployed file for ControllerArtifactsDeploy] **************** >2018-07-13 20:47:29,996 p=5867 u=mistral | Friday 13 July 2018 20:47:29 -0400 (0:00:00.041) 0:00:53.184 *********** >2018-07-13 20:47:30,012 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:30,034 p=5867 u=mistral | TASK [Run deployment ControllerArtifactsDeploy] ******************************** >2018-07-13 20:47:30,034 p=5867 u=mistral | Friday 13 July 2018 20:47:30 -0400 (0:00:00.038) 0:00:53.222 *********** >2018-07-13 20:47:30,869 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/d97fea0c-847c-4d6a-93f4-1366d6eee682.notify.json)", "delta": "0:00:00.484551", "end": "2018-07-13 20:47:30.456445", "rc": 0, "start": "2018-07-13 20:47:29.971894", "stderr": "[2018-07-13 20:47:29,997] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/d97fea0c-847c-4d6a-93f4-1366d6eee682.json\n[2018-07-13 20:47:30,027] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-07-13 20:47:30,027] (heat-config) [DEBUG] [2018-07-13 20:47:30,018] (heat-config) [INFO] artifact_urls=\n[2018-07-13 20:47:30,018] (heat-config) [INFO] deploy_server_id=d78a7938-6926-47b3-9d46-a978a2832924\n[2018-07-13 20:47:30,018] (heat-config) [INFO] deploy_action=CREATE\n[2018-07-13 20:47:30,018] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-nwaxeaw6ioho-ControllerArtifactsDeploy-2cki5cs6v2ud-0-jqsi2a4q77ao/ab32b0ce-de2b-48c9-b4d1-5bf87ec3bd67\n[2018-07-13 20:47:30,018] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-07-13 20:47:30,018] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-07-13 20:47:30,019] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/d97fea0c-847c-4d6a-93f4-1366d6eee682\n[2018-07-13 20:47:30,024] (heat-config) [INFO] No artifact_urls was set. Skipping...\n\n[2018-07-13 20:47:30,024] (heat-config) [DEBUG] \n[2018-07-13 20:47:30,024] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/d97fea0c-847c-4d6a-93f4-1366d6eee682\n\n[2018-07-13 20:47:30,027] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-07-13 20:47:30,027] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d97fea0c-847c-4d6a-93f4-1366d6eee682.json < /var/lib/heat-config/deployed/d97fea0c-847c-4d6a-93f4-1366d6eee682.notify.json\n[2018-07-13 20:47:30,450] (heat-config) [INFO] \n[2018-07-13 20:47:30,450] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:47:29,997] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/d97fea0c-847c-4d6a-93f4-1366d6eee682.json", "[2018-07-13 20:47:30,027] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-07-13 20:47:30,027] (heat-config) [DEBUG] [2018-07-13 20:47:30,018] (heat-config) [INFO] artifact_urls=", "[2018-07-13 20:47:30,018] (heat-config) [INFO] deploy_server_id=d78a7938-6926-47b3-9d46-a978a2832924", "[2018-07-13 20:47:30,018] (heat-config) [INFO] deploy_action=CREATE", "[2018-07-13 20:47:30,018] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-nwaxeaw6ioho-ControllerArtifactsDeploy-2cki5cs6v2ud-0-jqsi2a4q77ao/ab32b0ce-de2b-48c9-b4d1-5bf87ec3bd67", "[2018-07-13 20:47:30,018] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-07-13 20:47:30,018] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-07-13 20:47:30,019] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/d97fea0c-847c-4d6a-93f4-1366d6eee682", "[2018-07-13 20:47:30,024] (heat-config) [INFO] No artifact_urls was set. Skipping...", "", "[2018-07-13 20:47:30,024] (heat-config) [DEBUG] ", "[2018-07-13 20:47:30,024] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/d97fea0c-847c-4d6a-93f4-1366d6eee682", "", "[2018-07-13 20:47:30,027] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-07-13 20:47:30,027] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d97fea0c-847c-4d6a-93f4-1366d6eee682.json < /var/lib/heat-config/deployed/d97fea0c-847c-4d6a-93f4-1366d6eee682.notify.json", "[2018-07-13 20:47:30,450] (heat-config) [INFO] ", "[2018-07-13 20:47:30,450] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:47:30,892 p=5867 u=mistral | TASK [Output for ControllerArtifactsDeploy] ************************************ >2018-07-13 20:47:30,892 p=5867 u=mistral | Friday 13 July 2018 20:47:30 -0400 (0:00:00.858) 0:00:54.080 *********** >2018-07-13 20:47:30,939 p=5867 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ 
> "[2018-07-13 20:47:29,997] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/d97fea0c-847c-4d6a-93f4-1366d6eee682.json", > "[2018-07-13 20:47:30,027] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-07-13 20:47:30,027] (heat-config) [DEBUG] [2018-07-13 20:47:30,018] (heat-config) [INFO] artifact_urls=", > "[2018-07-13 20:47:30,018] (heat-config) [INFO] deploy_server_id=d78a7938-6926-47b3-9d46-a978a2832924", > "[2018-07-13 20:47:30,018] (heat-config) [INFO] deploy_action=CREATE", > "[2018-07-13 20:47:30,018] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-nwaxeaw6ioho-ControllerArtifactsDeploy-2cki5cs6v2ud-0-jqsi2a4q77ao/ab32b0ce-de2b-48c9-b4d1-5bf87ec3bd67", > "[2018-07-13 20:47:30,018] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-07-13 20:47:30,018] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-07-13 20:47:30,019] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/d97fea0c-847c-4d6a-93f4-1366d6eee682", > "[2018-07-13 20:47:30,024] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", > "", > "[2018-07-13 20:47:30,024] (heat-config) [DEBUG] ", > "[2018-07-13 20:47:30,024] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/d97fea0c-847c-4d6a-93f4-1366d6eee682", > "", > "[2018-07-13 20:47:30,027] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-07-13 20:47:30,027] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d97fea0c-847c-4d6a-93f4-1366d6eee682.json < /var/lib/heat-config/deployed/d97fea0c-847c-4d6a-93f4-1366d6eee682.notify.json", > "[2018-07-13 20:47:30,450] (heat-config) [INFO] ", > "[2018-07-13 20:47:30,450] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:47:30,962 p=5867 u=mistral | TASK [Check-mode for Run deployment ControllerArtifactsDeploy] ***************** >2018-07-13 20:47:30,962 p=5867 u=mistral | Friday 13 July 2018 20:47:30 -0400 (0:00:00.069) 0:00:54.150 *********** >2018-07-13 20:47:30,976 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:30,997 p=5867 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:47:30,997 p=5867 u=mistral | Friday 13 July 2018 20:47:30 -0400 (0:00:00.035) 0:00:54.185 *********** >2018-07-13 20:47:31,093 p=5867 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "a1478809-bdf9-4392-929d-2976d31bc216"}, "changed": false} >2018-07-13 20:47:31,116 p=5867 u=mistral | TASK [Render deployment file for ControllerHostPrepDeployment] ***************** >2018-07-13 20:47:31,116 p=5867 u=mistral | Friday 13 July 2018 20:47:31 -0400 (0:00:00.118) 0:00:54.304 *********** >2018-07-13 20:47:31,813 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "d9797afa43b358dc1cc2778a0ada6a6c5e239d57", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerHostPrepDeployment-a1478809-bdf9-4392-929d-2976d31bc216", 
"gid": 0, "group": "root", "md5sum": "9cf9ec08e8eae6973faea45102dd003f", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 45917, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529251.22-274373380911485/source", "state": "file", "uid": 0} >2018-07-13 20:47:31,837 p=5867 u=mistral | TASK [Check if deployed file exists for ControllerHostPrepDeployment] ********** >2018-07-13 20:47:31,837 p=5867 u=mistral | Friday 13 July 2018 20:47:31 -0400 (0:00:00.720) 0:00:55.025 *********** >2018-07-13 20:47:32,184 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:47:32,207 p=5867 u=mistral | TASK [Check previous deployment rc for ControllerHostPrepDeployment] *********** >2018-07-13 20:47:32,207 p=5867 u=mistral | Friday 13 July 2018 20:47:32 -0400 (0:00:00.370) 0:00:55.395 *********** >2018-07-13 20:47:32,225 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:32,247 p=5867 u=mistral | TASK [Remove deployed file for ControllerHostPrepDeployment when previous deployment failed] *** >2018-07-13 20:47:32,247 p=5867 u=mistral | Friday 13 July 2018 20:47:32 -0400 (0:00:00.040) 0:00:55.435 *********** >2018-07-13 20:47:32,265 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:32,287 p=5867 u=mistral | TASK [Force remove deployed file for ControllerHostPrepDeployment] ************* >2018-07-13 20:47:32,287 p=5867 u=mistral | Friday 13 July 2018 20:47:32 -0400 (0:00:00.039) 0:00:55.475 *********** >2018-07-13 20:47:32,304 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:47:32,326 p=5867 u=mistral | TASK [Run deployment ControllerHostPrepDeployment] ***************************** >2018-07-13 20:47:32,326 p=5867 u=mistral | Friday 13 July 2018 20:47:32 
-0400 (0:00:00.039) 0:00:55.514 *********** >2018-07-13 20:48:01,948 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/a1478809-bdf9-4392-929d-2976d31bc216.notify.json)", "delta": "0:00:30.190250", "end": "2018-07-13 20:48:02.445968", "rc": 0, "start": "2018-07-13 20:47:32.255718", "stderr": "[2018-07-13 20:47:32,280] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/a1478809-bdf9-4392-929d-2976d31bc216.json\n[2018-07-13 20:48:02,029] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/aodh)\\nchanged: [localhost] => (item=/var/log/containers/httpd/aodh-api)\\n\\nTASK [aodh logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"b6cf6dbe054f430c33d39c1a1a88593536d6e659\\\", \\\"msg\\\": \\\"Destination directory /var/log/aodh does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/cinder)\\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\\n\\nTASK [cinder logs readme] ******************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\\\", \\\"msg\\\": \\\"Destination directory /var/log/cinder does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/cinder)\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\nok: [localhost] => (item=/var/lib/cinder)\\n\\nTASK [cinder_enable_iscsi_backend fact] ****************************************\\nok: [localhost]\\n\\nTASK [cinder create LVM volume group dd] ***************************************\\nskipping: [localhost]\\n\\nTASK [cinder create LVM volume group] ******************************************\\nskipping: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/glance)\\n\\nTASK [glance logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"e368ae3272baeb19e1113009ea5dae00e797c919\\\", \\\"msg\\\": \\\"Destination directory /var/log/glance does not exist\\\"}\\n...ignoring\\n\\nTASK [set_fact] ****************************************************************\\nskipping: [localhost]\\n\\nTASK [file] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [stat] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [copy] ********************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \\n\\nTASK [mount] *******************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \\n\\nTASK [Mount NFS on host] *******************************************************\\nskipping: [localhost]\\n\\nTASK [Mount Node Staging Location] *********************************************\\nskipping: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\\n\\nTASK [gnocchi logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\\\", \\\"msg\\\": \\\"Destination directory /var/log/gnocchi does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [get parameters] **********************************************************\\nok: [localhost]\\n\\nTASK [get DeployedSSLCertificatePath attributes] *******************************\\nskipping: [localhost]\\n\\nTASK [Assign bootstrap node] ***************************************************\\nskipping: [localhost]\\n\\nTASK [set is_bootstrap_node fact] **********************************************\\nskipping: [localhost]\\n\\nTASK [get haproxy status] ******************************************************\\nskipping: [localhost]\\n\\nTASK [get pacemaker status] ****************************************************\\nskipping: [localhost]\\n\\nTASK [get docker status] *******************************************************\\nskipping: [localhost]\\n\\nTASK [get container_id] ********************************************************\\nskipping: [localhost]\\n\\nTASK [get pcs resource name for haproxy container] *****************************\\nskipping: [localhost]\\n\\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\\nskipping: [localhost]\\n\\nTASK [push certificate content] ************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate ownership] ***********************************************\\nskipping: [localhost]\\n\\nTASK [reload haproxy if enabled] ***********************************************\\nskipping: [localhost]\\n\\nTASK [restart pacemaker resource for haproxy] **********************************\\nskipping: [localhost]\\n\\nTASK [set kolla_dir fact] ******************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate group on host via container] 
*****************************\\nskipping: [localhost]\\n\\nTASK [copy certificate from kolla directory to final location] *****************\\nskipping: [localhost]\\n\\nTASK [send restart order to haproxy container] *********************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/haproxy)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\\n\\nTASK [heat logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"d30ca3bda176434d31659e7379616dd162ddb246\\\", \\\"msg\\\": \\\"Destination directory /var/log/heat does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/horizon)\\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\\n\\nTASK [horizon logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ac324739761cb36b925d6e309482e26f7fe49b91\\\", \\\"msg\\\": \\\"Destination directory /var/log/horizon does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/keystone)\\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\\n\\nTASK [keystone logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"910be882addb6df99267e9bd303f6d9bf658562e\\\", \\\"msg\\\": \\\"Destination directory /var/log/keystone does not exist\\\"}\\n...ignoring\\n\\nTASK [memcached logs readme] ***************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/log/containers/mysql)\\nok: [localhost] => (item=/var/lib/mysql)\\n\\nTASK [mysql logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [create /var/lib/neutron] *************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\\n\\nTASK [NTP settings] ************************************************************\\nok: [localhost]\\n\\nTASK [Install ntpdate] *********************************************************\\nskipping: [localhost]\\n\\nTASK [Ensure system is NTP time synced] ****************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/panko)\\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\\n\\nTASK [panko logs readme] *******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"903397bbd82e9b1f53087e3d7e8975d851857ce2\\\", \\\"msg\\\": \\\"Destination directory /var/log/panko does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/rabbitmq)\\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\\n\\nTASK [rabbitmq logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ee241f2199f264c9d0f384cf389fe255e8bf8a77\\\", \\\"msg\\\": \\\"Destination directory /var/log/rabbitmq does not exist\\\"}\\n...ignoring\\n\\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/redis)\\nchanged: [localhost] => (item=/var/log/containers/redis)\\nok: [localhost] => (item=/var/run/redis)\\n\\nTASK [redis logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create /var/lib/sahara] **************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent sahara logs directory] *********************************\\nchanged: [localhost]\\n\\nTASK [sahara logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"b0212a1177fa4a88502d17a1cbc31198040cf047\\\", \\\"msg\\\": \\\"Destination directory /var/log/sahara does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/srv/node)\\nchanged: [localhost] => (item=/var/log/swift)\\n\\nTASK [Create swift logging symlink] ********************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/srv/node)\\nok: [localhost] => (item=/var/log/swift)\\nok: [localhost] => (item=/var/log/containers)\\n\\nTASK [Set swift_use_local_disks fact] ******************************************\\nok: [localhost]\\n\\nTASK [Create Swift d1 directory if needed] *************************************\\nchanged: [localhost]\\n\\nTASK [swift logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [Format SwiftRawDisks] ****************************************************\\n\\nTASK [Mount devices defined in SwiftRawDisks] **********************************\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=61 changed=33 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-07-13 20:48:02,029] (heat-config) [DEBUG] [2018-07-13 20:47:32,302] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/a1478809-bdf9-4392-929d-2976d31bc216_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/a1478809-bdf9-4392-929d-2976d31bc216_variables.json\n[2018-07-13 20:48:02,025] (heat-config) [INFO] Return code 
0\n[2018-07-13 20:48:02,025] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/aodh)\nchanged: [localhost] => (item=/var/log/containers/httpd/aodh-api)\n\nTASK [aodh logs readme] ********************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"b6cf6dbe054f430c33d39c1a1a88593536d6e659\", \"msg\": \"Destination directory /var/log/aodh does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost]\n\nTASK [ceilometer logs readme] **************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/cinder)\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\n\nTASK [cinder logs readme] ******************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\", \"msg\": \"Destination directory /var/log/cinder does not exist\"}\n...ignoring\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/lib/cinder)\nok: [localhost] => (item=/var/log/containers/cinder)\n\nTASK [ensure ceph configurations exist] ****************************************\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/log/containers/cinder)\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/log/containers/cinder)\nok: [localhost] => (item=/var/lib/cinder)\n\nTASK [cinder_enable_iscsi_backend fact] ****************************************\nok: [localhost]\n\nTASK [cinder create LVM volume group dd] ***************************************\nskipping: [localhost]\n\nTASK [cinder create LVM volume group] ******************************************\nskipping: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/glance)\n\nTASK [glance logs readme] ******************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"e368ae3272baeb19e1113009ea5dae00e797c919\", \"msg\": \"Destination directory /var/log/glance does not exist\"}\n...ignoring\n\nTASK [set_fact] ****************************************************************\nskipping: [localhost]\n\nTASK [file] ********************************************************************\nskipping: [localhost]\n\nTASK [stat] ********************************************************************\nskipping: [localhost]\n\nTASK [copy] ********************************************************************\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \n\nTASK [mount] *******************************************************************\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \n\nTASK [Mount NFS on host] *******************************************************\nskipping: [localhost]\n\nTASK [Mount Node Staging Location] *********************************************\nskipping: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\n\nTASK [gnocchi logs readme] *****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\", \"msg\": \"Destination directory /var/log/gnocchi does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [get parameters] **********************************************************\nok: [localhost]\n\nTASK [get DeployedSSLCertificatePath attributes] *******************************\nskipping: [localhost]\n\nTASK [Assign bootstrap node] ***************************************************\nskipping: [localhost]\n\nTASK [set is_bootstrap_node fact] **********************************************\nskipping: [localhost]\n\nTASK [get haproxy status] ******************************************************\nskipping: [localhost]\n\nTASK [get pacemaker status] ****************************************************\nskipping: [localhost]\n\nTASK [get docker status] *******************************************************\nskipping: [localhost]\n\nTASK [get container_id] ********************************************************\nskipping: [localhost]\n\nTASK [get pcs resource name for haproxy container] *****************************\nskipping: [localhost]\n\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\nskipping: [localhost]\n\nTASK [push certificate content] ************************************************\nskipping: [localhost]\n\nTASK [set certificate ownership] ***********************************************\nskipping: [localhost]\n\nTASK [reload haproxy if enabled] ***********************************************\nskipping: [localhost]\n\nTASK [restart pacemaker resource for haproxy] **********************************\nskipping: [localhost]\n\nTASK [set kolla_dir fact] ******************************************************\nskipping: [localhost]\n\nTASK [set certificate group on host via container] *****************************\nskipping: [localhost]\n\nTASK [copy certificate 
from kolla directory to final location] *****************\nskipping: [localhost]\n\nTASK [send restart order to haproxy container] *********************************\nskipping: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/lib/haproxy)\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/heat)\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\n\nTASK [heat logs readme] ********************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"d30ca3bda176434d31659e7379616dd162ddb246\", \"msg\": \"Destination directory /var/log/heat does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost] => (item=/var/log/containers/heat)\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/horizon)\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\n\nTASK [horizon logs readme] *****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ac324739761cb36b925d6e309482e26f7fe49b91\", \"msg\": \"Destination directory /var/log/horizon does not exist\"}\n...ignoring\n\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\nok: [localhost]\n\nTASK [Stop and disable iscsid.socket service] **********************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/keystone)\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\n\nTASK [keystone logs readme] ****************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"910be882addb6df99267e9bd303f6d9bf658562e\", \"msg\": \"Destination directory /var/log/keystone does not exist\"}\n...ignoring\n\nTASK [memcached logs readme] ***************************************************\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/log/containers/mysql)\nok: [localhost] => (item=/var/lib/mysql)\n\nTASK [mysql logs readme] *******************************************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/neutron)\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\n\nTASK [neutron logs readme] *****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost] => (item=/var/log/containers/neutron)\n\nTASK [create /var/lib/neutron] *************************************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/nova)\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\n\nTASK [nova logs readme] ********************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost] => (item=/var/log/containers/nova)\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\n\nTASK [NTP settings] ************************************************************\nok: [localhost]\n\nTASK [Install ntpdate] *********************************************************\nskipping: [localhost]\n\nTASK [Ensure system is NTP time synced] ****************************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/panko)\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\n\nTASK [panko logs readme] *******************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"903397bbd82e9b1f53087e3d7e8975d851857ce2\", \"msg\": \"Destination directory /var/log/panko does not exist\"}\n...ignoring\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/lib/rabbitmq)\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\n\nTASK [rabbitmq logs readme] ****************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ee241f2199f264c9d0f384cf389fe255e8bf8a77\", \"msg\": \"Destination directory /var/log/rabbitmq does not exist\"}\n...ignoring\n\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/lib/redis)\nchanged: [localhost] => (item=/var/log/containers/redis)\nok: [localhost] => (item=/var/run/redis)\n\nTASK [redis logs readme] *******************************************************\nchanged: [localhost]\n\nTASK [create /var/lib/sahara] **************************************************\nchanged: [localhost]\n\nTASK [create persistent sahara logs directory] *********************************\nchanged: [localhost]\n\nTASK [sahara logs readme] ******************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"b0212a1177fa4a88502d17a1cbc31198040cf047\", \"msg\": \"Destination directory /var/log/sahara does not exist\"}\n...ignoring\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/srv/node)\nchanged: [localhost] => (item=/var/log/swift)\n\nTASK [Create swift logging symlink] ********************************************\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/srv/node)\nok: [localhost] => (item=/var/log/swift)\nok: [localhost] => (item=/var/log/containers)\n\nTASK [Set swift_use_local_disks fact] ******************************************\nok: [localhost]\n\nTASK [Create Swift d1 directory if needed] *************************************\nchanged: [localhost]\n\nTASK [swift logs readme] *******************************************************\nchanged: [localhost]\n\nTASK [Format SwiftRawDisks] ****************************************************\n\nTASK [Mount devices defined in SwiftRawDisks] **********************************\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=61 changed=33 unreachable=0 failed=0 \n\n\n[2018-07-13 20:48:02,025] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/a1478809-bdf9-4392-929d-2976d31bc216_playbook.yaml\n\n[2018-07-13 20:48:02,029] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-07-13 20:48:02,030] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a1478809-bdf9-4392-929d-2976d31bc216.json < /var/lib/heat-config/deployed/a1478809-bdf9-4392-929d-2976d31bc216.notify.json\n[2018-07-13 20:48:02,439] 
(heat-config) [INFO] \n[2018-07-13 20:48:02,439] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:47:32,280] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/a1478809-bdf9-4392-929d-2976d31bc216.json", "[2018-07-13 20:48:02,029] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/aodh)\\nchanged: [localhost] => (item=/var/log/containers/httpd/aodh-api)\\n\\nTASK [aodh logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"b6cf6dbe054f430c33d39c1a1a88593536d6e659\\\", \\\"msg\\\": \\\"Destination directory /var/log/aodh does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/cinder)\\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\\n\\nTASK [cinder logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\\\", \\\"msg\\\": \\\"Destination directory /var/log/cinder does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/cinder)\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\nok: [localhost] => (item=/var/lib/cinder)\\n\\nTASK [cinder_enable_iscsi_backend fact] ****************************************\\nok: [localhost]\\n\\nTASK [cinder create LVM volume group dd] ***************************************\\nskipping: [localhost]\\n\\nTASK [cinder create LVM volume group] ******************************************\\nskipping: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/glance)\\n\\nTASK [glance logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"e368ae3272baeb19e1113009ea5dae00e797c919\\\", \\\"msg\\\": \\\"Destination directory /var/log/glance does not exist\\\"}\\n...ignoring\\n\\nTASK [set_fact] ****************************************************************\\nskipping: [localhost]\\n\\nTASK [file] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [stat] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [copy] ********************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \\n\\nTASK [mount] *******************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \\n\\nTASK [Mount NFS on host] *******************************************************\\nskipping: [localhost]\\n\\nTASK [Mount Node Staging Location] *********************************************\\nskipping: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\\n\\nTASK [gnocchi logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\\\", \\\"msg\\\": \\\"Destination directory /var/log/gnocchi does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [get parameters] **********************************************************\\nok: [localhost]\\n\\nTASK [get DeployedSSLCertificatePath attributes] *******************************\\nskipping: [localhost]\\n\\nTASK [Assign bootstrap node] ***************************************************\\nskipping: [localhost]\\n\\nTASK [set is_bootstrap_node fact] **********************************************\\nskipping: [localhost]\\n\\nTASK [get haproxy status] ******************************************************\\nskipping: [localhost]\\n\\nTASK [get pacemaker status] ****************************************************\\nskipping: [localhost]\\n\\nTASK [get docker status] *******************************************************\\nskipping: [localhost]\\n\\nTASK [get container_id] ********************************************************\\nskipping: [localhost]\\n\\nTASK [get pcs resource name for haproxy container] *****************************\\nskipping: [localhost]\\n\\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\\nskipping: [localhost]\\n\\nTASK [push certificate content] ************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate ownership] ***********************************************\\nskipping: [localhost]\\n\\nTASK [reload haproxy if enabled] ***********************************************\\nskipping: [localhost]\\n\\nTASK [restart pacemaker resource for haproxy] **********************************\\nskipping: [localhost]\\n\\nTASK [set kolla_dir fact] ******************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate group on host via container] 
*****************************\\nskipping: [localhost]\\n\\nTASK [copy certificate from kolla directory to final location] *****************\\nskipping: [localhost]\\n\\nTASK [send restart order to haproxy container] *********************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/haproxy)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\\n\\nTASK [heat logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"d30ca3bda176434d31659e7379616dd162ddb246\\\", \\\"msg\\\": \\\"Destination directory /var/log/heat does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/horizon)\\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\\n\\nTASK [horizon logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ac324739761cb36b925d6e309482e26f7fe49b91\\\", \\\"msg\\\": \\\"Destination directory /var/log/horizon does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/keystone)\\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\\n\\nTASK [keystone logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"910be882addb6df99267e9bd303f6d9bf658562e\\\", \\\"msg\\\": \\\"Destination directory /var/log/keystone does not exist\\\"}\\n...ignoring\\n\\nTASK [memcached logs readme] ***************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/log/containers/mysql)\\nok: [localhost] => (item=/var/lib/mysql)\\n\\nTASK [mysql logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [create /var/lib/neutron] *************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\\n\\nTASK [NTP settings] ************************************************************\\nok: [localhost]\\n\\nTASK [Install ntpdate] *********************************************************\\nskipping: [localhost]\\n\\nTASK [Ensure system is NTP time synced] ****************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/panko)\\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\\n\\nTASK [panko logs readme] *******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"903397bbd82e9b1f53087e3d7e8975d851857ce2\\\", \\\"msg\\\": \\\"Destination directory /var/log/panko does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/rabbitmq)\\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\\n\\nTASK [rabbitmq logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ee241f2199f264c9d0f384cf389fe255e8bf8a77\\\", \\\"msg\\\": \\\"Destination directory /var/log/rabbitmq does not exist\\\"}\\n...ignoring\\n\\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/redis)\\nchanged: [localhost] => (item=/var/log/containers/redis)\\nok: [localhost] => (item=/var/run/redis)\\n\\nTASK [redis logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create /var/lib/sahara] **************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent sahara logs directory] *********************************\\nchanged: [localhost]\\n\\nTASK [sahara logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"b0212a1177fa4a88502d17a1cbc31198040cf047\\\", \\\"msg\\\": \\\"Destination directory /var/log/sahara does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/srv/node)\\nchanged: [localhost] => (item=/var/log/swift)\\n\\nTASK [Create swift logging symlink] ********************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/srv/node)\\nok: [localhost] => (item=/var/log/swift)\\nok: [localhost] => (item=/var/log/containers)\\n\\nTASK [Set swift_use_local_disks fact] ******************************************\\nok: [localhost]\\n\\nTASK [Create Swift d1 directory if needed] *************************************\\nchanged: [localhost]\\n\\nTASK [swift logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [Format SwiftRawDisks] ****************************************************\\n\\nTASK [Mount devices defined in SwiftRawDisks] **********************************\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=61 changed=33 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-07-13 20:48:02,029] (heat-config) [DEBUG] [2018-07-13 20:47:32,302] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/a1478809-bdf9-4392-929d-2976d31bc216_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/a1478809-bdf9-4392-929d-2976d31bc216_variables.json", "[2018-07-13 20:48:02,025] (heat-config) [INFO] Return code 
0", "[2018-07-13 20:48:02,025] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/aodh)", "changed: [localhost] => (item=/var/log/containers/httpd/aodh-api)", "", "TASK [aodh logs readme] ********************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"b6cf6dbe054f430c33d39c1a1a88593536d6e659\", \"msg\": \"Destination directory /var/log/aodh does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost]", "", "TASK [ceilometer logs readme] **************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/cinder)", "changed: [localhost] => (item=/var/log/containers/httpd/cinder-api)", "", "TASK [cinder logs readme] ******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\", \"msg\": \"Destination directory /var/log/cinder does not exist\"}", "...ignoring", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/lib/cinder)", "ok: [localhost] => (item=/var/log/containers/cinder)", "", "TASK [ensure ceph configurations exist] ****************************************", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/log/containers/cinder)", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/log/containers/cinder)", "ok: [localhost] => (item=/var/lib/cinder)", "", "TASK [cinder_enable_iscsi_backend fact] ****************************************", "ok: [localhost]", "", "TASK [cinder create LVM volume group dd] ***************************************", "skipping: [localhost]", "", "TASK [cinder create LVM volume group] ******************************************", "skipping: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/glance)", "", "TASK [glance logs readme] ******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"e368ae3272baeb19e1113009ea5dae00e797c919\", \"msg\": \"Destination directory /var/log/glance does not exist\"}", "...ignoring", "", "TASK [set_fact] ****************************************************************", "skipping: [localhost]", "", "TASK [file] ********************************************************************", "skipping: [localhost]", "", "TASK [stat] ********************************************************************", "skipping: [localhost]", "", "TASK [copy] ********************************************************************", "skipping: [localhost] => (item={u'NETAPP_SHARE': u''}) ", "", "TASK [mount] *******************************************************************", "skipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) ", "", "TASK [Mount NFS on host] *******************************************************", "skipping: [localhost]", "", "TASK [Mount Node Staging Location] *********************************************", "skipping: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/gnocchi)", "changed: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)", "", "TASK [gnocchi logs readme] *****************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\", \"msg\": \"Destination directory /var/log/gnocchi does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [get parameters] **********************************************************", "ok: [localhost]", "", "TASK [get DeployedSSLCertificatePath attributes] *******************************", "skipping: [localhost]", "", "TASK [Assign bootstrap node] ***************************************************", "skipping: [localhost]", "", "TASK [set is_bootstrap_node fact] **********************************************", "skipping: [localhost]", "", "TASK [get haproxy status] ******************************************************", "skipping: [localhost]", "", "TASK [get pacemaker status] ****************************************************", "skipping: [localhost]", "", "TASK [get docker status] *******************************************************", "skipping: [localhost]", "", "TASK [get container_id] ********************************************************", "skipping: [localhost]", "", "TASK [get pcs resource name for haproxy container] *****************************", "skipping: [localhost]", "", "TASK [remove DeployedSSLCertificatePath if is dir] *****************************", "skipping: [localhost]", "", "TASK [push certificate content] ************************************************", "skipping: [localhost]", "", "TASK [set certificate ownership] ***********************************************", "skipping: [localhost]", "", "TASK [reload haproxy if enabled] ***********************************************", "skipping: [localhost]", "", "TASK [restart pacemaker resource for haproxy] **********************************", "skipping: [localhost]", "", "TASK [set kolla_dir fact] ******************************************************", "skipping: [localhost]", "", "TASK [set certificate group 
on host via container] *****************************", "skipping: [localhost]", "", "TASK [copy certificate from kolla directory to final location] *****************", "skipping: [localhost]", "", "TASK [send restart order to haproxy container] *********************************", "skipping: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/lib/haproxy)", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/heat)", "changed: [localhost] => (item=/var/log/containers/httpd/heat-api)", "", "TASK [heat logs readme] ********************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"d30ca3bda176434d31659e7379616dd162ddb246\", \"msg\": \"Destination directory /var/log/heat does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost] => (item=/var/log/containers/heat)", "changed: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/horizon)", "changed: [localhost] => (item=/var/log/containers/httpd/horizon)", "", "TASK [horizon logs readme] *****************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ac324739761cb36b925d6e309482e26f7fe49b91\", \"msg\": \"Destination directory /var/log/horizon does not exist\"}", "...ignoring", "", "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", "ok: [localhost]", "", "TASK [Stop and disable iscsid.socket service] **********************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/keystone)", "changed: [localhost] => (item=/var/log/containers/httpd/keystone)", "", "TASK [keystone logs readme] ****************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"910be882addb6df99267e9bd303f6d9bf658562e\", \"msg\": \"Destination directory /var/log/keystone does not exist\"}", "...ignoring", "", "TASK [memcached logs readme] ***************************************************", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/log/containers/mysql)", "ok: [localhost] => (item=/var/lib/mysql)", "", "TASK [mysql logs readme] *******************************************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/neutron)", "changed: [localhost] => (item=/var/log/containers/httpd/neutron-api)", "", "TASK [neutron logs readme] *****************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost] => (item=/var/log/containers/neutron)", "", "TASK [create /var/lib/neutron] *************************************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/nova)", "changed: [localhost] => (item=/var/log/containers/httpd/nova-api)", "", "TASK [nova logs readme] ********************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost] => (item=/var/log/containers/nova)", "changed: [localhost] => (item=/var/log/containers/httpd/nova-placement)", "", "TASK [NTP settings] ************************************************************", "ok: [localhost]", "", "TASK [Install ntpdate] *********************************************************", "skipping: [localhost]", "", "TASK [Ensure system is NTP time synced] ****************************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/panko)", "changed: [localhost] => (item=/var/log/containers/httpd/panko-api)", "", "TASK [panko logs readme] *******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"903397bbd82e9b1f53087e3d7e8975d851857ce2\", \"msg\": \"Destination directory /var/log/panko does not exist\"}", "...ignoring", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/lib/rabbitmq)", "changed: [localhost] => (item=/var/log/containers/rabbitmq)", "", "TASK [rabbitmq logs readme] ****************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ee241f2199f264c9d0f384cf389fe255e8bf8a77\", \"msg\": \"Destination directory /var/log/rabbitmq does not exist\"}", "...ignoring", "", "TASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/lib/redis)", "changed: [localhost] => (item=/var/log/containers/redis)", "ok: [localhost] => (item=/var/run/redis)", "", "TASK [redis logs readme] *******************************************************", "changed: [localhost]", "", "TASK [create /var/lib/sahara] **************************************************", "changed: [localhost]", "", "TASK [create persistent sahara logs directory] *********************************", "changed: [localhost]", "", "TASK [sahara logs readme] ******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"b0212a1177fa4a88502d17a1cbc31198040cf047\", \"msg\": \"Destination directory /var/log/sahara does not exist\"}", "...ignoring", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/srv/node)", "changed: [localhost] => (item=/var/log/swift)", "", "TASK [Create swift logging symlink] ********************************************", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/srv/node)", "ok: [localhost] => (item=/var/log/swift)", "ok: [localhost] => (item=/var/log/containers)", "", "TASK [Set swift_use_local_disks fact] ******************************************", "ok: [localhost]", "", "TASK [Create Swift d1 directory if needed] *************************************", "changed: [localhost]", "", "TASK [swift logs readme] *******************************************************", "changed: [localhost]", "", "TASK [Format SwiftRawDisks] ****************************************************", "", "TASK [Mount devices defined in SwiftRawDisks] **********************************", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=61 changed=33 unreachable=0 failed=0 ", "", "", "[2018-07-13 20:48:02,025] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/a1478809-bdf9-4392-929d-2976d31bc216_playbook.yaml", "", "[2018-07-13 20:48:02,029] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-07-13 20:48:02,030] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a1478809-bdf9-4392-929d-2976d31bc216.json < 
/var/lib/heat-config/deployed/a1478809-bdf9-4392-929d-2976d31bc216.notify.json", "[2018-07-13 20:48:02,439] (heat-config) [INFO] ", "[2018-07-13 20:48:02,439] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:48:01,978 p=5867 u=mistral | TASK [Output for ControllerHostPrepDeployment] ********************************* >2018-07-13 20:48:01,979 p=5867 u=mistral | Friday 13 July 2018 20:48:01 -0400 (0:00:29.652) 0:01:25.166 *********** >2018-07-13 20:48:02,042 p=5867 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:47:32,280] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/a1478809-bdf9-4392-929d-2976d31bc216.json", > "[2018-07-13 20:48:02,029] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/aodh)\\nchanged: [localhost] => (item=/var/log/containers/httpd/aodh-api)\\n\\nTASK [aodh logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"b6cf6dbe054f430c33d39c1a1a88593536d6e659\\\", \\\"msg\\\": \\\"Destination directory /var/log/aodh does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/cinder)\\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\\n\\nTASK [cinder logs readme] ******************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\\\", \\\"msg\\\": \\\"Destination directory /var/log/cinder does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/cinder)\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\nok: [localhost] => (item=/var/lib/cinder)\\n\\nTASK [cinder_enable_iscsi_backend fact] ****************************************\\nok: [localhost]\\n\\nTASK [cinder create LVM volume group dd] ***************************************\\nskipping: [localhost]\\n\\nTASK [cinder create LVM volume group] ******************************************\\nskipping: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/glance)\\n\\nTASK [glance logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"e368ae3272baeb19e1113009ea5dae00e797c919\\\", \\\"msg\\\": \\\"Destination directory /var/log/glance does not exist\\\"}\\n...ignoring\\n\\nTASK [set_fact] ****************************************************************\\nskipping: [localhost]\\n\\nTASK [file] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [stat] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [copy] ********************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \\n\\nTASK [mount] *******************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \\n\\nTASK [Mount NFS on host] *******************************************************\\nskipping: [localhost]\\n\\nTASK [Mount Node Staging Location] *********************************************\\nskipping: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\\n\\nTASK [gnocchi logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\\\", \\\"msg\\\": \\\"Destination directory /var/log/gnocchi does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [get parameters] **********************************************************\\nok: [localhost]\\n\\nTASK [get DeployedSSLCertificatePath attributes] *******************************\\nskipping: [localhost]\\n\\nTASK [Assign bootstrap node] ***************************************************\\nskipping: [localhost]\\n\\nTASK [set is_bootstrap_node fact] **********************************************\\nskipping: [localhost]\\n\\nTASK [get haproxy status] ******************************************************\\nskipping: [localhost]\\n\\nTASK [get pacemaker status] ****************************************************\\nskipping: [localhost]\\n\\nTASK [get docker status] *******************************************************\\nskipping: [localhost]\\n\\nTASK [get container_id] ********************************************************\\nskipping: [localhost]\\n\\nTASK [get pcs resource name for haproxy container] *****************************\\nskipping: [localhost]\\n\\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\\nskipping: [localhost]\\n\\nTASK [push certificate content] ************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate ownership] ***********************************************\\nskipping: [localhost]\\n\\nTASK [reload haproxy if enabled] ***********************************************\\nskipping: [localhost]\\n\\nTASK [restart pacemaker resource for haproxy] **********************************\\nskipping: [localhost]\\n\\nTASK [set kolla_dir fact] ******************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate group on host via container] 
*****************************\\nskipping: [localhost]\\n\\nTASK [copy certificate from kolla directory to final location] *****************\\nskipping: [localhost]\\n\\nTASK [send restart order to haproxy container] *********************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/haproxy)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\\n\\nTASK [heat logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"d30ca3bda176434d31659e7379616dd162ddb246\\\", \\\"msg\\\": \\\"Destination directory /var/log/heat does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/horizon)\\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\\n\\nTASK [horizon logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ac324739761cb36b925d6e309482e26f7fe49b91\\\", \\\"msg\\\": \\\"Destination directory /var/log/horizon does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/keystone)\\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\\n\\nTASK [keystone logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"910be882addb6df99267e9bd303f6d9bf658562e\\\", \\\"msg\\\": \\\"Destination directory /var/log/keystone does not exist\\\"}\\n...ignoring\\n\\nTASK [memcached logs readme] ***************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/log/containers/mysql)\\nok: [localhost] => (item=/var/lib/mysql)\\n\\nTASK [mysql logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [create /var/lib/neutron] *************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\\n\\nTASK [NTP settings] ************************************************************\\nok: [localhost]\\n\\nTASK [Install ntpdate] *********************************************************\\nskipping: [localhost]\\n\\nTASK [Ensure system is NTP time synced] ****************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/panko)\\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\\n\\nTASK [panko logs readme] *******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"903397bbd82e9b1f53087e3d7e8975d851857ce2\\\", \\\"msg\\\": \\\"Destination directory /var/log/panko does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/rabbitmq)\\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\\n\\nTASK [rabbitmq logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ee241f2199f264c9d0f384cf389fe255e8bf8a77\\\", \\\"msg\\\": \\\"Destination directory /var/log/rabbitmq does not exist\\\"}\\n...ignoring\\n\\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/redis)\\nchanged: [localhost] => (item=/var/log/containers/redis)\\nok: [localhost] => (item=/var/run/redis)\\n\\nTASK [redis logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create /var/lib/sahara] **************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent sahara logs directory] *********************************\\nchanged: [localhost]\\n\\nTASK [sahara logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"b0212a1177fa4a88502d17a1cbc31198040cf047\\\", \\\"msg\\\": \\\"Destination directory /var/log/sahara does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/srv/node)\\nchanged: [localhost] => (item=/var/log/swift)\\n\\nTASK [Create swift logging symlink] ********************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/srv/node)\\nok: [localhost] => (item=/var/log/swift)\\nok: [localhost] => (item=/var/log/containers)\\n\\nTASK [Set swift_use_local_disks fact] ******************************************\\nok: [localhost]\\n\\nTASK [Create Swift d1 directory if needed] *************************************\\nchanged: [localhost]\\n\\nTASK [swift logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [Format SwiftRawDisks] ****************************************************\\n\\nTASK [Mount devices defined in SwiftRawDisks] **********************************\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=61 changed=33 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-07-13 20:48:02,029] (heat-config) [DEBUG] [2018-07-13 20:47:32,302] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/a1478809-bdf9-4392-929d-2976d31bc216_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/a1478809-bdf9-4392-929d-2976d31bc216_variables.json", > "[2018-07-13 20:48:02,025] (heat-config) [INFO] Return 
code 0", > "[2018-07-13 20:48:02,025] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/aodh)", > "changed: [localhost] => (item=/var/log/containers/httpd/aodh-api)", > "", > "TASK [aodh logs readme] ********************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"b6cf6dbe054f430c33d39c1a1a88593536d6e659\", \"msg\": \"Destination directory /var/log/aodh does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost]", > "", > "TASK [ceilometer logs readme] **************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/cinder)", > "changed: [localhost] => (item=/var/log/containers/httpd/cinder-api)", > "", > "TASK [cinder logs readme] ******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\", \"msg\": \"Destination directory /var/log/cinder does not exist\"}", > "...ignoring", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/lib/cinder)", > "ok: [localhost] => (item=/var/log/containers/cinder)", > "", > "TASK [ensure ceph configurations exist] ****************************************", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/log/containers/cinder)", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/log/containers/cinder)", > "ok: [localhost] => (item=/var/lib/cinder)", > "", > "TASK [cinder_enable_iscsi_backend fact] ****************************************", > "ok: [localhost]", > "", > "TASK [cinder create LVM volume group dd] ***************************************", > "skipping: [localhost]", > "", > "TASK [cinder create LVM volume group] ******************************************", > "skipping: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/glance)", > "", > "TASK [glance logs readme] ******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"e368ae3272baeb19e1113009ea5dae00e797c919\", \"msg\": \"Destination directory /var/log/glance does not exist\"}", > "...ignoring", > "", > "TASK [set_fact] ****************************************************************", > "skipping: [localhost]", > "", > "TASK [file] ********************************************************************", > "skipping: [localhost]", > "", > "TASK [stat] ********************************************************************", > "skipping: [localhost]", > "", > "TASK [copy] ********************************************************************", > "skipping: [localhost] => (item={u'NETAPP_SHARE': u''}) ", > "", > "TASK [mount] *******************************************************************", > "skipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) ", > "", > "TASK [Mount NFS on host] *******************************************************", > "skipping: [localhost]", > "", > "TASK [Mount Node Staging Location] *********************************************", > "skipping: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/gnocchi)", > "changed: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)", > "", > "TASK [gnocchi logs readme] *****************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\", \"msg\": \"Destination directory /var/log/gnocchi does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [get parameters] **********************************************************", > "ok: [localhost]", > "", > "TASK [get DeployedSSLCertificatePath attributes] *******************************", > "skipping: [localhost]", > "", > "TASK [Assign bootstrap node] ***************************************************", > "skipping: [localhost]", > "", > "TASK [set is_bootstrap_node fact] **********************************************", > "skipping: [localhost]", > "", > "TASK [get haproxy status] ******************************************************", > "skipping: [localhost]", > "", > "TASK [get pacemaker status] ****************************************************", > "skipping: [localhost]", > "", > "TASK [get docker status] *******************************************************", > "skipping: [localhost]", > "", > "TASK [get container_id] ********************************************************", > "skipping: [localhost]", > "", > "TASK [get pcs resource name for haproxy container] *****************************", > "skipping: [localhost]", > "", > "TASK [remove DeployedSSLCertificatePath if is dir] *****************************", > "skipping: [localhost]", > "", > "TASK [push certificate content] ************************************************", > "skipping: [localhost]", > "", > "TASK [set certificate ownership] ***********************************************", > "skipping: [localhost]", > "", > "TASK [reload haproxy if enabled] ***********************************************", > "skipping: [localhost]", > "", > "TASK [restart pacemaker resource for haproxy] **********************************", > "skipping: [localhost]", > "", > "TASK [set kolla_dir fact] 
******************************************************", > "skipping: [localhost]", > "", > "TASK [set certificate group on host via container] *****************************", > "skipping: [localhost]", > "", > "TASK [copy certificate from kolla directory to final location] *****************", > "skipping: [localhost]", > "", > "TASK [send restart order to haproxy container] *********************************", > "skipping: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/lib/haproxy)", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/heat)", > "changed: [localhost] => (item=/var/log/containers/httpd/heat-api)", > "", > "TASK [heat logs readme] ********************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"d30ca3bda176434d31659e7379616dd162ddb246\", \"msg\": \"Destination directory /var/log/heat does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost] => (item=/var/log/containers/heat)", > "changed: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/horizon)", > "changed: [localhost] => (item=/var/log/containers/httpd/horizon)", > "", > "TASK [horizon logs readme] *****************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ac324739761cb36b925d6e309482e26f7fe49b91\", \"msg\": \"Destination directory /var/log/horizon does not exist\"}", > "...ignoring", > "", > "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", > "ok: [localhost]", > "", > "TASK [Stop and disable iscsid.socket service] **********************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/keystone)", > "changed: [localhost] => (item=/var/log/containers/httpd/keystone)", > "", > "TASK [keystone logs readme] ****************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"910be882addb6df99267e9bd303f6d9bf658562e\", \"msg\": \"Destination directory /var/log/keystone does not exist\"}", > "...ignoring", > "", > "TASK [memcached logs readme] ***************************************************", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/log/containers/mysql)", > "ok: [localhost] => (item=/var/lib/mysql)", > "", > "TASK [mysql logs readme] *******************************************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/neutron)", > "changed: [localhost] => (item=/var/log/containers/httpd/neutron-api)", > "", > "TASK [neutron logs readme] *****************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost] => (item=/var/log/containers/neutron)", > "", > "TASK [create /var/lib/neutron] *************************************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/nova)", > "changed: [localhost] => (item=/var/log/containers/httpd/nova-api)", > "", > "TASK [nova logs readme] ********************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost] => (item=/var/log/containers/nova)", > "changed: [localhost] => (item=/var/log/containers/httpd/nova-placement)", > "", > "TASK [NTP settings] ************************************************************", > "ok: [localhost]", > "", > "TASK [Install ntpdate] *********************************************************", > "skipping: [localhost]", > "", > "TASK [Ensure system is NTP time synced] ****************************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/panko)", > "changed: [localhost] => (item=/var/log/containers/httpd/panko-api)", > "", > "TASK [panko logs readme] *******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"903397bbd82e9b1f53087e3d7e8975d851857ce2\", \"msg\": \"Destination directory /var/log/panko does not exist\"}", > "...ignoring", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/lib/rabbitmq)", > "changed: [localhost] => (item=/var/log/containers/rabbitmq)", > "", > "TASK [rabbitmq logs readme] ****************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ee241f2199f264c9d0f384cf389fe255e8bf8a77\", \"msg\": \"Destination directory /var/log/rabbitmq does not exist\"}", > "...ignoring", > "", > "TASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/lib/redis)", > "changed: [localhost] => (item=/var/log/containers/redis)", > "ok: [localhost] => (item=/var/run/redis)", > "", > "TASK [redis logs readme] *******************************************************", > "changed: [localhost]", > "", > "TASK [create /var/lib/sahara] **************************************************", > "changed: [localhost]", > "", > "TASK [create persistent sahara logs directory] *********************************", > "changed: [localhost]", > "", > "TASK [sahara logs readme] ******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"b0212a1177fa4a88502d17a1cbc31198040cf047\", \"msg\": \"Destination directory /var/log/sahara does not exist\"}", > "...ignoring", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/srv/node)", > "changed: [localhost] => (item=/var/log/swift)", > "", > "TASK [Create swift logging symlink] ********************************************", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/srv/node)", > "ok: [localhost] => (item=/var/log/swift)", > "ok: [localhost] => (item=/var/log/containers)", > "", > "TASK [Set swift_use_local_disks fact] ******************************************", > "ok: [localhost]", > "", > "TASK [Create Swift d1 directory if needed] *************************************", > "changed: [localhost]", > "", > "TASK [swift logs readme] *******************************************************", > "changed: [localhost]", > "", > "TASK [Format SwiftRawDisks] ****************************************************", > "", > "TASK [Mount devices defined in SwiftRawDisks] **********************************", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=61 changed=33 unreachable=0 failed=0 ", > "", > "", > "[2018-07-13 20:48:02,025] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/a1478809-bdf9-4392-929d-2976d31bc216_playbook.yaml", > "", > "[2018-07-13 20:48:02,029] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-07-13 20:48:02,030] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/a1478809-bdf9-4392-929d-2976d31bc216.json < /var/lib/heat-config/deployed/a1478809-bdf9-4392-929d-2976d31bc216.notify.json", > "[2018-07-13 20:48:02,439] (heat-config) [INFO] ", > "[2018-07-13 20:48:02,439] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:48:02,076 p=5867 u=mistral | TASK [Check-mode for Run deployment ControllerHostPrepDeployment] ************** >2018-07-13 20:48:02,076 p=5867 u=mistral | Friday 13 July 2018 20:48:02 -0400 (0:00:00.097) 0:01:25.264 *********** >2018-07-13 20:48:02,092 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:02,115 p=5867 u=mistral | TASK [include_tasks] *********************************************************** >2018-07-13 20:48:02,115 p=5867 u=mistral | Friday 13 July 2018 20:48:02 -0400 (0:00:00.038) 0:01:25.303 *********** >2018-07-13 20:48:02,350 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/Compute/deployments.yaml for compute-0 >2018-07-13 20:48:02,358 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/Compute/deployments.yaml for compute-0 >2018-07-13 20:48:02,366 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/Compute/deployments.yaml for compute-0 >2018-07-13 20:48:02,374 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/Compute/deployments.yaml for compute-0 >2018-07-13 20:48:02,383 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/Compute/deployments.yaml for compute-0 >2018-07-13 20:48:02,391 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/Compute/deployments.yaml for compute-0 >2018-07-13 20:48:02,399 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/Compute/deployments.yaml for compute-0 >2018-07-13 20:48:02,407 p=5867 u=mistral 
| included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/Compute/deployments.yaml for compute-0 >2018-07-13 20:48:02,448 p=5867 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:48:02,449 p=5867 u=mistral | Friday 13 July 2018 20:48:02 -0400 (0:00:00.333) 0:01:25.637 *********** >2018-07-13 20:48:02,558 p=5867 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "b1121018-c3ca-4607-b9f5-0eb38dd83734"}, "changed": false} >2018-07-13 20:48:02,579 p=5867 u=mistral | TASK [Render deployment file for NetworkDeployment] **************************** >2018-07-13 20:48:02,579 p=5867 u=mistral | Friday 13 July 2018 20:48:02 -0400 (0:00:00.130) 0:01:25.767 *********** >2018-07-13 20:48:03,291 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "44f1a1ffb689a2404187a6c8f47a29a35caacace", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-b1121018-c3ca-4607-b9f5-0eb38dd83734", "gid": 0, "group": "root", "md5sum": "b7fe6c214a1b7b2571dba8eb7a56b493", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9256, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529282.7-43775032955024/source", "state": "file", "uid": 0} >2018-07-13 20:48:03,311 p=5867 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] ********************* >2018-07-13 20:48:03,311 p=5867 u=mistral | Friday 13 July 2018 20:48:03 -0400 (0:00:00.732) 0:01:26.499 *********** >2018-07-13 20:48:03,657 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:48:03,679 p=5867 u=mistral | TASK [Check previous deployment rc for NetworkDeployment] ********************** >2018-07-13 20:48:03,679 p=5867 u=mistral | Friday 13 July 2018 20:48:03 -0400 (0:00:00.368) 0:01:26.867 *********** >2018-07-13 20:48:03,697 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional 
result was False"} >2018-07-13 20:48:03,717 p=5867 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] *** >2018-07-13 20:48:03,717 p=5867 u=mistral | Friday 13 July 2018 20:48:03 -0400 (0:00:00.037) 0:01:26.905 *********** >2018-07-13 20:48:03,735 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:03,755 p=5867 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************ >2018-07-13 20:48:03,755 p=5867 u=mistral | Friday 13 July 2018 20:48:03 -0400 (0:00:00.038) 0:01:26.943 *********** >2018-07-13 20:48:03,772 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:03,792 p=5867 u=mistral | TASK [Run deployment NetworkDeployment] **************************************** >2018-07-13 20:48:03,792 p=5867 u=mistral | Friday 13 July 2018 20:48:03 -0400 (0:00:00.036) 0:01:26.980 *********** >2018-07-13 20:48:23,952 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/b1121018-c3ca-4607-b9f5-0eb38dd83734.notify.json)", "delta": "0:00:19.750152", "end": "2018-07-13 20:48:24.212645", "rc": 0, "start": "2018-07-13 20:48:04.462493", "stderr": "[2018-07-13 20:48:04,489] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/b1121018-c3ca-4607-b9f5-0eb38dd83734.json\n[2018-07-13 20:48:23,780] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": 
\\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": 
false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/07/13 08:48:04 PM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/07/13 08:48:04 PM] [INFO] Ifcfg net config provider created.\\n[2018/07/13 08:48:04 PM] [INFO] Not using any mapping file.\\n[2018/07/13 08:48:05 PM] [INFO] Finding active nics\\n[2018/07/13 08:48:05 PM] [INFO] eth2 is an embedded active nic\\n[2018/07/13 08:48:05 PM] [INFO] eth1 is an embedded active nic\\n[2018/07/13 08:48:05 PM] [INFO] eth0 is an embedded active nic\\n[2018/07/13 08:48:05 PM] [INFO] lo is not an active nic\\n[2018/07/13 08:48:05 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/07/13 08:48:05 PM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/07/13 08:48:05 PM] [INFO] nic3 mapped to: eth2\\n[2018/07/13 08:48:05 PM] [INFO] nic2 mapped to: eth1\\n[2018/07/13 08:48:05 PM] [INFO] nic1 mapped to: eth0\\n[2018/07/13 08:48:05 PM] [INFO] adding interface: eth0\\n[2018/07/13 08:48:05 PM] [INFO] adding custom route for interface: eth0\\n[2018/07/13 08:48:05 PM] [INFO] adding bridge: br-isolated\\n[2018/07/13 08:48:05 PM] [INFO] adding interface: eth1\\n[2018/07/13 08:48:05 PM] [INFO] adding vlan: vlan20\\n[2018/07/13 08:48:05 PM] [INFO] adding vlan: vlan30\\n[2018/07/13 08:48:05 PM] [INFO] adding vlan: vlan50\\n[2018/07/13 08:48:05 PM] [INFO] adding interface: eth2\\n[2018/07/13 08:48:05 PM] [INFO] applying network configs...\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan20\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan30\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan50\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: eth2\\n[2018/07/13 08:48:05 PM] 
[INFO] running ifdown on interface: eth1\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: eth0\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan20\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan30\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan50\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on bridge: br-isolated\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/07/13 
08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/07/13 08:48:05 PM] [INFO] running ifup on bridge: br-isolated\\n[2018/07/13 08:48:05 PM] [INFO] running ifup on interface: eth2\\n[2018/07/13 08:48:05 PM] [INFO] running ifup on interface: eth1\\n[2018/07/13 08:48:06 PM] [INFO] running ifup on interface: eth0\\n[2018/07/13 08:48:10 PM] [INFO] running ifup on interface: vlan20\\n[2018/07/13 08:48:14 PM] [INFO] running ifup on interface: vlan30\\n[2018/07/13 08:48:18 PM] [INFO] running ifup on interface: vlan50\\n[2018/07/13 08:48:22 PM] [INFO] running ifup on interface: vlan20\\n[2018/07/13 08:48:23 PM] [INFO] running ifup on interface: vlan30\\n[2018/07/13 08:48:23 PM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ 
os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-07-13 20:48:23,780] (heat-config) [DEBUG] [2018-07-13 20:48:04,512] (heat-config) [INFO] interface_name=nic1\n[2018-07-13 20:48:04,512] (heat-config) [INFO] bridge_name=br-ex\n[2018-07-13 20:48:04,512] (heat-config) [INFO] deploy_server_id=99a8e115-a0a1-4b89-8099-f4376943e467\n[2018-07-13 20:48:04,512] (heat-config) [INFO] deploy_action=CREATE\n[2018-07-13 20:48:04,512] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-ccm2x5gmqhvh-0-5fuht4b3gazf-NetworkDeployment-hmbd54sj6xp4-TripleOSoftwareDeployment-ekatt7jr24r6/5094d2c1-8ef9-4550-92bf-b7b3502358e9\n[2018-07-13 20:48:04,512] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-07-13 20:48:04,512] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-07-13 20:48:04,513] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/b1121018-c3ca-4607-b9f5-0eb38dd83734\n[2018-07-13 20:48:23,775] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS\n\n[2018-07-13 20:48:23,775] 
(heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i 
s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/07/13 08:48:04 PM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/07/13 08:48:04 PM] [INFO] Ifcfg net config provider created.\n[2018/07/13 08:48:04 PM] [INFO] Not using any mapping file.\n[2018/07/13 08:48:05 PM] [INFO] Finding active nics\n[2018/07/13 08:48:05 PM] [INFO] eth2 is an embedded active nic\n[2018/07/13 08:48:05 PM] [INFO] eth1 is an embedded active nic\n[2018/07/13 08:48:05 PM] [INFO] eth0 is an embedded active nic\n[2018/07/13 08:48:05 PM] [INFO] lo is not an active nic\n[2018/07/13 08:48:05 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/07/13 08:48:05 PM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/07/13 08:48:05 PM] [INFO] nic3 mapped to: eth2\n[2018/07/13 08:48:05 PM] [INFO] nic2 mapped to: eth1\n[2018/07/13 08:48:05 PM] [INFO] nic1 mapped to: eth0\n[2018/07/13 08:48:05 PM] [INFO] adding interface: eth0\n[2018/07/13 08:48:05 PM] [INFO] adding custom route for interface: eth0\n[2018/07/13 08:48:05 PM] [INFO] adding bridge: br-isolated\n[2018/07/13 08:48:05 PM] [INFO] adding interface: eth1\n[2018/07/13 08:48:05 PM] [INFO] adding vlan: vlan20\n[2018/07/13 08:48:05 PM] [INFO] adding vlan: vlan30\n[2018/07/13 08:48:05 PM] [INFO] adding vlan: vlan50\n[2018/07/13 08:48:05 PM] [INFO] adding interface: eth2\n[2018/07/13 08:48:05 PM] [INFO] applying network configs...\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan20\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan30\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan50\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: eth2\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: eth1\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: eth0\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: 
vlan20\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan30\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan50\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on bridge: br-isolated\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/07/13 08:48:05 PM] 
[INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/07/13 08:48:05 PM] [INFO] running ifup on bridge: br-isolated\n[2018/07/13 08:48:05 PM] [INFO] running ifup on interface: eth2\n[2018/07/13 08:48:05 PM] [INFO] running ifup on interface: eth1\n[2018/07/13 08:48:06 PM] [INFO] running ifup on interface: eth0\n[2018/07/13 08:48:10 PM] [INFO] running ifup on interface: vlan20\n[2018/07/13 08:48:14 PM] [INFO] running ifup on interface: vlan30\n[2018/07/13 08:48:18 PM] [INFO] running ifup on interface: vlan50\n[2018/07/13 08:48:22 PM] [INFO] running ifup on interface: vlan20\n[2018/07/13 08:48:23 PM] [INFO] running ifup on interface: vlan30\n[2018/07/13 08:48:23 PM] [INFO] running ifup on interface: vlan50\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.3\n++ '[' -n 192.168.24.3 ']'\n++ break\n++ echo 192.168.24.3\n+ local 
METADATA_IP=192.168.24.3\n+ '[' -n 192.168.24.3 ']'\n+ is_local_ip 192.168.24.3\n+ local IP_TO_CHECK=192.168.24.3\n+ ip -o a\n+ grep 'inet6\\? 192.168.24.3/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\n+ _ping=ping\n+ [[ 192.168.24.3 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.3\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-07-13 20:48:23,775] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/b1121018-c3ca-4607-b9f5-0eb38dd83734\n\n[2018-07-13 20:48:23,780] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-07-13 20:48:23,781] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b1121018-c3ca-4607-b9f5-0eb38dd83734.json < /var/lib/heat-config/deployed/b1121018-c3ca-4607-b9f5-0eb38dd83734.notify.json\n[2018-07-13 20:48:24,205] (heat-config) [INFO] \n[2018-07-13 20:48:24,205] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:48:04,489] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/b1121018-c3ca-4607-b9f5-0eb38dd83734.json", "[2018-07-13 20:48:23,780] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", 
\\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i 
s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/07/13 08:48:04 PM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/07/13 08:48:04 PM] [INFO] Ifcfg net config provider created.\\n[2018/07/13 08:48:04 PM] [INFO] Not using any mapping file.\\n[2018/07/13 08:48:05 PM] [INFO] Finding active nics\\n[2018/07/13 08:48:05 PM] [INFO] eth2 is an embedded active nic\\n[2018/07/13 08:48:05 PM] [INFO] eth1 is an embedded active nic\\n[2018/07/13 08:48:05 PM] [INFO] eth0 is an embedded active nic\\n[2018/07/13 08:48:05 PM] [INFO] lo is not an active nic\\n[2018/07/13 08:48:05 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/07/13 08:48:05 PM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/07/13 08:48:05 PM] [INFO] nic3 mapped to: eth2\\n[2018/07/13 08:48:05 PM] [INFO] nic2 mapped to: eth1\\n[2018/07/13 08:48:05 PM] [INFO] nic1 mapped to: eth0\\n[2018/07/13 08:48:05 PM] [INFO] adding interface: eth0\\n[2018/07/13 08:48:05 PM] [INFO] adding custom route for interface: eth0\\n[2018/07/13 08:48:05 PM] [INFO] adding bridge: br-isolated\\n[2018/07/13 08:48:05 PM] [INFO] adding interface: eth1\\n[2018/07/13 08:48:05 PM] [INFO] adding vlan: vlan20\\n[2018/07/13 08:48:05 PM] [INFO] adding vlan: vlan30\\n[2018/07/13 08:48:05 PM] [INFO] adding vlan: vlan50\\n[2018/07/13 08:48:05 PM] [INFO] adding interface: eth2\\n[2018/07/13 08:48:05 PM] [INFO] applying network configs...\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan20\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan30\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan50\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: eth2\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: eth1\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: eth0\\n[2018/07/13 08:48:05 PM] [INFO] 
running ifdown on interface: vlan20\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan30\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan50\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on bridge: br-isolated\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/07/13 08:48:05 PM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/07/13 08:48:05 PM] [INFO] running ifup on bridge: br-isolated\\n[2018/07/13 08:48:05 PM] [INFO] running ifup on interface: eth2\\n[2018/07/13 08:48:05 PM] [INFO] running ifup on interface: eth1\\n[2018/07/13 08:48:06 PM] [INFO] running ifup on interface: eth0\\n[2018/07/13 08:48:10 PM] [INFO] running ifup on interface: vlan20\\n[2018/07/13 08:48:14 PM] [INFO] running ifup on interface: vlan30\\n[2018/07/13 08:48:18 PM] [INFO] running ifup on interface: vlan50\\n[2018/07/13 08:48:22 PM] [INFO] running ifup on interface: vlan20\\n[2018/07/13 08:48:23 PM] [INFO] running ifup on interface: vlan30\\n[2018/07/13 08:48:23 PM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 
's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-07-13 20:48:23,780] (heat-config) [DEBUG] [2018-07-13 20:48:04,512] (heat-config) [INFO] interface_name=nic1", "[2018-07-13 20:48:04,512] (heat-config) [INFO] bridge_name=br-ex", "[2018-07-13 20:48:04,512] (heat-config) [INFO] deploy_server_id=99a8e115-a0a1-4b89-8099-f4376943e467", "[2018-07-13 20:48:04,512] (heat-config) [INFO] deploy_action=CREATE", "[2018-07-13 20:48:04,512] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-ccm2x5gmqhvh-0-5fuht4b3gazf-NetworkDeployment-hmbd54sj6xp4-TripleOSoftwareDeployment-ekatt7jr24r6/5094d2c1-8ef9-4550-92bf-b7b3502358e9", "[2018-07-13 20:48:04,512] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-07-13 20:48:04,512] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-07-13 20:48:04,513] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/b1121018-c3ca-4607-b9f5-0eb38dd83734", "[2018-07-13 20:48:23,775] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", "", "[2018-07-13 20:48:23,775] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": 
\"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c 
/etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/07/13 08:48:04 PM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/07/13 08:48:04 PM] [INFO] Ifcfg net config provider created.", "[2018/07/13 08:48:04 PM] [INFO] Not using any mapping file.", "[2018/07/13 08:48:05 PM] [INFO] Finding active nics", "[2018/07/13 08:48:05 PM] [INFO] eth2 is an embedded active nic", "[2018/07/13 08:48:05 PM] [INFO] eth1 is an embedded active nic", "[2018/07/13 08:48:05 PM] [INFO] eth0 is an embedded active nic", "[2018/07/13 08:48:05 PM] [INFO] lo is not an active nic", "[2018/07/13 08:48:05 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/07/13 08:48:05 PM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/07/13 08:48:05 PM] [INFO] nic3 mapped to: eth2", "[2018/07/13 08:48:05 PM] [INFO] nic2 mapped to: eth1", "[2018/07/13 08:48:05 PM] [INFO] nic1 mapped to: eth0", "[2018/07/13 08:48:05 PM] [INFO] adding interface: eth0", "[2018/07/13 08:48:05 PM] [INFO] adding custom route for interface: eth0", "[2018/07/13 08:48:05 PM] [INFO] adding bridge: br-isolated", "[2018/07/13 08:48:05 PM] [INFO] adding interface: eth1", "[2018/07/13 08:48:05 PM] [INFO] adding vlan: vlan20", "[2018/07/13 08:48:05 PM] [INFO] adding vlan: vlan30", "[2018/07/13 08:48:05 PM] [INFO] adding vlan: vlan50", "[2018/07/13 08:48:05 PM] [INFO] adding interface: eth2", "[2018/07/13 08:48:05 PM] [INFO] applying network configs...", "[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan20", "[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan30", "[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan50", "[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: eth2", "[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: eth1", "[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: eth0", "[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan20", "[2018/07/13 08:48:05 
PM] [INFO] running ifdown on interface: vlan30", "[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan50", "[2018/07/13 08:48:05 PM] [INFO] running ifdown on bridge: br-isolated", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/07/13 
08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/07/13 08:48:05 PM] [INFO] running ifup on bridge: br-isolated", "[2018/07/13 08:48:05 PM] [INFO] running ifup on interface: eth2", "[2018/07/13 08:48:05 PM] [INFO] running ifup on interface: eth1", "[2018/07/13 08:48:06 PM] [INFO] running ifup on interface: eth0", "[2018/07/13 08:48:10 PM] [INFO] running ifup on interface: vlan20", "[2018/07/13 08:48:14 PM] [INFO] running ifup on interface: vlan30", "[2018/07/13 08:48:18 PM] [INFO] running ifup on interface: vlan50", "[2018/07/13 08:48:22 PM] [INFO] running ifup on interface: vlan20", "[2018/07/13 08:48:23 PM] [INFO] running ifup on interface: vlan30", "[2018/07/13 08:48:23 PM] [INFO] running ifup on interface: vlan50", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.3", "++ 
'[' -n 192.168.24.3 ']'", "++ break", "++ echo 192.168.24.3", "+ local METADATA_IP=192.168.24.3", "+ '[' -n 192.168.24.3 ']'", "+ is_local_ip 192.168.24.3", "+ local IP_TO_CHECK=192.168.24.3", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.3/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", "+ _ping=ping", "+ [[ 192.168.24.3 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.3", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-07-13 20:48:23,775] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/b1121018-c3ca-4607-b9f5-0eb38dd83734", "", "[2018-07-13 20:48:23,780] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-07-13 20:48:23,781] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b1121018-c3ca-4607-b9f5-0eb38dd83734.json < /var/lib/heat-config/deployed/b1121018-c3ca-4607-b9f5-0eb38dd83734.notify.json", "[2018-07-13 20:48:24,205] (heat-config) [INFO] ", "[2018-07-13 20:48:24,205] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:48:23,977 p=5867 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-07-13 20:48:23,977 p=5867 u=mistral | Friday 13 July 2018 20:48:23 -0400 (0:00:20.184) 0:01:47.165 *********** >2018-07-13 20:48:24,092 p=5867 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:48:04,489] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/b1121018-c3ca-4607-b9f5-0eb38dd83734.json", > "[2018-07-13 20:48:23,780] (heat-config) [INFO] 
{\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": 
\\\"172.17.3.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/07/13 08:48:04 PM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/07/13 08:48:04 PM] [INFO] Ifcfg net config provider created.\\n[2018/07/13 08:48:04 PM] [INFO] Not using any mapping file.\\n[2018/07/13 08:48:05 PM] [INFO] Finding active nics\\n[2018/07/13 08:48:05 PM] [INFO] eth2 is an embedded active nic\\n[2018/07/13 08:48:05 PM] [INFO] eth1 is an embedded active nic\\n[2018/07/13 08:48:05 PM] [INFO] eth0 is an embedded active nic\\n[2018/07/13 08:48:05 PM] [INFO] lo is not an active nic\\n[2018/07/13 08:48:05 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/07/13 08:48:05 PM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/07/13 08:48:05 PM] [INFO] nic3 mapped to: eth2\\n[2018/07/13 08:48:05 PM] [INFO] nic2 mapped to: eth1\\n[2018/07/13 08:48:05 PM] [INFO] nic1 mapped to: eth0\\n[2018/07/13 08:48:05 PM] [INFO] adding interface: eth0\\n[2018/07/13 08:48:05 PM] [INFO] adding custom route for interface: eth0\\n[2018/07/13 08:48:05 PM] [INFO] adding bridge: br-isolated\\n[2018/07/13 08:48:05 PM] [INFO] adding interface: eth1\\n[2018/07/13 08:48:05 PM] [INFO] adding vlan: vlan20\\n[2018/07/13 08:48:05 PM] [INFO] adding vlan: vlan30\\n[2018/07/13 08:48:05 PM] [INFO] adding vlan: vlan50\\n[2018/07/13 08:48:05 PM] [INFO] adding interface: 
eth2\\n[2018/07/13 08:48:05 PM] [INFO] applying network configs...\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan20\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan30\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan50\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: eth2\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: eth1\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: eth0\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan20\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan30\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan50\\n[2018/07/13 08:48:05 PM] [INFO] running ifdown on bridge: br-isolated\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/07/13 
08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/07/13 08:48:05 PM] [INFO] running ifup on bridge: br-isolated\\n[2018/07/13 08:48:05 PM] [INFO] running ifup on interface: eth2\\n[2018/07/13 08:48:05 PM] [INFO] running ifup on interface: eth1\\n[2018/07/13 08:48:06 PM] [INFO] running ifup on interface: eth0\\n[2018/07/13 08:48:10 PM] [INFO] running ifup on interface: vlan20\\n[2018/07/13 08:48:14 PM] [INFO] running ifup on interface: vlan30\\n[2018/07/13 08:48:18 PM] [INFO] running ifup on interface: vlan50\\n[2018/07/13 08:48:22 PM] [INFO] running ifup on interface: vlan20\\n[2018/07/13 08:48:23 PM] [INFO] running ifup on interface: vlan30\\n[2018/07/13 08:48:23 PM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url 
os-collect-config.zaqar.auth_url\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-07-13 20:48:23,780] (heat-config) [DEBUG] [2018-07-13 20:48:04,512] (heat-config) [INFO] interface_name=nic1", > "[2018-07-13 20:48:04,512] (heat-config) [INFO] bridge_name=br-ex", > "[2018-07-13 20:48:04,512] (heat-config) [INFO] deploy_server_id=99a8e115-a0a1-4b89-8099-f4376943e467", > "[2018-07-13 20:48:04,512] (heat-config) [INFO] deploy_action=CREATE", > "[2018-07-13 20:48:04,512] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-ccm2x5gmqhvh-0-5fuht4b3gazf-NetworkDeployment-hmbd54sj6xp4-TripleOSoftwareDeployment-ekatt7jr24r6/5094d2c1-8ef9-4550-92bf-b7b3502358e9", > "[2018-07-13 20:48:04,512] 
(heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-07-13 20:48:04,512] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-07-13 20:48:04,513] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/b1121018-c3ca-4607-b9f5-0eb38dd83734", > "[2018-07-13 20:48:23,775] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", > "", > "[2018-07-13 20:48:23,775] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": 
\"172.17.3.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/07/13 08:48:04 PM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/07/13 08:48:04 PM] [INFO] Ifcfg net config provider created.", > "[2018/07/13 08:48:04 PM] [INFO] Not using any mapping file.", > "[2018/07/13 08:48:05 PM] [INFO] Finding active nics", > "[2018/07/13 08:48:05 PM] [INFO] eth2 is an embedded active nic", > "[2018/07/13 08:48:05 PM] [INFO] eth1 is an embedded active nic", > "[2018/07/13 08:48:05 PM] [INFO] eth0 is an embedded active nic", > "[2018/07/13 08:48:05 PM] [INFO] lo is not an active nic", > "[2018/07/13 08:48:05 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/07/13 08:48:05 PM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/07/13 08:48:05 PM] [INFO] nic3 mapped to: eth2", > "[2018/07/13 08:48:05 PM] [INFO] nic2 mapped to: eth1", > "[2018/07/13 08:48:05 PM] [INFO] nic1 mapped to: eth0", > "[2018/07/13 08:48:05 PM] [INFO] adding interface: eth0", > "[2018/07/13 08:48:05 PM] [INFO] adding custom route for interface: eth0", > "[2018/07/13 08:48:05 PM] [INFO] adding bridge: br-isolated", > "[2018/07/13 08:48:05 PM] [INFO] adding interface: eth1", > "[2018/07/13 08:48:05 PM] [INFO] adding vlan: vlan20", > "[2018/07/13 08:48:05 PM] [INFO] adding vlan: vlan30", > "[2018/07/13 08:48:05 PM] [INFO] adding vlan: vlan50", > "[2018/07/13 08:48:05 PM] [INFO] adding interface: 
eth2", > "[2018/07/13 08:48:05 PM] [INFO] applying network configs...", > "[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan20", > "[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan30", > "[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan50", > "[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: eth2", > "[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: eth1", > "[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: eth0", > "[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan20", > "[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan30", > "[2018/07/13 08:48:05 PM] [INFO] running ifdown on interface: vlan50", > "[2018/07/13 08:48:05 PM] [INFO] running ifdown on bridge: br-isolated", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", > "[2018/07/13 08:48:05 PM] [INFO] 
Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/07/13 08:48:05 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/07/13 08:48:05 PM] [INFO] running ifup on bridge: br-isolated", > "[2018/07/13 08:48:05 PM] [INFO] running ifup on interface: eth2", > "[2018/07/13 08:48:05 PM] [INFO] running ifup on interface: eth1", > "[2018/07/13 08:48:06 PM] [INFO] running ifup on interface: eth0", > "[2018/07/13 08:48:10 PM] [INFO] running ifup on interface: vlan20", > "[2018/07/13 08:48:14 PM] [INFO] running ifup on interface: vlan30", > "[2018/07/13 08:48:18 PM] [INFO] running ifup on interface: vlan50", > "[2018/07/13 08:48:22 PM] [INFO] running ifup on interface: vlan20", > "[2018/07/13 08:48:23 PM] [INFO] running ifup on interface: vlan30", > "[2018/07/13 08:48:23 PM] [INFO] running ifup on interface: vlan50", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n 
'' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.3", > "++ '[' -n 192.168.24.3 ']'", > "++ break", > "++ echo 192.168.24.3", > "+ local METADATA_IP=192.168.24.3", > "+ '[' -n 192.168.24.3 ']'", > "+ is_local_ip 192.168.24.3", > "+ local IP_TO_CHECK=192.168.24.3", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.3/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", > "+ _ping=ping", > "+ [[ 192.168.24.3 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.3", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-07-13 20:48:23,775] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/b1121018-c3ca-4607-b9f5-0eb38dd83734", > "", > "[2018-07-13 20:48:23,780] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-07-13 20:48:23,781] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b1121018-c3ca-4607-b9f5-0eb38dd83734.json < 
/var/lib/heat-config/deployed/b1121018-c3ca-4607-b9f5-0eb38dd83734.notify.json", > "[2018-07-13 20:48:24,205] (heat-config) [INFO] ", > "[2018-07-13 20:48:24,205] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:48:24,116 p=5867 u=mistral | TASK [Check-mode for Run deployment NetworkDeployment] ************************* >2018-07-13 20:48:24,117 p=5867 u=mistral | Friday 13 July 2018 20:48:24 -0400 (0:00:00.139) 0:01:47.304 *********** >2018-07-13 20:48:24,132 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:24,151 p=5867 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:48:24,151 p=5867 u=mistral | Friday 13 July 2018 20:48:24 -0400 (0:00:00.034) 0:01:47.339 *********** >2018-07-13 20:48:24,258 p=5867 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "f4fb9629-3573-47b2-8d29-384450803dbd"}, "changed": false} >2018-07-13 20:48:24,326 p=5867 u=mistral | TASK [Render deployment file for NovaComputeUpgradeInitDeployment] ************* >2018-07-13 20:48:24,327 p=5867 u=mistral | Friday 13 July 2018 20:48:24 -0400 (0:00:00.175) 0:01:47.514 *********** >2018-07-13 20:48:24,979 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "67f850f3cc119173f06c6a719d09a3ce7844c935", "dest": "/var/lib/heat-config/tripleo-config-download/NovaComputeUpgradeInitDeployment-f4fb9629-3573-47b2-8d29-384450803dbd", "gid": 0, "group": "root", "md5sum": "14db3029f5e46f2e75d1f42854830824", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1182, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529304.38-276540188069789/source", "state": "file", "uid": 0} >2018-07-13 20:48:25,000 p=5867 u=mistral | TASK [Check if deployed file exists for NovaComputeUpgradeInitDeployment] ****** >2018-07-13 20:48:25,000 p=5867 u=mistral | Friday 13 July 2018 
20:48:24 -0400 (0:00:00.673) 0:01:48.188 *********** >2018-07-13 20:48:25,347 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:48:25,368 p=5867 u=mistral | TASK [Check previous deployment rc for NovaComputeUpgradeInitDeployment] ******* >2018-07-13 20:48:25,368 p=5867 u=mistral | Friday 13 July 2018 20:48:25 -0400 (0:00:00.367) 0:01:48.556 *********** >2018-07-13 20:48:25,388 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:25,408 p=5867 u=mistral | TASK [Remove deployed file for NovaComputeUpgradeInitDeployment when previous deployment failed] *** >2018-07-13 20:48:25,408 p=5867 u=mistral | Friday 13 July 2018 20:48:25 -0400 (0:00:00.039) 0:01:48.596 *********** >2018-07-13 20:48:25,427 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:25,447 p=5867 u=mistral | TASK [Force remove deployed file for NovaComputeUpgradeInitDeployment] ********* >2018-07-13 20:48:25,447 p=5867 u=mistral | Friday 13 July 2018 20:48:25 -0400 (0:00:00.038) 0:01:48.635 *********** >2018-07-13 20:48:25,465 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:25,487 p=5867 u=mistral | TASK [Run deployment NovaComputeUpgradeInitDeployment] ************************* >2018-07-13 20:48:25,487 p=5867 u=mistral | Friday 13 July 2018 20:48:25 -0400 (0:00:00.039) 0:01:48.675 *********** >2018-07-13 20:48:26,333 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/f4fb9629-3573-47b2-8d29-384450803dbd.notify.json)", "delta": "0:00:00.496031", "end": "2018-07-13 20:48:26.601631", "rc": 0, "start": "2018-07-13 20:48:26.105600", "stderr": "[2018-07-13 20:48:26,132] (heat-config) [DEBUG] Running 
/usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f4fb9629-3573-47b2-8d29-384450803dbd.json\n[2018-07-13 20:48:26,158] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-07-13 20:48:26,159] (heat-config) [DEBUG] [2018-07-13 20:48:26,151] (heat-config) [INFO] deploy_server_id=99a8e115-a0a1-4b89-8099-f4376943e467\n[2018-07-13 20:48:26,151] (heat-config) [INFO] deploy_action=CREATE\n[2018-07-13 20:48:26,151] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-ccm2x5gmqhvh-0-5fuht4b3gazf-NovaComputeUpgradeInitDeployment-xkithy55hgkl/af29aa0d-72e6-4159-8cff-13f6f26507a8\n[2018-07-13 20:48:26,151] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-07-13 20:48:26,151] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-07-13 20:48:26,152] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f4fb9629-3573-47b2-8d29-384450803dbd\n[2018-07-13 20:48:26,155] (heat-config) [INFO] \n[2018-07-13 20:48:26,155] (heat-config) [DEBUG] \n[2018-07-13 20:48:26,156] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f4fb9629-3573-47b2-8d29-384450803dbd\n\n[2018-07-13 20:48:26,159] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-07-13 20:48:26,159] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f4fb9629-3573-47b2-8d29-384450803dbd.json < /var/lib/heat-config/deployed/f4fb9629-3573-47b2-8d29-384450803dbd.notify.json\n[2018-07-13 20:48:26,594] (heat-config) [INFO] \n[2018-07-13 20:48:26,594] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:48:26,132] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f4fb9629-3573-47b2-8d29-384450803dbd.json", "[2018-07-13 20:48:26,158] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-07-13 20:48:26,159] (heat-config) [DEBUG] [2018-07-13 
20:48:26,151] (heat-config) [INFO] deploy_server_id=99a8e115-a0a1-4b89-8099-f4376943e467", "[2018-07-13 20:48:26,151] (heat-config) [INFO] deploy_action=CREATE", "[2018-07-13 20:48:26,151] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-ccm2x5gmqhvh-0-5fuht4b3gazf-NovaComputeUpgradeInitDeployment-xkithy55hgkl/af29aa0d-72e6-4159-8cff-13f6f26507a8", "[2018-07-13 20:48:26,151] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-07-13 20:48:26,151] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-07-13 20:48:26,152] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f4fb9629-3573-47b2-8d29-384450803dbd", "[2018-07-13 20:48:26,155] (heat-config) [INFO] ", "[2018-07-13 20:48:26,155] (heat-config) [DEBUG] ", "[2018-07-13 20:48:26,156] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f4fb9629-3573-47b2-8d29-384450803dbd", "", "[2018-07-13 20:48:26,159] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-07-13 20:48:26,159] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f4fb9629-3573-47b2-8d29-384450803dbd.json < /var/lib/heat-config/deployed/f4fb9629-3573-47b2-8d29-384450803dbd.notify.json", "[2018-07-13 20:48:26,594] (heat-config) [INFO] ", "[2018-07-13 20:48:26,594] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:48:26,355 p=5867 u=mistral | TASK [Output for NovaComputeUpgradeInitDeployment] ***************************** >2018-07-13 20:48:26,355 p=5867 u=mistral | Friday 13 July 2018 20:48:26 -0400 (0:00:00.868) 0:01:49.543 *********** >2018-07-13 20:48:26,407 p=5867 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:48:26,132] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f4fb9629-3573-47b2-8d29-384450803dbd.json", > "[2018-07-13 20:48:26,158] (heat-config) [INFO] {\"deploy_stdout\": 
\"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-07-13 20:48:26,159] (heat-config) [DEBUG] [2018-07-13 20:48:26,151] (heat-config) [INFO] deploy_server_id=99a8e115-a0a1-4b89-8099-f4376943e467", > "[2018-07-13 20:48:26,151] (heat-config) [INFO] deploy_action=CREATE", > "[2018-07-13 20:48:26,151] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-ccm2x5gmqhvh-0-5fuht4b3gazf-NovaComputeUpgradeInitDeployment-xkithy55hgkl/af29aa0d-72e6-4159-8cff-13f6f26507a8", > "[2018-07-13 20:48:26,151] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-07-13 20:48:26,151] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-07-13 20:48:26,152] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f4fb9629-3573-47b2-8d29-384450803dbd", > "[2018-07-13 20:48:26,155] (heat-config) [INFO] ", > "[2018-07-13 20:48:26,155] (heat-config) [DEBUG] ", > "[2018-07-13 20:48:26,156] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f4fb9629-3573-47b2-8d29-384450803dbd", > "", > "[2018-07-13 20:48:26,159] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-07-13 20:48:26,159] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f4fb9629-3573-47b2-8d29-384450803dbd.json < /var/lib/heat-config/deployed/f4fb9629-3573-47b2-8d29-384450803dbd.notify.json", > "[2018-07-13 20:48:26,594] (heat-config) [INFO] ", > "[2018-07-13 20:48:26,594] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:48:26,428 p=5867 u=mistral | TASK [Check-mode for Run deployment NovaComputeUpgradeInitDeployment] ********** >2018-07-13 20:48:26,428 p=5867 u=mistral | Friday 13 July 2018 20:48:26 -0400 (0:00:00.072) 0:01:49.616 *********** >2018-07-13 20:48:26,444 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:26,463 p=5867 u=mistral | TASK [Lookup deployment UUID] 
************************************************** >2018-07-13 20:48:26,463 p=5867 u=mistral | Friday 13 July 2018 20:48:26 -0400 (0:00:00.034) 0:01:49.651 *********** >2018-07-13 20:48:26,601 p=5867 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "74851bed-3b64-4715-8f01-98803a7b6bd1"}, "changed": false} >2018-07-13 20:48:26,622 p=5867 u=mistral | TASK [Render deployment file for NovaComputeDeployment] ************************ >2018-07-13 20:48:26,623 p=5867 u=mistral | Friday 13 July 2018 20:48:26 -0400 (0:00:00.159) 0:01:49.811 *********** >2018-07-13 20:48:27,372 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "75deaebe1a790aca6608f62be6bd979533e9e942", "dest": "/var/lib/heat-config/tripleo-config-download/NovaComputeDeployment-74851bed-3b64-4715-8f01-98803a7b6bd1", "gid": 0, "group": "root", "md5sum": "db65bdd42faa8982934f96ee65c81310", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 21996, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529306.77-126917223647484/source", "state": "file", "uid": 0} >2018-07-13 20:48:27,393 p=5867 u=mistral | TASK [Check if deployed file exists for NovaComputeDeployment] ***************** >2018-07-13 20:48:27,393 p=5867 u=mistral | Friday 13 July 2018 20:48:27 -0400 (0:00:00.770) 0:01:50.581 *********** >2018-07-13 20:48:27,741 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:48:27,762 p=5867 u=mistral | TASK [Check previous deployment rc for NovaComputeDeployment] ****************** >2018-07-13 20:48:27,762 p=5867 u=mistral | Friday 13 July 2018 20:48:27 -0400 (0:00:00.369) 0:01:50.950 *********** >2018-07-13 20:48:27,781 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:27,802 p=5867 u=mistral | TASK [Remove deployed file for NovaComputeDeployment when previous deployment failed] *** >2018-07-13 
20:48:27,802 p=5867 u=mistral | Friday 13 July 2018 20:48:27 -0400 (0:00:00.040) 0:01:50.990 *********** >2018-07-13 20:48:27,822 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:27,842 p=5867 u=mistral | TASK [Force remove deployed file for NovaComputeDeployment] ******************** >2018-07-13 20:48:27,842 p=5867 u=mistral | Friday 13 July 2018 20:48:27 -0400 (0:00:00.039) 0:01:51.030 *********** >2018-07-13 20:48:27,859 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:27,880 p=5867 u=mistral | TASK [Run deployment NovaComputeDeployment] ************************************ >2018-07-13 20:48:27,881 p=5867 u=mistral | Friday 13 July 2018 20:48:27 -0400 (0:00:00.038) 0:01:51.069 *********** >2018-07-13 20:48:28,840 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/74851bed-3b64-4715-8f01-98803a7b6bd1.notify.json)", "delta": "0:00:00.605753", "end": "2018-07-13 20:48:29.111060", "rc": 0, "start": "2018-07-13 20:48:28.505307", "stderr": "[2018-07-13 20:48:28,533] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/74851bed-3b64-4715-8f01-98803a7b6bd1.json\n[2018-07-13 20:48:28,661] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-07-13 20:48:28,661] (heat-config) [DEBUG] \n[2018-07-13 20:48:28,661] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-07-13 20:48:28,662] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/74851bed-3b64-4715-8f01-98803a7b6bd1.json < /var/lib/heat-config/deployed/74851bed-3b64-4715-8f01-98803a7b6bd1.notify.json\n[2018-07-13 20:48:29,104] (heat-config) [INFO] \n[2018-07-13 20:48:29,104] 
(heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:48:28,533] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/74851bed-3b64-4715-8f01-98803a7b6bd1.json", "[2018-07-13 20:48:28,661] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-07-13 20:48:28,661] (heat-config) [DEBUG] ", "[2018-07-13 20:48:28,661] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-07-13 20:48:28,662] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/74851bed-3b64-4715-8f01-98803a7b6bd1.json < /var/lib/heat-config/deployed/74851bed-3b64-4715-8f01-98803a7b6bd1.notify.json", "[2018-07-13 20:48:29,104] (heat-config) [INFO] ", "[2018-07-13 20:48:29,104] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:48:28,860 p=5867 u=mistral | TASK [Output for NovaComputeDeployment] **************************************** >2018-07-13 20:48:28,861 p=5867 u=mistral | Friday 13 July 2018 20:48:28 -0400 (0:00:00.979) 0:01:52.049 *********** >2018-07-13 20:48:28,909 p=5867 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:48:28,533] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/74851bed-3b64-4715-8f01-98803a7b6bd1.json", > "[2018-07-13 20:48:28,661] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-07-13 20:48:28,661] (heat-config) [DEBUG] ", > "[2018-07-13 20:48:28,661] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-07-13 20:48:28,662] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/74851bed-3b64-4715-8f01-98803a7b6bd1.json < /var/lib/heat-config/deployed/74851bed-3b64-4715-8f01-98803a7b6bd1.notify.json", > "[2018-07-13 20:48:29,104] (heat-config) [INFO] ", > "[2018-07-13 20:48:29,104] 
(heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:48:28,930 p=5867 u=mistral | TASK [Check-mode for Run deployment NovaComputeDeployment] ********************* >2018-07-13 20:48:28,930 p=5867 u=mistral | Friday 13 July 2018 20:48:28 -0400 (0:00:00.069) 0:01:52.118 *********** >2018-07-13 20:48:28,944 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:28,963 p=5867 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:48:28,964 p=5867 u=mistral | Friday 13 July 2018 20:48:28 -0400 (0:00:00.033) 0:01:52.152 *********** >2018-07-13 20:48:29,019 p=5867 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "f1e46d8c-3d0a-4681-98a5-fa0883d0b7db"}, "changed": false} >2018-07-13 20:48:29,039 p=5867 u=mistral | TASK [Render deployment file for ComputeHostsDeployment] *********************** >2018-07-13 20:48:29,039 p=5867 u=mistral | Friday 13 July 2018 20:48:29 -0400 (0:00:00.075) 0:01:52.227 *********** >2018-07-13 20:48:29,675 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "f2a0d05a6aa37a7e61347334a56c6736cd657bd6", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeHostsDeployment-f1e46d8c-3d0a-4681-98a5-fa0883d0b7db", "gid": 0, "group": "root", "md5sum": "10e735be61556cd63739777758a7f50e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4423, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529309.1-200130454608821/source", "state": "file", "uid": 0} >2018-07-13 20:48:29,696 p=5867 u=mistral | TASK [Check if deployed file exists for ComputeHostsDeployment] **************** >2018-07-13 20:48:29,696 p=5867 u=mistral | Friday 13 July 2018 20:48:29 -0400 (0:00:00.656) 0:01:52.884 *********** >2018-07-13 20:48:30,043 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 
20:48:30,063 p=5867 u=mistral | TASK [Check previous deployment rc for ComputeHostsDeployment] ***************** >2018-07-13 20:48:30,063 p=5867 u=mistral | Friday 13 July 2018 20:48:30 -0400 (0:00:00.367) 0:01:53.251 *********** >2018-07-13 20:48:30,082 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:30,102 p=5867 u=mistral | TASK [Remove deployed file for ComputeHostsDeployment when previous deployment failed] *** >2018-07-13 20:48:30,102 p=5867 u=mistral | Friday 13 July 2018 20:48:30 -0400 (0:00:00.038) 0:01:53.290 *********** >2018-07-13 20:48:30,120 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:30,139 p=5867 u=mistral | TASK [Force remove deployed file for ComputeHostsDeployment] ******************* >2018-07-13 20:48:30,139 p=5867 u=mistral | Friday 13 July 2018 20:48:30 -0400 (0:00:00.037) 0:01:53.327 *********** >2018-07-13 20:48:30,157 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:30,178 p=5867 u=mistral | TASK [Run deployment ComputeHostsDeployment] *********************************** >2018-07-13 20:48:30,179 p=5867 u=mistral | Friday 13 July 2018 20:48:30 -0400 (0:00:00.039) 0:01:53.367 *********** >2018-07-13 20:48:31,082 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/f1e46d8c-3d0a-4681-98a5-fa0883d0b7db.notify.json)", "delta": "0:00:00.525573", "end": "2018-07-13 20:48:31.324399", "rc": 0, "start": "2018-07-13 20:48:30.798826", "stderr": "[2018-07-13 20:48:30,824] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f1e46d8c-3d0a-4681-98a5-fa0883d0b7db.json\n[2018-07-13 20:48:30,877] (heat-config) [INFO] {\"deploy_stdout\": 
\"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 
overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 
overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.10 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}\n[2018-07-13 20:48:30,877] (heat-config) [DEBUG] [2018-07-13 20:48:30,846] (heat-config) [INFO] hosts=192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-07-13 20:48:30,846] (heat-config) [INFO] 
deploy_server_id=99a8e115-a0a1-4b89-8099-f4376943e467\n[2018-07-13 20:48:30,846] (heat-config) [INFO] deploy_action=CREATE\n[2018-07-13 20:48:30,846] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-43brg3qxcgll-0-twuw7jtrlra7/6a7e6697-0b1e-4f1b-9f2f-56a575483282\n[2018-07-13 20:48:30,846] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-07-13 20:48:30,846] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-07-13 20:48:30,847] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f1e46d8c-3d0a-4681-98a5-fa0883d0b7db\n[2018-07-13 20:48:30,873] (heat-config) [INFO] \n[2018-07-13 20:48:30,873] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain 
compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 
ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain 
ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 
'entries=192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.10 
overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries /etc/hosts '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 
overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n++ hostname -s\n+ sed -i /compute-0/d /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-07-13 20:48:30,873] 
(heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f1e46d8c-3d0a-4681-98a5-fa0883d0b7db\n\n[2018-07-13 20:48:30,877] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-07-13 20:48:30,878] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f1e46d8c-3d0a-4681-98a5-fa0883d0b7db.json < /var/lib/heat-config/deployed/f1e46d8c-3d0a-4681-98a5-fa0883d0b7db.notify.json\n[2018-07-13 20:48:31,317] (heat-config) [INFO] \n[2018-07-13 20:48:31,317] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:48:30,824] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f1e46d8c-3d0a-4681-98a5-fa0883d0b7db.json", "[2018-07-13 20:48:30,877] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain 
compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 
compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain 
ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 
ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.10 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", "[2018-07-13 20:48:30,877] (heat-config) [DEBUG] [2018-07-13 20:48:30,846] (heat-config) [INFO] hosts=192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-07-13 20:48:30,846] (heat-config) 
[INFO] deploy_server_id=99a8e115-a0a1-4b89-8099-f4376943e467", "[2018-07-13 20:48:30,846] (heat-config) [INFO] deploy_action=CREATE", "[2018-07-13 20:48:30,846] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-43brg3qxcgll-0-twuw7jtrlra7/6a7e6697-0b1e-4f1b-9f2f-56a575483282", "[2018-07-13 20:48:30,846] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-07-13 20:48:30,846] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-07-13 20:48:30,847] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f1e46d8c-3d0a-4681-98a5-fa0883d0b7db", "[2018-07-13 20:48:30,873] (heat-config) [INFO] ", "[2018-07-13 20:48:30,873] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", 
"192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 
ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "++ hostname -s", "+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain 
ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain 
ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "++ hostname -s", "+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", 
"172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain 
ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "++ hostname -s", "+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", 
"192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain 
ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain 
ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "++ hostname -s", "+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 
ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "++ hostname -s", "+ sed -i /compute-0/d /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "", "[2018-07-13 20:48:30,873] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f1e46d8c-3d0a-4681-98a5-fa0883d0b7db", "", "[2018-07-13 20:48:30,877] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-07-13 20:48:30,878] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f1e46d8c-3d0a-4681-98a5-fa0883d0b7db.json < /var/lib/heat-config/deployed/f1e46d8c-3d0a-4681-98a5-fa0883d0b7db.notify.json", "[2018-07-13 20:48:31,317] (heat-config) [INFO] ", "[2018-07-13 20:48:31,317] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:48:31,120 p=5867 u=mistral | TASK [Output for ComputeHostsDeployment] *************************************** >2018-07-13 20:48:31,120 p=5867 u=mistral | Friday 13 July 2018 20:48:31 -0400 (0:00:00.941) 0:01:54.308 *********** >2018-07-13 20:48:31,201 p=5867 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:48:30,824] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f1e46d8c-3d0a-4681-98a5-fa0883d0b7db.json", > "[2018-07-13 20:48:30,877] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 
overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain 
controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.10 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", > "[2018-07-13 20:48:30,877] (heat-config) [DEBUG] [2018-07-13 20:48:30,846] (heat-config) [INFO] hosts=192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 
ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-07-13 20:48:30,846] (heat-config) [INFO] deploy_server_id=99a8e115-a0a1-4b89-8099-f4376943e467", > "[2018-07-13 20:48:30,846] (heat-config) [INFO] deploy_action=CREATE", > "[2018-07-13 20:48:30,846] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-43brg3qxcgll-0-twuw7jtrlra7/6a7e6697-0b1e-4f1b-9f2f-56a575483282", > "[2018-07-13 20:48:30,846] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-07-13 20:48:30,846] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-07-13 20:48:30,847] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f1e46d8c-3d0a-4681-98a5-fa0883d0b7db", > "[2018-07-13 20:48:30,873] (heat-config) [INFO] ", > "[2018-07-13 20:48:30,873] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > 
"192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 
compute-0.external.localdomain compute-0.external",
> "192.168.24.8 compute-0.management.localdomain compute-0.management",
> "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane",
> "",
> "",
> "",
> "172.17.3.21 ceph-0.localdomain ceph-0",
> "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage",
> "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt",
> "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi",
> "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant",
> "192.168.24.12 ceph-0.external.localdomain ceph-0.external",
> "192.168.24.12 ceph-0.management.localdomain ceph-0.management",
> "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'",
> "+ local file=/etc/cloud/templates/hosts.debian.tmpl",
> "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain",
> "172.17.3.18 overcloud.storage.localdomain",
> "172.17.4.16 overcloud.storagemgmt.localdomain",
> "172.17.1.14 overcloud.internalapi.localdomain",
> "10.0.0.108 overcloud.localdomain",
> "172.17.1.19 controller-0.localdomain controller-0",
> "172.17.3.20 controller-0.storage.localdomain controller-0.storage",
> "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt",
> "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi",
> "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant",
> "10.0.0.106 controller-0.external.localdomain controller-0.external",
> "192.168.24.7 controller-0.management.localdomain controller-0.management",
> "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane",
> "",
> "172.17.1.12 compute-0.localdomain compute-0",
> "172.17.3.17 compute-0.storage.localdomain compute-0.storage",
> "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt",
> "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi",
> "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant",
> "192.168.24.8 compute-0.external.localdomain compute-0.external",
> "192.168.24.8 compute-0.management.localdomain compute-0.management",
> "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane",
> "",
> "",
> "",
> "172.17.3.21 ceph-0.localdomain ceph-0",
> "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage",
> "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt",
> "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi",
> "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant",
> "192.168.24.12 ceph-0.external.localdomain ceph-0.external",
> "192.168.24.12 ceph-0.management.localdomain ceph-0.management",
> "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'",
> "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'",
> "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl",
> "++ hostname -s",
> "+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl",
> "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'",
> "+ echo '192.168.24.10 overcloud.ctlplane.localdomain",
> "172.17.3.18 overcloud.storage.localdomain",
> "172.17.4.16 overcloud.storagemgmt.localdomain",
> "172.17.1.14 overcloud.internalapi.localdomain",
> "10.0.0.108 overcloud.localdomain",
> "172.17.1.19 controller-0.localdomain controller-0",
> "172.17.3.20 controller-0.storage.localdomain controller-0.storage",
> "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt",
> "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi",
> "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant",
> "10.0.0.106 controller-0.external.localdomain controller-0.external",
> "192.168.24.7 controller-0.management.localdomain controller-0.management",
> "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane",
> "",
> "172.17.1.12 compute-0.localdomain compute-0",
> "172.17.3.17 compute-0.storage.localdomain compute-0.storage",
> "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt",
> "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi",
> "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant",
> "192.168.24.8 compute-0.external.localdomain compute-0.external",
> "192.168.24.8 compute-0.management.localdomain compute-0.management",
> "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane",
> "",
> "",
> "",
> "172.17.3.21 ceph-0.localdomain ceph-0",
> "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage",
> "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt",
> "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi",
> "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant",
> "192.168.24.12 ceph-0.external.localdomain ceph-0.external",
> "192.168.24.12 ceph-0.management.localdomain ceph-0.management",
> "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'",
> "+ echo -ne '# HEAT_HOSTS_END\\n\\n'",
> "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'",
> "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.10 overcloud.ctlplane.localdomain",
> "172.17.3.18 overcloud.storage.localdomain",
> "172.17.4.16 overcloud.storagemgmt.localdomain",
> "172.17.1.14 overcloud.internalapi.localdomain",
> "10.0.0.108 overcloud.localdomain",
> "172.17.1.19 controller-0.localdomain controller-0",
> "172.17.3.20 controller-0.storage.localdomain controller-0.storage",
> "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt",
> "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi",
> "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant",
> "10.0.0.106 controller-0.external.localdomain controller-0.external",
> "192.168.24.7 controller-0.management.localdomain controller-0.management",
> "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane",
> "",
> "172.17.1.12 compute-0.localdomain compute-0",
> "172.17.3.17 compute-0.storage.localdomain compute-0.storage",
> "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt",
> "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi",
> "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant",
> "192.168.24.8 compute-0.external.localdomain compute-0.external",
> "192.168.24.8 compute-0.management.localdomain compute-0.management",
> "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane",
> "",
> "",
> "",
> "172.17.3.21 ceph-0.localdomain ceph-0",
> "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage",
> "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt",
> "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi",
> "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant",
> "192.168.24.12 ceph-0.external.localdomain ceph-0.external",
> "192.168.24.12 ceph-0.management.localdomain ceph-0.management",
> "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'",
> "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl",
> "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain",
> "172.17.3.18 overcloud.storage.localdomain",
> "172.17.4.16 overcloud.storagemgmt.localdomain",
> "172.17.1.14 overcloud.internalapi.localdomain",
> "10.0.0.108 overcloud.localdomain",
> "172.17.1.19 controller-0.localdomain controller-0",
> "172.17.3.20 controller-0.storage.localdomain controller-0.storage",
> "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt",
> "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi",
> "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant",
> "10.0.0.106 controller-0.external.localdomain controller-0.external",
> "192.168.24.7 controller-0.management.localdomain controller-0.management",
> "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane",
> "",
> "172.17.1.12 compute-0.localdomain compute-0",
> "172.17.3.17 compute-0.storage.localdomain compute-0.storage",
> "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt",
> "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi",
> "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant",
> "192.168.24.8 compute-0.external.localdomain compute-0.external",
> "192.168.24.8 compute-0.management.localdomain compute-0.management",
> "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane",
> "",
> "",
> "",
> "172.17.3.21 ceph-0.localdomain ceph-0",
> "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage",
> "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt",
> "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi",
> "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant",
> "192.168.24.12 ceph-0.external.localdomain ceph-0.external",
> "192.168.24.12 ceph-0.management.localdomain ceph-0.management",
> "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'",
> "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'",
> "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl",
> "++ hostname -s",
> "+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl",
> "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'",
> "+ echo '192.168.24.10 overcloud.ctlplane.localdomain",
> "172.17.3.18 overcloud.storage.localdomain",
> "172.17.4.16 overcloud.storagemgmt.localdomain",
> "172.17.1.14 overcloud.internalapi.localdomain",
> "10.0.0.108 overcloud.localdomain",
> "172.17.1.19 controller-0.localdomain controller-0",
> "172.17.3.20 controller-0.storage.localdomain controller-0.storage",
> "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt",
> "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi",
> "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant",
> "10.0.0.106 controller-0.external.localdomain controller-0.external",
> "192.168.24.7 controller-0.management.localdomain controller-0.management",
> "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane",
> "",
> "172.17.1.12 compute-0.localdomain compute-0",
> "172.17.3.17 compute-0.storage.localdomain compute-0.storage",
> "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt",
> "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi",
> "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant",
> "192.168.24.8 compute-0.external.localdomain compute-0.external",
> "192.168.24.8 compute-0.management.localdomain compute-0.management",
> "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane",
> "",
> "",
> "",
> "172.17.3.21 ceph-0.localdomain ceph-0",
> "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage",
> "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt",
> "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi",
> "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant",
> "192.168.24.12 ceph-0.external.localdomain ceph-0.external",
> "192.168.24.12 ceph-0.management.localdomain ceph-0.management",
> "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'",
> "+ echo -ne '# HEAT_HOSTS_END\\n\\n'",
> "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'",
> "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.10 overcloud.ctlplane.localdomain",
> "172.17.3.18 overcloud.storage.localdomain",
> "172.17.4.16 overcloud.storagemgmt.localdomain",
> "172.17.1.14 overcloud.internalapi.localdomain",
> "10.0.0.108 overcloud.localdomain",
> "172.17.1.19 controller-0.localdomain controller-0",
> "172.17.3.20 controller-0.storage.localdomain controller-0.storage",
> "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt",
> "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi",
> "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant",
> "10.0.0.106 controller-0.external.localdomain controller-0.external",
> "192.168.24.7 controller-0.management.localdomain controller-0.management",
> "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane",
> "",
> "172.17.1.12 compute-0.localdomain compute-0",
> "172.17.3.17 compute-0.storage.localdomain compute-0.storage",
> "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt",
> "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi",
> "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant",
> "192.168.24.8 compute-0.external.localdomain compute-0.external",
> "192.168.24.8 compute-0.management.localdomain compute-0.management",
> "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane",
> "",
> "",
> "",
> "172.17.3.21 ceph-0.localdomain ceph-0",
> "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage",
> "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt",
> "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi",
> "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant",
> "192.168.24.12 ceph-0.external.localdomain ceph-0.external",
> "192.168.24.12 ceph-0.management.localdomain ceph-0.management",
> "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'",
> "+ local file=/etc/cloud/templates/hosts.redhat.tmpl",
> "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain",
> "172.17.3.18 overcloud.storage.localdomain",
> "172.17.4.16 overcloud.storagemgmt.localdomain",
> "172.17.1.14 overcloud.internalapi.localdomain",
> "10.0.0.108 overcloud.localdomain",
> "172.17.1.19 controller-0.localdomain controller-0",
> "172.17.3.20 controller-0.storage.localdomain controller-0.storage",
> "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt",
> "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi",
> "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant",
> "10.0.0.106 controller-0.external.localdomain controller-0.external",
> "192.168.24.7 controller-0.management.localdomain controller-0.management",
> "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane",
> "",
> "172.17.1.12 compute-0.localdomain compute-0",
> "172.17.3.17 compute-0.storage.localdomain compute-0.storage",
> "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt",
> "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi",
> "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant",
> "192.168.24.8 compute-0.external.localdomain compute-0.external",
> "192.168.24.8 compute-0.management.localdomain compute-0.management",
> "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane",
> "",
> "",
> "",
> "172.17.3.21 ceph-0.localdomain ceph-0",
> "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage",
> "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt",
> "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi",
> "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant",
> "192.168.24.12 ceph-0.external.localdomain ceph-0.external",
> "192.168.24.12 ceph-0.management.localdomain ceph-0.management",
> "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'",
> "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'",
> "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl",
> "++ hostname -s",
> "+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl",
> "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'",
> "+ echo '192.168.24.10 overcloud.ctlplane.localdomain",
> "172.17.3.18 overcloud.storage.localdomain",
> "172.17.4.16 overcloud.storagemgmt.localdomain",
> "172.17.1.14 overcloud.internalapi.localdomain",
> "10.0.0.108 overcloud.localdomain",
> "172.17.1.19 controller-0.localdomain controller-0",
> "172.17.3.20 controller-0.storage.localdomain controller-0.storage",
> "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt",
> "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi",
> "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant",
> "10.0.0.106 controller-0.external.localdomain controller-0.external",
> "192.168.24.7 controller-0.management.localdomain controller-0.management",
> "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane",
> "",
> "172.17.1.12 compute-0.localdomain compute-0",
> "172.17.3.17 compute-0.storage.localdomain compute-0.storage",
> "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt",
> "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi",
> "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant",
> "192.168.24.8 compute-0.external.localdomain compute-0.external",
> "192.168.24.8 compute-0.management.localdomain compute-0.management",
> "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane",
> "",
> "",
> "",
> "172.17.3.21 ceph-0.localdomain ceph-0",
> "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage",
> "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt",
> "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi",
> "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant",
> "192.168.24.12 ceph-0.external.localdomain ceph-0.external",
> "192.168.24.12 ceph-0.management.localdomain ceph-0.management",
> "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'",
> "+ echo -ne '# HEAT_HOSTS_END\\n\\n'",
> "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'",
> "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.10 overcloud.ctlplane.localdomain",
> "172.17.3.18 overcloud.storage.localdomain",
> "172.17.4.16 overcloud.storagemgmt.localdomain",
> "172.17.1.14 overcloud.internalapi.localdomain",
> "10.0.0.108 overcloud.localdomain",
> "172.17.1.19 controller-0.localdomain controller-0",
> "172.17.3.20 controller-0.storage.localdomain controller-0.storage",
> "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt",
> "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi",
> "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant",
> "10.0.0.106 controller-0.external.localdomain controller-0.external",
> "192.168.24.7 controller-0.management.localdomain controller-0.management",
> "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane",
> "",
> "172.17.1.12 compute-0.localdomain compute-0",
> "172.17.3.17 compute-0.storage.localdomain compute-0.storage",
> "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt",
> "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi",
> "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant",
> "192.168.24.8 compute-0.external.localdomain compute-0.external",
> "192.168.24.8 compute-0.management.localdomain compute-0.management",
> "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane",
> "",
> "",
> "",
> "172.17.3.21 ceph-0.localdomain ceph-0",
> "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage",
> "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt",
> "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi",
> "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant",
> "192.168.24.12 ceph-0.external.localdomain ceph-0.external",
> "192.168.24.12 ceph-0.management.localdomain ceph-0.management",
> "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'",
> "+ local file=/etc/cloud/templates/hosts.suse.tmpl",
> "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain",
> "172.17.3.18 overcloud.storage.localdomain",
> "172.17.4.16 overcloud.storagemgmt.localdomain",
> "172.17.1.14 overcloud.internalapi.localdomain",
> "10.0.0.108 overcloud.localdomain",
> "172.17.1.19 controller-0.localdomain controller-0",
> "172.17.3.20 controller-0.storage.localdomain controller-0.storage",
> "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt",
> "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi",
> "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant",
> "10.0.0.106 controller-0.external.localdomain controller-0.external",
> "192.168.24.7 controller-0.management.localdomain controller-0.management",
> "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane",
> "",
> "172.17.1.12 compute-0.localdomain compute-0",
> "172.17.3.17 compute-0.storage.localdomain compute-0.storage",
> "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt",
> "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi",
> "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant",
> "192.168.24.8 compute-0.external.localdomain compute-0.external",
> "192.168.24.8 compute-0.management.localdomain compute-0.management",
> "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane",
> "",
> "",
> "",
> "172.17.3.21 ceph-0.localdomain ceph-0",
> "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage",
> "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt",
> "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi",
> "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant",
> "192.168.24.12 ceph-0.external.localdomain ceph-0.external",
> "192.168.24.12 ceph-0.management.localdomain ceph-0.management",
> "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'",
> "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'",
> "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl",
> "++ hostname -s",
> "+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl",
> "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'",
> "+ echo '192.168.24.10 overcloud.ctlplane.localdomain",
> "172.17.3.18 overcloud.storage.localdomain",
> "172.17.4.16 overcloud.storagemgmt.localdomain",
> "172.17.1.14 overcloud.internalapi.localdomain",
> "10.0.0.108 overcloud.localdomain",
> "172.17.1.19 controller-0.localdomain controller-0",
> "172.17.3.20 controller-0.storage.localdomain controller-0.storage",
> "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt",
> "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi",
> "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant",
> "10.0.0.106 controller-0.external.localdomain controller-0.external",
> "192.168.24.7 controller-0.management.localdomain controller-0.management",
> "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane",
> "",
> "172.17.1.12 compute-0.localdomain compute-0",
> "172.17.3.17 compute-0.storage.localdomain compute-0.storage",
> "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt",
> "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi",
> "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant",
> "192.168.24.8 compute-0.external.localdomain compute-0.external",
> "192.168.24.8 compute-0.management.localdomain compute-0.management",
> "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane",
> "",
> "",
> "",
> "172.17.3.21 ceph-0.localdomain ceph-0",
> "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage",
> "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt",
> "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi",
> "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant",
> "192.168.24.12 ceph-0.external.localdomain ceph-0.external",
> "192.168.24.12 ceph-0.management.localdomain ceph-0.management",
> "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'",
> "+ echo -ne '# HEAT_HOSTS_END\\n\\n'",
> "+ write_entries /etc/hosts '192.168.24.10 overcloud.ctlplane.localdomain",
> "172.17.3.18 overcloud.storage.localdomain",
> "172.17.4.16 overcloud.storagemgmt.localdomain",
> "172.17.1.14 overcloud.internalapi.localdomain",
> "10.0.0.108 overcloud.localdomain",
> "172.17.1.19 controller-0.localdomain controller-0",
> "172.17.3.20 controller-0.storage.localdomain controller-0.storage",
> "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt",
> "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi",
> "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant",
> "10.0.0.106 controller-0.external.localdomain controller-0.external",
> "192.168.24.7 controller-0.management.localdomain controller-0.management",
> "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane",
> "",
> "172.17.1.12 compute-0.localdomain compute-0",
> "172.17.3.17 compute-0.storage.localdomain compute-0.storage",
> "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt",
> "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi",
> "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant",
> "192.168.24.8 compute-0.external.localdomain compute-0.external",
> "192.168.24.8 compute-0.management.localdomain compute-0.management",
> "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane",
> "",
> "",
> "",
> "172.17.3.21 ceph-0.localdomain ceph-0",
> "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage",
> "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt",
> "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi",
> "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant",
> "192.168.24.12 ceph-0.external.localdomain ceph-0.external",
> "192.168.24.12 ceph-0.management.localdomain ceph-0.management",
> "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'",
> "+ local file=/etc/hosts",
> "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain",
> "172.17.3.18 overcloud.storage.localdomain",
> "172.17.4.16 overcloud.storagemgmt.localdomain",
> "172.17.1.14 overcloud.internalapi.localdomain",
> "10.0.0.108 overcloud.localdomain",
> "172.17.1.19 controller-0.localdomain controller-0",
> "172.17.3.20 controller-0.storage.localdomain controller-0.storage",
> "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt",
> "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi",
> "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant",
> "10.0.0.106 controller-0.external.localdomain controller-0.external",
> "192.168.24.7 controller-0.management.localdomain controller-0.management",
> "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane",
> "",
> "172.17.1.12 compute-0.localdomain compute-0",
> "172.17.3.17 compute-0.storage.localdomain compute-0.storage",
> "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt",
> "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi",
> "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant",
> "192.168.24.8 compute-0.external.localdomain compute-0.external",
> "192.168.24.8 compute-0.management.localdomain compute-0.management",
> "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane",
> "",
> "",
> "",
> "172.17.3.21 ceph-0.localdomain ceph-0",
> "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage",
> "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt",
> "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi",
> "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant",
> "192.168.24.12 ceph-0.external.localdomain ceph-0.external",
> "192.168.24.12 ceph-0.management.localdomain ceph-0.management",
> "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'",
> "+ '[' '!' -f /etc/hosts ']'",
> "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts",
> "++ hostname -s",
> "+ sed -i /compute-0/d /etc/hosts",
> "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'",
> "+ echo '192.168.24.10 overcloud.ctlplane.localdomain",
> "172.17.3.18 overcloud.storage.localdomain",
> "172.17.4.16 overcloud.storagemgmt.localdomain",
> "172.17.1.14 overcloud.internalapi.localdomain",
> "10.0.0.108 overcloud.localdomain",
> "172.17.1.19 controller-0.localdomain controller-0",
> "172.17.3.20 controller-0.storage.localdomain controller-0.storage",
> "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt",
> "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi",
> "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant",
> "10.0.0.106 controller-0.external.localdomain controller-0.external",
> "192.168.24.7 controller-0.management.localdomain controller-0.management",
> "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane",
> "",
> "172.17.1.12 compute-0.localdomain compute-0",
> "172.17.3.17 compute-0.storage.localdomain compute-0.storage",
> "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt",
> "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi",
> "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant",
> "192.168.24.8 compute-0.external.localdomain compute-0.external",
> "192.168.24.8 compute-0.management.localdomain compute-0.management",
> "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane",
> "",
> "",
> "",
> "172.17.3.21 ceph-0.localdomain ceph-0",
> "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage",
> "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt",
> "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi",
> "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant",
> "192.168.24.12 ceph-0.external.localdomain ceph-0.external",
> "192.168.24.12 ceph-0.management.localdomain ceph-0.management",
> "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'",
> "+ echo -ne '# HEAT_HOSTS_END\\n\\n'",
> "",
> "[2018-07-13 20:48:30,873] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f1e46d8c-3d0a-4681-98a5-fa0883d0b7db",
> "",
> "[2018-07-13 20:48:30,877] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script",
> "[2018-07-13 20:48:30,878] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f1e46d8c-3d0a-4681-98a5-fa0883d0b7db.json < /var/lib/heat-config/deployed/f1e46d8c-3d0a-4681-98a5-fa0883d0b7db.notify.json",
> "[2018-07-13 20:48:31,317] (heat-config) [INFO] ",
> "[2018-07-13 20:48:31,317] (heat-config) [DEBUG] "
> ]
> },
> {
> "status_code": "0"
> }
> ]
>}
>2018-07-13 20:48:31,236 p=5867 u=mistral | TASK [Check-mode for Run deployment ComputeHostsDeployment] ********************
>2018-07-13 20:48:31,236 p=5867 u=mistral | Friday 13 July 2018 20:48:31 -0400 (0:00:00.116) 0:01:54.424 ***********
>2018-07-13 20:48:31,252 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:48:31,270 p=5867 u=mistral | TASK [Lookup deployment UUID] **************************************************
>2018-07-13 20:48:31,270 p=5867 u=mistral | Friday 13 July 2018 20:48:31 -0400 (0:00:00.034) 0:01:54.458 ***********
>2018-07-13 20:48:31,404 p=5867 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "b8b2a622-476c-472a-b5b3-f5353755c217"}, "changed": false}
>2018-07-13 20:48:31,424 p=5867 u=mistral | TASK [Render deployment file for ComputeAllNodesDeployment] ********************
>2018-07-13 20:48:31,425 p=5867 u=mistral | Friday 13 July 2018 20:48:31 -0400 (0:00:00.154) 0:01:54.612 ***********
>2018-07-13 20:48:32,136 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "f74dd94001038fa1e17a19fcc77f5e8b3061caab", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeAllNodesDeployment-b8b2a622-476c-472a-b5b3-f5353755c217", "gid": 0, "group": "root", "md5sum": "6e154f4e24239a2f72c25748e7090181", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19020, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529311.57-68820246651346/source", "state": "file", "uid": 0}
>2018-07-13 20:48:32,155 p=5867 u=mistral | TASK [Check if deployed file exists for ComputeAllNodesDeployment] *************
>2018-07-13 20:48:32,155 p=5867 u=mistral | Friday 13 July 2018 20:48:32 -0400 (0:00:00.730) 0:01:55.343 ***********
>2018-07-13 20:48:32,492 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}}
>2018-07-13 20:48:32,513 p=5867 u=mistral | TASK [Check previous deployment rc for ComputeAllNodesDeployment] **************
>2018-07-13 20:48:32,513 p=5867 u=mistral | Friday 13 July 2018 20:48:32 -0400 (0:00:00.358) 0:01:55.701 ***********
>2018-07-13 20:48:32,531 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:48:32,551 p=5867 u=mistral | TASK [Remove deployed file for ComputeAllNodesDeployment when previous deployment failed] ***
>2018-07-13 20:48:32,551 p=5867 u=mistral | Friday 13 July 2018 20:48:32 -0400 (0:00:00.038) 0:01:55.739 ***********
>2018-07-13 20:48:32,568 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:48:32,588 p=5867 u=mistral | TASK [Force remove deployed file for ComputeAllNodesDeployment] ****************
>2018-07-13 20:48:32,588 p=5867 u=mistral | Friday 13 July 2018 20:48:32 -0400 (0:00:00.036) 0:01:55.776 ***********
>2018-07-13 20:48:32,604 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:48:32,624 p=5867 u=mistral | TASK [Run deployment ComputeAllNodesDeployment] ********************************
>2018-07-13 20:48:32,624 p=5867 u=mistral | Friday 13 July 2018 20:48:32 -0400 (0:00:00.035) 0:01:55.812 ***********
>2018-07-13 20:48:33,523 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/b8b2a622-476c-472a-b5b3-f5353755c217.notify.json)", "delta": "0:00:00.558583", "end": "2018-07-13 20:48:33.792775", "rc": 0, "start": "2018-07-13 20:48:33.234192", "stderr": "[2018-07-13 20:48:33,260] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/b8b2a622-476c-472a-b5b3-f5353755c217.json\n[2018-07-13 20:48:33,381] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-07-13 20:48:33,381] (heat-config) [DEBUG] \n[2018-07-13 20:48:33,381] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-07-13 20:48:33,382] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b8b2a622-476c-472a-b5b3-f5353755c217.json < /var/lib/heat-config/deployed/b8b2a622-476c-472a-b5b3-f5353755c217.notify.json\n[2018-07-13 20:48:33,786] (heat-config) [INFO] \n[2018-07-13 20:48:33,786] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:48:33,260] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/b8b2a622-476c-472a-b5b3-f5353755c217.json", "[2018-07-13 20:48:33,381] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-07-13 20:48:33,381] (heat-config) [DEBUG] ", "[2018-07-13 20:48:33,381] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-07-13 20:48:33,382] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b8b2a622-476c-472a-b5b3-f5353755c217.json < /var/lib/heat-config/deployed/b8b2a622-476c-472a-b5b3-f5353755c217.notify.json", "[2018-07-13 20:48:33,786] (heat-config) [INFO] ", "[2018-07-13 20:48:33,786] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []}
>2018-07-13 20:48:33,545 p=5867 u=mistral | TASK [Output for ComputeAllNodesDeployment] ************************************
>2018-07-13 20:48:33,546 p=5867 u=mistral | Friday 13 July 2018 20:48:33 -0400 (0:00:00.071) 0:01:56.733 ***********
>2018-07-13 20:48:33,596 p=5867 u=mistral | ok: [compute-0] => {
> "failed_when_result": false,
> "msg": [
> {
> "stderr": [
> "[2018-07-13 20:48:33,260] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/b8b2a622-476c-472a-b5b3-f5353755c217.json",
> "[2018-07-13 20:48:33,381] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}",
> "[2018-07-13 20:48:33,381] (heat-config) [DEBUG] ",
> "[2018-07-13 20:48:33,381] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera",
> "[2018-07-13 20:48:33,382] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b8b2a622-476c-472a-b5b3-f5353755c217.json < /var/lib/heat-config/deployed/b8b2a622-476c-472a-b5b3-f5353755c217.notify.json",
> "[2018-07-13 20:48:33,786] (heat-config) [INFO] ",
> "[2018-07-13 20:48:33,786] (heat-config) [DEBUG] "
> ]
> },
> {
> "status_code": "0"
> }
> ]
>}
>2018-07-13 20:48:33,617 p=5867 u=mistral | TASK [Check-mode for Run deployment ComputeAllNodesDeployment] *****************
>2018-07-13 20:48:33,617 p=5867 u=mistral | Friday 13 July 2018 20:48:33 -0400 (0:00:00.037) 0:01:56.805 ***********
>2018-07-13 20:48:33,634 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:48:33,655 p=5867 u=mistral | TASK [Lookup deployment UUID] **************************************************
>2018-07-13 20:48:33,655 p=5867 u=mistral | Friday 13 July 2018 20:48:33 -0400 (0:00:00.037) 0:01:56.843 ***********
>2018-07-13 20:48:33,715 p=5867 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "756b7ef6-e58e-4f79-9df5-bb4a9fca6790"}, "changed": false}
>2018-07-13 20:48:33,736 p=5867 u=mistral | TASK [Render deployment file for ComputeAllNodesValidationDeployment] **********
>2018-07-13 20:48:33,737 p=5867 u=mistral | Friday 13 July 2018 20:48:33 -0400 (0:00:00.081) 0:01:56.925 ***********
>2018-07-13 20:48:34,367 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "a27a664a9e4d6b706443908690bf8e3a5e578a81", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeAllNodesValidationDeployment-756b7ef6-e58e-4f79-9df5-bb4a9fca6790", "gid": 0, "group": "root", "md5sum": "f42b0c4dc95b01bc2c706bf134ad1dec", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4934, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529313.8-280884490839648/source", "state": "file", "uid": 0}
>2018-07-13 20:48:34,387 p=5867 u=mistral | TASK [Check if deployed file exists for ComputeAllNodesValidationDeployment] ***
>2018-07-13 20:48:34,388 p=5867 u=mistral | Friday 13 July 2018 20:48:34 -0400 (0:00:00.651) 0:01:57.576 ***********
>2018-07-13 20:48:34,739 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}}
>2018-07-13 20:48:34,761 p=5867 u=mistral | TASK [Check previous deployment rc for ComputeAllNodesValidationDeployment] ****
>2018-07-13 20:48:34,761 p=5867 u=mistral | Friday 13 July 2018 20:48:34 -0400 (0:00:00.373) 0:01:57.949 ***********
>2018-07-13 20:48:34,781 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:48:34,803 p=5867 u=mistral | TASK [Remove deployed file for ComputeAllNodesValidationDeployment when previous deployment failed] ***
>2018-07-13 20:48:34,803 p=5867 u=mistral | Friday 13 July 2018 20:48:34 -0400 (0:00:00.041) 0:01:57.991 ***********
>2018-07-13 20:48:34,823 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:48:34,844 p=5867 u=mistral | TASK [Force remove deployed file for ComputeAllNodesValidationDeployment] ******
>2018-07-13 20:48:34,844 p=5867 u=mistral | Friday 13 July 2018 20:48:34 -0400 (0:00:00.041) 0:01:58.032 ***********
>2018-07-13 20:48:34,863 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:48:34,885 p=5867 u=mistral | TASK [Run deployment ComputeAllNodesValidationDeployment] **********************
>2018-07-13 20:48:34,886 p=5867 u=mistral | Friday 13 July 2018 20:48:34 -0400 (0:00:00.041) 0:01:58.074 ***********
>2018-07-13 20:48:36,309 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/756b7ef6-e58e-4f79-9df5-bb4a9fca6790.notify.json)", "delta": "0:00:01.074192", "end": "2018-07-13 20:48:36.580393", "rc": 0, "start": "2018-07-13 20:48:35.506201", "stderr": "[2018-07-13 20:48:35,528] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/756b7ef6-e58e-4f79-9df5-bb4a9fca6790.json\n[2018-07-13 20:48:36,111] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.19 for local network 172.17.1.0/24.\\nPing to 172.17.1.19 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\\nPing to 172.17.2.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.20 for local network 172.17.3.0/24.\\nPing to 172.17.3.20 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.7 for local network 192.168.24.0/24.\\nPing to 192.168.24.7 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-07-13 20:48:36,111] (heat-config) [DEBUG] 
[2018-07-13 20:48:35,552] (heat-config) [INFO] ping_test_ips=172.17.3.20 172.17.4.18 172.17.1.19 172.17.2.15 10.0.0.106 192.168.24.7\n[2018-07-13 20:48:35,552] (heat-config) [INFO] validate_fqdn=False\n[2018-07-13 20:48:35,553] (heat-config) [INFO] validate_ntp=True\n[2018-07-13 20:48:35,553] (heat-config) [INFO] deploy_server_id=99a8e115-a0a1-4b89-8099-f4376943e467\n[2018-07-13 20:48:35,553] (heat-config) [INFO] deploy_action=CREATE\n[2018-07-13 20:48:35,553] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-egha7pt4kvz6-0-ht3j3uuf6aeg/34d5286e-84d1-44fb-8ca4-cdc3928423ba\n[2018-07-13 20:48:35,553] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-07-13 20:48:35,553] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-07-13 20:48:35,553] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/756b7ef6-e58e-4f79-9df5-bb4a9fca6790\n[2018-07-13 20:48:36,106] (heat-config) [INFO] Trying to ping 172.17.1.19 for local network 172.17.1.0/24.\nPing to 172.17.1.19 succeeded.\nSUCCESS\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\nPing to 172.17.2.15 succeeded.\nSUCCESS\nTrying to ping 172.17.3.20 for local network 172.17.3.0/24.\nPing to 172.17.3.20 succeeded.\nSUCCESS\nTrying to ping 192.168.24.7 for local network 192.168.24.0/24.\nPing to 192.168.24.7 succeeded.\nSUCCESS\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\nSUCCESS\n\n[2018-07-13 20:48:36,106] (heat-config) [DEBUG] \n[2018-07-13 20:48:36,106] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/756b7ef6-e58e-4f79-9df5-bb4a9fca6790\n\n[2018-07-13 20:48:36,111] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-07-13 20:48:36,112] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/756b7ef6-e58e-4f79-9df5-bb4a9fca6790.json < /var/lib/heat-config/deployed/756b7ef6-e58e-4f79-9df5-bb4a9fca6790.notify.json\n[2018-07-13 
20:48:36,573] (heat-config) [INFO] \n[2018-07-13 20:48:36,573] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:48:35,528] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/756b7ef6-e58e-4f79-9df5-bb4a9fca6790.json", "[2018-07-13 20:48:36,111] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.19 for local network 172.17.1.0/24.\\nPing to 172.17.1.19 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\\nPing to 172.17.2.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.20 for local network 172.17.3.0/24.\\nPing to 172.17.3.20 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.7 for local network 192.168.24.0/24.\\nPing to 192.168.24.7 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-07-13 20:48:36,111] (heat-config) [DEBUG] [2018-07-13 20:48:35,552] (heat-config) [INFO] ping_test_ips=172.17.3.20 172.17.4.18 172.17.1.19 172.17.2.15 10.0.0.106 192.168.24.7", "[2018-07-13 20:48:35,552] (heat-config) [INFO] validate_fqdn=False", "[2018-07-13 20:48:35,553] (heat-config) [INFO] validate_ntp=True", "[2018-07-13 20:48:35,553] (heat-config) [INFO] deploy_server_id=99a8e115-a0a1-4b89-8099-f4376943e467", "[2018-07-13 20:48:35,553] (heat-config) [INFO] deploy_action=CREATE", "[2018-07-13 20:48:35,553] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-egha7pt4kvz6-0-ht3j3uuf6aeg/34d5286e-84d1-44fb-8ca4-cdc3928423ba", "[2018-07-13 20:48:35,553] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-07-13 20:48:35,553] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-07-13 20:48:35,553] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/756b7ef6-e58e-4f79-9df5-bb4a9fca6790", "[2018-07-13 20:48:36,106] (heat-config) [INFO] Trying to ping 172.17.1.19 for local 
network 172.17.1.0/24.", "Ping to 172.17.1.19 succeeded.", "SUCCESS", "Trying to ping 172.17.2.15 for local network 172.17.2.0/24.", "Ping to 172.17.2.15 succeeded.", "SUCCESS", "Trying to ping 172.17.3.20 for local network 172.17.3.0/24.", "Ping to 172.17.3.20 succeeded.", "SUCCESS", "Trying to ping 192.168.24.7 for local network 192.168.24.0/24.", "Ping to 192.168.24.7 succeeded.", "SUCCESS", "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", "SUCCESS", "", "[2018-07-13 20:48:36,106] (heat-config) [DEBUG] ", "[2018-07-13 20:48:36,106] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/756b7ef6-e58e-4f79-9df5-bb4a9fca6790", "", "[2018-07-13 20:48:36,111] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-07-13 20:48:36,112] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/756b7ef6-e58e-4f79-9df5-bb4a9fca6790.json < /var/lib/heat-config/deployed/756b7ef6-e58e-4f79-9df5-bb4a9fca6790.notify.json", "[2018-07-13 20:48:36,573] (heat-config) [INFO] ", "[2018-07-13 20:48:36,573] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:48:36,331 p=5867 u=mistral | TASK [Output for ComputeAllNodesValidationDeployment] ************************** >2018-07-13 20:48:36,331 p=5867 u=mistral | Friday 13 July 2018 20:48:36 -0400 (0:00:01.445) 0:01:59.519 *********** >2018-07-13 20:48:36,383 p=5867 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:48:35,528] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/756b7ef6-e58e-4f79-9df5-bb4a9fca6790.json", > "[2018-07-13 20:48:36,111] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.19 for local network 172.17.1.0/24.\\nPing to 172.17.1.19 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\\nPing to 172.17.2.15 succeeded.\\nSUCCESS\\nTrying to ping 
172.17.3.20 for local network 172.17.3.0/24.\\nPing to 172.17.3.20 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.7 for local network 192.168.24.0/24.\\nPing to 192.168.24.7 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-07-13 20:48:36,111] (heat-config) [DEBUG] [2018-07-13 20:48:35,552] (heat-config) [INFO] ping_test_ips=172.17.3.20 172.17.4.18 172.17.1.19 172.17.2.15 10.0.0.106 192.168.24.7", > "[2018-07-13 20:48:35,552] (heat-config) [INFO] validate_fqdn=False", > "[2018-07-13 20:48:35,553] (heat-config) [INFO] validate_ntp=True", > "[2018-07-13 20:48:35,553] (heat-config) [INFO] deploy_server_id=99a8e115-a0a1-4b89-8099-f4376943e467", > "[2018-07-13 20:48:35,553] (heat-config) [INFO] deploy_action=CREATE", > "[2018-07-13 20:48:35,553] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-egha7pt4kvz6-0-ht3j3uuf6aeg/34d5286e-84d1-44fb-8ca4-cdc3928423ba", > "[2018-07-13 20:48:35,553] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-07-13 20:48:35,553] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-07-13 20:48:35,553] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/756b7ef6-e58e-4f79-9df5-bb4a9fca6790", > "[2018-07-13 20:48:36,106] (heat-config) [INFO] Trying to ping 172.17.1.19 for local network 172.17.1.0/24.", > "Ping to 172.17.1.19 succeeded.", > "SUCCESS", > "Trying to ping 172.17.2.15 for local network 172.17.2.0/24.", > "Ping to 172.17.2.15 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.20 for local network 172.17.3.0/24.", > "Ping to 172.17.3.20 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.7 for local network 192.168.24.0/24.", > "Ping to 192.168.24.7 succeeded.", > "SUCCESS", > "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", > "SUCCESS", > "", > "[2018-07-13 20:48:36,106] 
(heat-config) [DEBUG] ", > "[2018-07-13 20:48:36,106] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/756b7ef6-e58e-4f79-9df5-bb4a9fca6790", > "", > "[2018-07-13 20:48:36,111] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-07-13 20:48:36,112] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/756b7ef6-e58e-4f79-9df5-bb4a9fca6790.json < /var/lib/heat-config/deployed/756b7ef6-e58e-4f79-9df5-bb4a9fca6790.notify.json", > "[2018-07-13 20:48:36,573] (heat-config) [INFO] ", > "[2018-07-13 20:48:36,573] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:48:36,404 p=5867 u=mistral | TASK [Check-mode for Run deployment ComputeAllNodesValidationDeployment] ******* >2018-07-13 20:48:36,404 p=5867 u=mistral | Friday 13 July 2018 20:48:36 -0400 (0:00:00.073) 0:01:59.592 *********** >2018-07-13 20:48:36,421 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:36,440 p=5867 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:48:36,440 p=5867 u=mistral | Friday 13 July 2018 20:48:36 -0400 (0:00:00.035) 0:01:59.628 *********** >2018-07-13 20:48:36,526 p=5867 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "0a1b0357-92aa-485e-9b9d-fedad034c50d"}, "changed": false} >2018-07-13 20:48:36,547 p=5867 u=mistral | TASK [Render deployment file for ComputeHostPrepDeployment] ******************** >2018-07-13 20:48:36,548 p=5867 u=mistral | Friday 13 July 2018 20:48:36 -0400 (0:00:00.107) 0:01:59.736 *********** >2018-07-13 20:48:37,253 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "f3a01f422f0a43e87049e588c87bd4b0182ebced", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeHostPrepDeployment-0a1b0357-92aa-485e-9b9d-fedad034c50d", "gid": 0, "group": "root", "md5sum": 
"816c5d0c05e75569a2ff4d07ca3bf9c1", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 34536, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529316.64-45198644833055/source", "state": "file", "uid": 0} >2018-07-13 20:48:37,275 p=5867 u=mistral | TASK [Check if deployed file exists for ComputeHostPrepDeployment] ************* >2018-07-13 20:48:37,275 p=5867 u=mistral | Friday 13 July 2018 20:48:37 -0400 (0:00:00.727) 0:02:00.463 *********** >2018-07-13 20:48:37,688 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:48:37,709 p=5867 u=mistral | TASK [Check previous deployment rc for ComputeHostPrepDeployment] ************** >2018-07-13 20:48:37,710 p=5867 u=mistral | Friday 13 July 2018 20:48:37 -0400 (0:00:00.434) 0:02:00.898 *********** >2018-07-13 20:48:37,728 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:37,750 p=5867 u=mistral | TASK [Remove deployed file for ComputeHostPrepDeployment when previous deployment failed] *** >2018-07-13 20:48:37,750 p=5867 u=mistral | Friday 13 July 2018 20:48:37 -0400 (0:00:00.040) 0:02:00.938 *********** >2018-07-13 20:48:37,770 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:37,791 p=5867 u=mistral | TASK [Force remove deployed file for ComputeHostPrepDeployment] **************** >2018-07-13 20:48:37,791 p=5867 u=mistral | Friday 13 July 2018 20:48:37 -0400 (0:00:00.040) 0:02:00.979 *********** >2018-07-13 20:48:37,810 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:37,829 p=5867 u=mistral | TASK [Run deployment ComputeHostPrepDeployment] ******************************** >2018-07-13 20:48:37,830 p=5867 u=mistral | Friday 13 July 2018 20:48:37 -0400 (0:00:00.038) 0:02:01.018 *********** >2018-07-13 
20:48:55,195 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/0a1b0357-92aa-485e-9b9d-fedad034c50d.notify.json)", "delta": "0:00:16.944254", "end": "2018-07-13 20:48:55.458290", "rc": 0, "start": "2018-07-13 20:48:38.514036", "stderr": "[2018-07-13 20:48:38,539] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/0a1b0357-92aa-485e-9b9d-fedad034c50d.json\n[2018-07-13 20:48:55,065] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [Mount Nova NFS Share] ****************************************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/nova)\\nok: [localhost] => (item=/var/lib/libvirt)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [is Instance HA enabled] **************************************************\\nok: [localhost]\\n\\nTASK [prepare Instance HA script directory] ************************************\\nskipping: [localhost]\\n\\nTASK [install Instance HA script that runs nova-compute] ***********************\\nskipping: [localhost]\\n\\nTASK [Get list of instance HA compute nodes] ***********************************\\nskipping: [localhost]\\n\\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\\nskipping: [localhost]\\n\\nTASK [create libvirt persistent data directories] ******************************\\nok: [localhost] => (item=/etc/libvirt)\\nok: [localhost] => (item=/etc/libvirt/secrets)\\nok: [localhost] => (item=/etc/libvirt/qemu)\\nok: [localhost] => 
(item=/var/lib/libvirt)\\nchanged: [localhost] => (item=/var/log/containers/libvirt)\\n\\nTASK [ensure qemu group is present on the host] ********************************\\nok: [localhost]\\n\\nTASK [ensure qemu user is present on the host] *********************************\\nok: [localhost]\\n\\nTASK [create directory for vhost-user sockets with qemu ownership] *************\\nchanged: [localhost]\\n\\nTASK [check if libvirt is installed] *******************************************\\nchanged: [localhost]\\n\\nTASK [make sure libvirt services are disabled] *********************************\\nchanged: [localhost] => (item=libvirtd.service)\\nchanged: [localhost] => (item=virtlogd.socket)\\n\\nTASK [NTP settings] ************************************************************\\nok: [localhost]\\n\\nTASK [Install ntpdate] *********************************************************\\nskipping: [localhost]\\n\\nTASK [Ensure system is NTP time synced] ****************************************\\nchanged: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=22 changed=13 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \" [WARNING]: Consider using the yum, dnf or zypper module rather than running\\nrpm. 
If you need to use command because yum, dnf or zypper is insufficient you\\ncan add warn=False to this command task or set command_warnings=False in\\nansible.cfg to get rid of this message.\\n\", \"deploy_status_code\": 0}\n[2018-07-13 20:48:55,065] (heat-config) [DEBUG] [2018-07-13 20:48:38,565] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/0a1b0357-92aa-485e-9b9d-fedad034c50d_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/0a1b0357-92aa-485e-9b9d-fedad034c50d_variables.json\n[2018-07-13 20:48:55,060] (heat-config) [INFO] Return code 0\n[2018-07-13 20:48:55,060] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost]\n\nTASK [ceilometer logs readme] **************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/neutron)\n\nTASK [neutron logs readme] *****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}\n...ignoring\n\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\nok: [localhost]\n\nTASK [Stop and disable iscsid.socket service] **********************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost]\n\nTASK [nova logs readme] ********************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}\n...ignoring\n\nTASK [Mount Nova NFS Share] ****************************************************\nskipping: [localhost]\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/lib/nova)\nok: [localhost] => (item=/var/lib/libvirt)\n\nTASK [ensure ceph configurations exist] ****************************************\nchanged: [localhost]\n\nTASK [is Instance HA enabled] **************************************************\nok: [localhost]\n\nTASK [prepare Instance HA script directory] ************************************\nskipping: [localhost]\n\nTASK [install Instance HA script that runs nova-compute] ***********************\nskipping: [localhost]\n\nTASK [Get list of instance HA compute nodes] ***********************************\nskipping: [localhost]\n\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\nskipping: [localhost]\n\nTASK [create libvirt persistent data directories] ******************************\nok: [localhost] => (item=/etc/libvirt)\nok: [localhost] => (item=/etc/libvirt/secrets)\nok: [localhost] => (item=/etc/libvirt/qemu)\nok: [localhost] => (item=/var/lib/libvirt)\nchanged: [localhost] => 
(item=/var/log/containers/libvirt)\n\nTASK [ensure qemu group is present on the host] ********************************\nok: [localhost]\n\nTASK [ensure qemu user is present on the host] *********************************\nok: [localhost]\n\nTASK [create directory for vhost-user sockets with qemu ownership] *************\nchanged: [localhost]\n\nTASK [check if libvirt is installed] *******************************************\nchanged: [localhost]\n\nTASK [make sure libvirt services are disabled] *********************************\nchanged: [localhost] => (item=libvirtd.service)\nchanged: [localhost] => (item=virtlogd.socket)\n\nTASK [NTP settings] ************************************************************\nok: [localhost]\n\nTASK [Install ntpdate] *********************************************************\nskipping: [localhost]\n\nTASK [Ensure system is NTP time synced] ****************************************\nchanged: [localhost]\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=22 changed=13 unreachable=0 failed=0 \n\n\n[2018-07-13 20:48:55,060] (heat-config) [INFO] [WARNING]: Consider using the yum, dnf or zypper module rather than running\nrpm. 
If you need to use command because yum, dnf or zypper is insufficient you\ncan add warn=False to this command task or set command_warnings=False in\nansible.cfg to get rid of this message.\n\n[2018-07-13 20:48:55,060] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/0a1b0357-92aa-485e-9b9d-fedad034c50d_playbook.yaml\n\n[2018-07-13 20:48:55,065] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-07-13 20:48:55,066] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0a1b0357-92aa-485e-9b9d-fedad034c50d.json < /var/lib/heat-config/deployed/0a1b0357-92aa-485e-9b9d-fedad034c50d.notify.json\n[2018-07-13 20:48:55,452] (heat-config) [INFO] \n[2018-07-13 20:48:55,452] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:48:38,539] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/0a1b0357-92aa-485e-9b9d-fedad034c50d.json", "[2018-07-13 20:48:55,065] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [Mount Nova NFS Share] ****************************************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/nova)\\nok: [localhost] => (item=/var/lib/libvirt)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [is Instance HA enabled] **************************************************\\nok: [localhost]\\n\\nTASK [prepare Instance HA script directory] ************************************\\nskipping: [localhost]\\n\\nTASK [install Instance HA script that runs nova-compute] ***********************\\nskipping: [localhost]\\n\\nTASK [Get list of instance HA compute nodes] ***********************************\\nskipping: [localhost]\\n\\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\\nskipping: [localhost]\\n\\nTASK [create libvirt persistent data directories] ******************************\\nok: [localhost] => (item=/etc/libvirt)\\nok: [localhost] => (item=/etc/libvirt/secrets)\\nok: [localhost] => (item=/etc/libvirt/qemu)\\nok: [localhost] => 
(item=/var/lib/libvirt)\\nchanged: [localhost] => (item=/var/log/containers/libvirt)\\n\\nTASK [ensure qemu group is present on the host] ********************************\\nok: [localhost]\\n\\nTASK [ensure qemu user is present on the host] *********************************\\nok: [localhost]\\n\\nTASK [create directory for vhost-user sockets with qemu ownership] *************\\nchanged: [localhost]\\n\\nTASK [check if libvirt is installed] *******************************************\\nchanged: [localhost]\\n\\nTASK [make sure libvirt services are disabled] *********************************\\nchanged: [localhost] => (item=libvirtd.service)\\nchanged: [localhost] => (item=virtlogd.socket)\\n\\nTASK [NTP settings] ************************************************************\\nok: [localhost]\\n\\nTASK [Install ntpdate] *********************************************************\\nskipping: [localhost]\\n\\nTASK [Ensure system is NTP time synced] ****************************************\\nchanged: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=22 changed=13 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \" [WARNING]: Consider using the yum, dnf or zypper module rather than running\\nrpm. 
If you need to use command because yum, dnf or zypper is insufficient you\\ncan add warn=False to this command task or set command_warnings=False in\\nansible.cfg to get rid of this message.\\n\", \"deploy_status_code\": 0}", "[2018-07-13 20:48:55,065] (heat-config) [DEBUG] [2018-07-13 20:48:38,565] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/0a1b0357-92aa-485e-9b9d-fedad034c50d_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/0a1b0357-92aa-485e-9b9d-fedad034c50d_variables.json", "[2018-07-13 20:48:55,060] (heat-config) [INFO] Return code 0", "[2018-07-13 20:48:55,060] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost]", "", "TASK [ceilometer logs readme] **************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/neutron)", "", "TASK [neutron logs readme] *****************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", "...ignoring", "", "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", "ok: [localhost]", "", "TASK [Stop and disable iscsid.socket service] **********************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost]", "", "TASK [nova logs readme] ********************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", "...ignoring", "", "TASK [Mount Nova NFS Share] ****************************************************", "skipping: [localhost]", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/lib/nova)", "ok: [localhost] => (item=/var/lib/libvirt)", "", "TASK [ensure ceph configurations exist] ****************************************", "changed: [localhost]", "", "TASK [is Instance HA enabled] **************************************************", "ok: [localhost]", "", "TASK [prepare Instance HA script directory] ************************************", "skipping: [localhost]", "", "TASK [install Instance HA script that runs nova-compute] ***********************", "skipping: [localhost]", "", "TASK [Get list of instance HA compute nodes] ***********************************", "skipping: [localhost]", "", "TASK [If instance HA is enabled on the node activate the evacuation completed check] ***", "skipping: [localhost]", "", "TASK [create libvirt persistent data directories] ******************************", "ok: [localhost] => (item=/etc/libvirt)", "ok: [localhost] => (item=/etc/libvirt/secrets)", "ok: [localhost] => (item=/etc/libvirt/qemu)", "ok: 
[localhost] => (item=/var/lib/libvirt)", "changed: [localhost] => (item=/var/log/containers/libvirt)", "", "TASK [ensure qemu group is present on the host] ********************************", "ok: [localhost]", "", "TASK [ensure qemu user is present on the host] *********************************", "ok: [localhost]", "", "TASK [create directory for vhost-user sockets with qemu ownership] *************", "changed: [localhost]", "", "TASK [check if libvirt is installed] *******************************************", "changed: [localhost]", "", "TASK [make sure libvirt services are disabled] *********************************", "changed: [localhost] => (item=libvirtd.service)", "changed: [localhost] => (item=virtlogd.socket)", "", "TASK [NTP settings] ************************************************************", "ok: [localhost]", "", "TASK [Install ntpdate] *********************************************************", "skipping: [localhost]", "", "TASK [Ensure system is NTP time synced] ****************************************", "changed: [localhost]", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=22 changed=13 unreachable=0 failed=0 ", "", "", "[2018-07-13 20:48:55,060] (heat-config) [INFO] [WARNING]: Consider using the yum, dnf or zypper module rather than running", "rpm. 
If you need to use command because yum, dnf or zypper is insufficient you", "can add warn=False to this command task or set command_warnings=False in", "ansible.cfg to get rid of this message.", "", "[2018-07-13 20:48:55,060] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/0a1b0357-92aa-485e-9b9d-fedad034c50d_playbook.yaml", "", "[2018-07-13 20:48:55,065] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-07-13 20:48:55,066] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0a1b0357-92aa-485e-9b9d-fedad034c50d.json < /var/lib/heat-config/deployed/0a1b0357-92aa-485e-9b9d-fedad034c50d.notify.json", "[2018-07-13 20:48:55,452] (heat-config) [INFO] ", "[2018-07-13 20:48:55,452] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:48:55,219 p=5867 u=mistral | TASK [Output for ComputeHostPrepDeployment] ************************************ >2018-07-13 20:48:55,219 p=5867 u=mistral | Friday 13 July 2018 20:48:55 -0400 (0:00:17.389) 0:02:18.407 *********** >2018-07-13 20:48:55,275 p=5867 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:48:38,539] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/0a1b0357-92aa-485e-9b9d-fedad034c50d.json", > "[2018-07-13 20:48:55,065] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [Mount Nova NFS Share] ****************************************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/nova)\\nok: [localhost] => (item=/var/lib/libvirt)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [is Instance HA enabled] **************************************************\\nok: [localhost]\\n\\nTASK [prepare Instance HA script directory] ************************************\\nskipping: [localhost]\\n\\nTASK [install Instance HA script that runs nova-compute] ***********************\\nskipping: [localhost]\\n\\nTASK [Get list of instance HA compute nodes] ***********************************\\nskipping: [localhost]\\n\\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\\nskipping: [localhost]\\n\\nTASK [create libvirt persistent data directories] ******************************\\nok: [localhost] => (item=/etc/libvirt)\\nok: [localhost] => (item=/etc/libvirt/secrets)\\nok: [localhost] => (item=/etc/libvirt/qemu)\\nok: [localhost] => (item=/var/lib/libvirt)\\nchanged: [localhost] => (item=/var/log/containers/libvirt)\\n\\nTASK [ensure qemu group is present on the host] ********************************\\nok: [localhost]\\n\\nTASK [ensure qemu user is present on the host] *********************************\\nok: [localhost]\\n\\nTASK [create directory for vhost-user sockets with qemu ownership] *************\\nchanged: [localhost]\\n\\nTASK [check if libvirt is installed] *******************************************\\nchanged: [localhost]\\n\\nTASK [make sure libvirt services are disabled] *********************************\\nchanged: [localhost] => 
(item=libvirtd.service)\\nchanged: [localhost] => (item=virtlogd.socket)\\n\\nTASK [NTP settings] ************************************************************\\nok: [localhost]\\n\\nTASK [Install ntpdate] *********************************************************\\nskipping: [localhost]\\n\\nTASK [Ensure system is NTP time synced] ****************************************\\nchanged: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=22 changed=13 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \" [WARNING]: Consider using the yum, dnf or zypper module rather than running\\nrpm. If you need to use command because yum, dnf or zypper is insufficient you\\ncan add warn=False to this command task or set command_warnings=False in\\nansible.cfg to get rid of this message.\\n\", \"deploy_status_code\": 0}", > "[2018-07-13 20:48:55,065] (heat-config) [DEBUG] [2018-07-13 20:48:38,565] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/0a1b0357-92aa-485e-9b9d-fedad034c50d_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/0a1b0357-92aa-485e-9b9d-fedad034c50d_variables.json", > "[2018-07-13 20:48:55,060] (heat-config) [INFO] Return code 0", > "[2018-07-13 20:48:55,060] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost]", > "", > "TASK [ceilometer logs readme] **************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/neutron)", > "", > "TASK [neutron logs readme] *****************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", > "...ignoring", > "", > "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", > "ok: [localhost]", > "", > "TASK [Stop and disable iscsid.socket service] **********************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost]", > "", > "TASK [nova logs readme] ********************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", > "...ignoring", > "", > "TASK [Mount Nova NFS Share] ****************************************************", > "skipping: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/lib/nova)", > "ok: [localhost] => (item=/var/lib/libvirt)", > "", > "TASK [ensure ceph configurations exist] ****************************************", > "changed: [localhost]", > "", > "TASK [is Instance HA enabled] **************************************************", > "ok: [localhost]", > "", > "TASK [prepare Instance HA script directory] ************************************", > "skipping: [localhost]", > "", > "TASK [install Instance HA script that runs nova-compute] ***********************", > "skipping: [localhost]", > "", > "TASK [Get list of instance HA compute nodes] ***********************************", > "skipping: [localhost]", > "", > "TASK [If instance HA is enabled on the node activate the evacuation completed check] ***", > "skipping: [localhost]", > "", > "TASK [create libvirt persistent data directories] ******************************", > "ok: [localhost] => (item=/etc/libvirt)", > "ok: [localhost] => (item=/etc/libvirt/secrets)", > "ok: [localhost] => (item=/etc/libvirt/qemu)", > "ok: [localhost] => (item=/var/lib/libvirt)", > "changed: [localhost] => (item=/var/log/containers/libvirt)", > "", > "TASK [ensure qemu group is present on the host] ********************************", > "ok: [localhost]", > "", > "TASK [ensure qemu user is present on the host] *********************************", > "ok: [localhost]", > "", > "TASK [create directory for vhost-user sockets with qemu ownership] *************", > "changed: [localhost]", > "", > "TASK [check if libvirt is installed] *******************************************", > "changed: 
[localhost]", > "", > "TASK [make sure libvirt services are disabled] *********************************", > "changed: [localhost] => (item=libvirtd.service)", > "changed: [localhost] => (item=virtlogd.socket)", > "", > "TASK [NTP settings] ************************************************************", > "ok: [localhost]", > "", > "TASK [Install ntpdate] *********************************************************", > "skipping: [localhost]", > "", > "TASK [Ensure system is NTP time synced] ****************************************", > "changed: [localhost]", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=22 changed=13 unreachable=0 failed=0 ", > "", > "", > "[2018-07-13 20:48:55,060] (heat-config) [INFO] [WARNING]: Consider using the yum, dnf or zypper module rather than running", > "rpm. 
If you need to use command because yum, dnf or zypper is insufficient you", > "can add warn=False to this command task or set command_warnings=False in", > "ansible.cfg to get rid of this message.", > "", > "[2018-07-13 20:48:55,060] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/0a1b0357-92aa-485e-9b9d-fedad034c50d_playbook.yaml", > "", > "[2018-07-13 20:48:55,065] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-07-13 20:48:55,066] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0a1b0357-92aa-485e-9b9d-fedad034c50d.json < /var/lib/heat-config/deployed/0a1b0357-92aa-485e-9b9d-fedad034c50d.notify.json", > "[2018-07-13 20:48:55,452] (heat-config) [INFO] ", > "[2018-07-13 20:48:55,452] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:48:55,298 p=5867 u=mistral | TASK [Check-mode for Run deployment ComputeHostPrepDeployment] ***************** >2018-07-13 20:48:55,298 p=5867 u=mistral | Friday 13 July 2018 20:48:55 -0400 (0:00:00.078) 0:02:18.486 *********** >2018-07-13 20:48:55,313 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:55,332 p=5867 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:48:55,332 p=5867 u=mistral | Friday 13 July 2018 20:48:55 -0400 (0:00:00.033) 0:02:18.520 *********** >2018-07-13 20:48:55,386 p=5867 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "1477a601-e8ea-4137-ae89-6019abd8a8ee"}, "changed": false} >2018-07-13 20:48:55,409 p=5867 u=mistral | TASK [Render deployment file for ComputeArtifactsDeploy] *********************** >2018-07-13 20:48:55,409 p=5867 u=mistral | Friday 13 July 2018 20:48:55 -0400 (0:00:00.077) 0:02:18.597 *********** >2018-07-13 20:48:56,006 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "checksum": 
"84cd8234a3bb584cdfdcd6eaa1864e7dd3d33809", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeArtifactsDeploy-1477a601-e8ea-4137-ae89-6019abd8a8ee", "gid": 0, "group": "root", "md5sum": "0db2c08d79ec30fa0a1a2d075b0e4c4f", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2015, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529335.46-171312737284690/source", "state": "file", "uid": 0} >2018-07-13 20:48:56,026 p=5867 u=mistral | TASK [Check if deployed file exists for ComputeArtifactsDeploy] **************** >2018-07-13 20:48:56,027 p=5867 u=mistral | Friday 13 July 2018 20:48:56 -0400 (0:00:00.617) 0:02:19.215 *********** >2018-07-13 20:48:56,338 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:48:56,359 p=5867 u=mistral | TASK [Check previous deployment rc for ComputeArtifactsDeploy] ***************** >2018-07-13 20:48:56,359 p=5867 u=mistral | Friday 13 July 2018 20:48:56 -0400 (0:00:00.332) 0:02:19.547 *********** >2018-07-13 20:48:56,378 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:56,399 p=5867 u=mistral | TASK [Remove deployed file for ComputeArtifactsDeploy when previous deployment failed] *** >2018-07-13 20:48:56,399 p=5867 u=mistral | Friday 13 July 2018 20:48:56 -0400 (0:00:00.039) 0:02:19.587 *********** >2018-07-13 20:48:56,416 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:56,437 p=5867 u=mistral | TASK [Force remove deployed file for ComputeArtifactsDeploy] ******************* >2018-07-13 20:48:56,437 p=5867 u=mistral | Friday 13 July 2018 20:48:56 -0400 (0:00:00.038) 0:02:19.625 *********** >2018-07-13 20:48:56,455 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:56,477 p=5867 u=mistral | TASK [Run 
deployment ComputeArtifactsDeploy] *********************************** >2018-07-13 20:48:56,477 p=5867 u=mistral | Friday 13 July 2018 20:48:56 -0400 (0:00:00.039) 0:02:19.665 *********** >2018-07-13 20:48:57,262 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/1477a601-e8ea-4137-ae89-6019abd8a8ee.notify.json)", "delta": "0:00:00.469409", "end": "2018-07-13 20:48:57.538216", "rc": 0, "start": "2018-07-13 20:48:57.068807", "stderr": "[2018-07-13 20:48:57,092] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/1477a601-e8ea-4137-ae89-6019abd8a8ee.json\n[2018-07-13 20:48:57,122] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-07-13 20:48:57,122] (heat-config) [DEBUG] [2018-07-13 20:48:57,113] (heat-config) [INFO] artifact_urls=\n[2018-07-13 20:48:57,113] (heat-config) [INFO] deploy_server_id=99a8e115-a0a1-4b89-8099-f4376943e467\n[2018-07-13 20:48:57,113] (heat-config) [INFO] deploy_action=CREATE\n[2018-07-13 20:48:57,113] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-nwaxeaw6ioho-ComputeArtifactsDeploy-vybw5ui5iozw-0-ea53h5rcv5wc/dde07dca-2418-4163-a6a2-0950e266ad9e\n[2018-07-13 20:48:57,113] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-07-13 20:48:57,113] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-07-13 20:48:57,114] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/1477a601-e8ea-4137-ae89-6019abd8a8ee\n[2018-07-13 20:48:57,118] (heat-config) [INFO] No artifact_urls was set. 
Skipping...\n\n[2018-07-13 20:48:57,119] (heat-config) [DEBUG] \n[2018-07-13 20:48:57,119] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/1477a601-e8ea-4137-ae89-6019abd8a8ee\n\n[2018-07-13 20:48:57,122] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-07-13 20:48:57,122] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1477a601-e8ea-4137-ae89-6019abd8a8ee.json < /var/lib/heat-config/deployed/1477a601-e8ea-4137-ae89-6019abd8a8ee.notify.json\n[2018-07-13 20:48:57,532] (heat-config) [INFO] \n[2018-07-13 20:48:57,532] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:48:57,092] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/1477a601-e8ea-4137-ae89-6019abd8a8ee.json", "[2018-07-13 20:48:57,122] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-07-13 20:48:57,122] (heat-config) [DEBUG] [2018-07-13 20:48:57,113] (heat-config) [INFO] artifact_urls=", "[2018-07-13 20:48:57,113] (heat-config) [INFO] deploy_server_id=99a8e115-a0a1-4b89-8099-f4376943e467", "[2018-07-13 20:48:57,113] (heat-config) [INFO] deploy_action=CREATE", "[2018-07-13 20:48:57,113] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-nwaxeaw6ioho-ComputeArtifactsDeploy-vybw5ui5iozw-0-ea53h5rcv5wc/dde07dca-2418-4163-a6a2-0950e266ad9e", "[2018-07-13 20:48:57,113] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-07-13 20:48:57,113] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-07-13 20:48:57,114] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/1477a601-e8ea-4137-ae89-6019abd8a8ee", "[2018-07-13 20:48:57,118] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", "", "[2018-07-13 20:48:57,119] (heat-config) [DEBUG] ", "[2018-07-13 20:48:57,119] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/1477a601-e8ea-4137-ae89-6019abd8a8ee", "", "[2018-07-13 20:48:57,122] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-07-13 20:48:57,122] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1477a601-e8ea-4137-ae89-6019abd8a8ee.json < /var/lib/heat-config/deployed/1477a601-e8ea-4137-ae89-6019abd8a8ee.notify.json", "[2018-07-13 20:48:57,532] (heat-config) [INFO] ", "[2018-07-13 20:48:57,532] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:48:57,283 p=5867 u=mistral | TASK [Output for ComputeArtifactsDeploy] *************************************** >2018-07-13 20:48:57,283 p=5867 u=mistral | Friday 13 July 2018 20:48:57 -0400 (0:00:00.805) 0:02:20.471 *********** >2018-07-13 20:48:57,330 p=5867 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:48:57,092] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/1477a601-e8ea-4137-ae89-6019abd8a8ee.json", > "[2018-07-13 20:48:57,122] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-07-13 20:48:57,122] (heat-config) [DEBUG] [2018-07-13 20:48:57,113] (heat-config) [INFO] artifact_urls=", > "[2018-07-13 20:48:57,113] (heat-config) [INFO] deploy_server_id=99a8e115-a0a1-4b89-8099-f4376943e467", > "[2018-07-13 20:48:57,113] (heat-config) [INFO] deploy_action=CREATE", > "[2018-07-13 20:48:57,113] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-nwaxeaw6ioho-ComputeArtifactsDeploy-vybw5ui5iozw-0-ea53h5rcv5wc/dde07dca-2418-4163-a6a2-0950e266ad9e", > "[2018-07-13 20:48:57,113] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-07-13 20:48:57,113] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-07-13 20:48:57,114] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/1477a601-e8ea-4137-ae89-6019abd8a8ee", > "[2018-07-13 20:48:57,118] (heat-config) [INFO] No artifact_urls was set. Skipping...", > "", > "[2018-07-13 20:48:57,119] (heat-config) [DEBUG] ", > "[2018-07-13 20:48:57,119] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/1477a601-e8ea-4137-ae89-6019abd8a8ee", > "", > "[2018-07-13 20:48:57,122] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-07-13 20:48:57,122] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1477a601-e8ea-4137-ae89-6019abd8a8ee.json < /var/lib/heat-config/deployed/1477a601-e8ea-4137-ae89-6019abd8a8ee.notify.json", > "[2018-07-13 20:48:57,532] (heat-config) [INFO] ", > "[2018-07-13 20:48:57,532] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:48:57,350 p=5867 u=mistral | TASK [Check-mode for Run deployment ComputeArtifactsDeploy] ******************** >2018-07-13 20:48:57,351 p=5867 u=mistral | Friday 13 July 2018 20:48:57 -0400 (0:00:00.067) 0:02:20.539 *********** >2018-07-13 20:48:57,366 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-07-13 20:48:57,388 p=5867 u=mistral | TASK [include_tasks] *********************************************************** >2018-07-13 20:48:57,388 p=5867 u=mistral | Friday 13 July 2018 20:48:57 -0400 (0:00:00.037) 0:02:20.576 *********** >2018-07-13 20:48:57,473 p=5867 u=mistral | TASK [include_tasks] *********************************************************** >2018-07-13 20:48:57,473 p=5867 u=mistral | Friday 13 July 2018 20:48:57 -0400 (0:00:00.084) 0:02:20.661 *********** >2018-07-13 20:48:57,560 p=5867 u=mistral | TASK [include_tasks] *********************************************************** >2018-07-13 20:48:57,560 p=5867 u=mistral | Friday 13 July 2018 20:48:57 -0400 (0:00:00.087) 0:02:20.748 *********** >2018-07-13 20:48:57,772 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/CephStorage/deployments.yaml for ceph-0 >2018-07-13 20:48:57,780 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/CephStorage/deployments.yaml for ceph-0 >2018-07-13 20:48:57,787 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/CephStorage/deployments.yaml for ceph-0 >2018-07-13 20:48:57,795 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/CephStorage/deployments.yaml for ceph-0 >2018-07-13 20:48:57,804 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/CephStorage/deployments.yaml for ceph-0 >2018-07-13 20:48:57,812 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/CephStorage/deployments.yaml for ceph-0 >2018-07-13 20:48:57,820 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/CephStorage/deployments.yaml for ceph-0 >2018-07-13 20:48:57,829 p=5867 u=mistral | included: /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/CephStorage/deployments.yaml for ceph-0 >2018-07-13 20:48:57,899 p=5867 u=mistral 
| TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:48:57,899 p=5867 u=mistral | Friday 13 July 2018 20:48:57 -0400 (0:00:00.338) 0:02:21.087 *********** >2018-07-13 20:48:58,023 p=5867 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "6a0cf6fe-4419-4adf-8a64-6840890bd4d9"}, "changed": false} >2018-07-13 20:48:58,043 p=5867 u=mistral | TASK [Render deployment file for NetworkDeployment] **************************** >2018-07-13 20:48:58,043 p=5867 u=mistral | Friday 13 July 2018 20:48:58 -0400 (0:00:00.144) 0:02:21.231 *********** >2018-07-13 20:48:58,708 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "78885bfb169c61af481bdc5010bcf54110609744", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-6a0cf6fe-4419-4adf-8a64-6840890bd4d9", "gid": 0, "group": "root", "md5sum": "887f7609ce80e6741570834049119a3f", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 8777, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529338.17-110294543171880/source", "state": "file", "uid": 0} >2018-07-13 20:48:58,729 p=5867 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] ********************* >2018-07-13 20:48:58,729 p=5867 u=mistral | Friday 13 July 2018 20:48:58 -0400 (0:00:00.685) 0:02:21.917 *********** >2018-07-13 20:48:59,048 p=5867 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:48:59,068 p=5867 u=mistral | TASK [Check previous deployment rc for NetworkDeployment] ********************** >2018-07-13 20:48:59,068 p=5867 u=mistral | Friday 13 July 2018 20:48:59 -0400 (0:00:00.339) 0:02:22.256 *********** >2018-07-13 20:48:59,087 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:59,105 p=5867 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] *** >2018-07-13 
20:48:59,106 p=5867 u=mistral | Friday 13 July 2018 20:48:59 -0400 (0:00:00.037) 0:02:22.294 *********** >2018-07-13 20:48:59,124 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:59,143 p=5867 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************ >2018-07-13 20:48:59,144 p=5867 u=mistral | Friday 13 July 2018 20:48:59 -0400 (0:00:00.037) 0:02:22.332 *********** >2018-07-13 20:48:59,161 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:48:59,181 p=5867 u=mistral | TASK [Run deployment NetworkDeployment] **************************************** >2018-07-13 20:48:59,181 p=5867 u=mistral | Friday 13 July 2018 20:48:59 -0400 (0:00:00.037) 0:02:22.369 *********** >2018-07-13 20:49:14,600 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/6a0cf6fe-4419-4adf-8a64-6840890bd4d9.notify.json)", "delta": "0:00:15.084257", "end": "2018-07-13 20:49:14.547184", "rc": 0, "start": "2018-07-13 20:48:59.462927", "stderr": "[2018-07-13 20:48:59,487] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/6a0cf6fe-4419-4adf-8a64-6840890bd4d9.json\n[2018-07-13 20:49:14,107] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, 
{\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/07/13 08:48:59 PM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/07/13 08:48:59 PM] [INFO] Ifcfg net config provider created.\\n[2018/07/13 08:48:59 PM] [INFO] Not using any mapping file.\\n[2018/07/13 08:49:00 PM] [INFO] Finding active nics\\n[2018/07/13 08:49:00 PM] [INFO] 
eth1 is an embedded active nic\\n[2018/07/13 08:49:00 PM] [INFO] eth0 is an embedded active nic\\n[2018/07/13 08:49:00 PM] [INFO] eth2 is an embedded active nic\\n[2018/07/13 08:49:00 PM] [INFO] lo is not an active nic\\n[2018/07/13 08:49:00 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/07/13 08:49:00 PM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/07/13 08:49:00 PM] [INFO] nic3 mapped to: eth2\\n[2018/07/13 08:49:00 PM] [INFO] nic2 mapped to: eth1\\n[2018/07/13 08:49:00 PM] [INFO] nic1 mapped to: eth0\\n[2018/07/13 08:49:00 PM] [INFO] adding interface: eth0\\n[2018/07/13 08:49:00 PM] [INFO] adding custom route for interface: eth0\\n[2018/07/13 08:49:00 PM] [INFO] adding bridge: br-isolated\\n[2018/07/13 08:49:00 PM] [INFO] adding interface: eth1\\n[2018/07/13 08:49:00 PM] [INFO] adding vlan: vlan30\\n[2018/07/13 08:49:00 PM] [INFO] adding vlan: vlan40\\n[2018/07/13 08:49:00 PM] [INFO] applying network configs...\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan30\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan40\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: eth1\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: eth0\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan30\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan40\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on bridge: br-isolated\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/07/13 08:49:00 PM] [INFO] Writing 
config /etc/sysconfig/network-scripts/route-eth1\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/07/13 08:49:00 PM] [INFO] running ifup on bridge: br-isolated\\n[2018/07/13 08:49:00 PM] [INFO] running ifup on interface: eth1\\n[2018/07/13 08:49:00 PM] [INFO] running ifup on interface: eth0\\n[2018/07/13 08:49:05 PM] [INFO] running ifup on interface: vlan30\\n[2018/07/13 08:49:09 PM] [INFO] running ifup on interface: vlan40\\n[2018/07/13 08:49:13 PM] [INFO] running ifup on interface: vlan30\\n[2018/07/13 08:49:13 PM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key 
os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-07-13 20:49:14,107] (heat-config) [DEBUG] [2018-07-13 20:48:59,510] (heat-config) [INFO] interface_name=nic1\n[2018-07-13 20:48:59,510] (heat-config) [INFO] bridge_name=br-ex\n[2018-07-13 20:48:59,510] (heat-config) [INFO] deploy_server_id=822c871f-59f2-416c-a0da-a7612346ffb2\n[2018-07-13 20:48:59,510] (heat-config) [INFO] deploy_action=CREATE\n[2018-07-13 20:48:59,510] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-vjam2v4ncijc-0-luvniuhyxb7x-NetworkDeployment-2dtr2hgnlrdw-TripleOSoftwareDeployment-ynebhk3ui4zl/8e5393c6-01aa-492c-bc5d-98c91976a118\n[2018-07-13 20:48:59,510] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-07-13 
20:48:59,510] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-07-13 20:48:59,510] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/6a0cf6fe-4419-4adf-8a64-6840890bd4d9\n[2018-07-13 20:49:14,103] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS\n\n[2018-07-13 20:49:14,103] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i 
s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/07/13 08:48:59 PM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/07/13 08:48:59 PM] [INFO] Ifcfg net config provider created.\n[2018/07/13 08:48:59 PM] [INFO] Not using any mapping file.\n[2018/07/13 08:49:00 PM] [INFO] Finding active nics\n[2018/07/13 08:49:00 PM] [INFO] eth1 is an embedded active nic\n[2018/07/13 08:49:00 PM] [INFO] eth0 is an embedded active nic\n[2018/07/13 08:49:00 PM] [INFO] eth2 is an embedded active nic\n[2018/07/13 08:49:00 PM] [INFO] lo is not an active nic\n[2018/07/13 08:49:00 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/07/13 08:49:00 PM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/07/13 08:49:00 PM] [INFO] nic3 mapped to: eth2\n[2018/07/13 08:49:00 PM] [INFO] nic2 mapped to: eth1\n[2018/07/13 08:49:00 PM] [INFO] nic1 mapped to: eth0\n[2018/07/13 08:49:00 PM] [INFO] adding interface: eth0\n[2018/07/13 08:49:00 PM] [INFO] adding custom route for interface: eth0\n[2018/07/13 08:49:00 PM] [INFO] adding bridge: br-isolated\n[2018/07/13 08:49:00 PM] [INFO] adding interface: eth1\n[2018/07/13 08:49:00 PM] [INFO] adding vlan: vlan30\n[2018/07/13 08:49:00 PM] [INFO] adding vlan: vlan40\n[2018/07/13 08:49:00 PM] [INFO] applying network configs...\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan30\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan40\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: eth1\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: eth0\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan30\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan40\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on bridge: br-isolated\n[2018/07/13 08:49:00 PM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-br-isolated\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/07/13 08:49:00 PM] [INFO] running ifup on bridge: br-isolated\n[2018/07/13 08:49:00 PM] [INFO] running ifup on interface: eth1\n[2018/07/13 08:49:00 PM] [INFO] running ifup on interface: eth0\n[2018/07/13 08:49:05 PM] [INFO] running ifup on interface: vlan30\n[2018/07/13 08:49:09 PM] [INFO] running ifup on interface: vlan40\n[2018/07/13 08:49:13 PM] [INFO] running ifup on interface: vlan30\n[2018/07/13 08:49:13 PM] [INFO] running ifup on interface: vlan40\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.3\n++ '[' -n 192.168.24.3 ']'\n++ break\n++ echo 192.168.24.3\n+ local METADATA_IP=192.168.24.3\n+ '[' -n 192.168.24.3 ']'\n+ is_local_ip 192.168.24.3\n+ local IP_TO_CHECK=192.168.24.3\n+ ip -o a\n+ grep 'inet6\\? 
192.168.24.3/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\n+ _ping=ping\n+ [[ 192.168.24.3 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.3\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-07-13 20:49:14,103] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/6a0cf6fe-4419-4adf-8a64-6840890bd4d9\n\n[2018-07-13 20:49:14,107] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-07-13 20:49:14,108] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/6a0cf6fe-4419-4adf-8a64-6840890bd4d9.json < /var/lib/heat-config/deployed/6a0cf6fe-4419-4adf-8a64-6840890bd4d9.notify.json\n[2018-07-13 20:49:14,540] (heat-config) [INFO] \n[2018-07-13 20:49:14,540] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:48:59,487] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/6a0cf6fe-4419-4adf-8a64-6840890bd4d9.json", "[2018-07-13 20:49:14,107] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": 
[{\\\"ip_netmask\\\": \\\"172.17.3.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/07/13 08:48:59 PM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/07/13 08:48:59 PM] [INFO] Ifcfg net config provider created.\\n[2018/07/13 08:48:59 PM] [INFO] Not using any mapping file.\\n[2018/07/13 08:49:00 PM] [INFO] Finding active nics\\n[2018/07/13 08:49:00 PM] [INFO] eth1 is an embedded active nic\\n[2018/07/13 08:49:00 PM] [INFO] eth0 is an embedded active nic\\n[2018/07/13 08:49:00 PM] 
[INFO] eth2 is an embedded active nic\\n[2018/07/13 08:49:00 PM] [INFO] lo is not an active nic\\n[2018/07/13 08:49:00 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/07/13 08:49:00 PM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/07/13 08:49:00 PM] [INFO] nic3 mapped to: eth2\\n[2018/07/13 08:49:00 PM] [INFO] nic2 mapped to: eth1\\n[2018/07/13 08:49:00 PM] [INFO] nic1 mapped to: eth0\\n[2018/07/13 08:49:00 PM] [INFO] adding interface: eth0\\n[2018/07/13 08:49:00 PM] [INFO] adding custom route for interface: eth0\\n[2018/07/13 08:49:00 PM] [INFO] adding bridge: br-isolated\\n[2018/07/13 08:49:00 PM] [INFO] adding interface: eth1\\n[2018/07/13 08:49:00 PM] [INFO] adding vlan: vlan30\\n[2018/07/13 08:49:00 PM] [INFO] adding vlan: vlan40\\n[2018/07/13 08:49:00 PM] [INFO] applying network configs...\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan30\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan40\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: eth1\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: eth0\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan30\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan40\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on bridge: br-isolated\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/07/13 08:49:00 PM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/07/13 08:49:00 PM] [INFO] running ifup on bridge: br-isolated\\n[2018/07/13 08:49:00 PM] [INFO] running ifup on interface: eth1\\n[2018/07/13 08:49:00 PM] [INFO] running ifup on interface: eth0\\n[2018/07/13 08:49:05 PM] [INFO] running ifup on interface: vlan30\\n[2018/07/13 08:49:09 PM] [INFO] running ifup on interface: vlan40\\n[2018/07/13 08:49:13 PM] [INFO] running ifup on interface: vlan30\\n[2018/07/13 08:49:13 PM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 
's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-07-13 20:49:14,107] (heat-config) [DEBUG] [2018-07-13 20:48:59,510] (heat-config) [INFO] interface_name=nic1", "[2018-07-13 20:48:59,510] (heat-config) [INFO] bridge_name=br-ex", "[2018-07-13 20:48:59,510] (heat-config) [INFO] deploy_server_id=822c871f-59f2-416c-a0da-a7612346ffb2", "[2018-07-13 20:48:59,510] (heat-config) [INFO] deploy_action=CREATE", "[2018-07-13 20:48:59,510] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-vjam2v4ncijc-0-luvniuhyxb7x-NetworkDeployment-2dtr2hgnlrdw-TripleOSoftwareDeployment-ynebhk3ui4zl/8e5393c6-01aa-492c-bc5d-98c91976a118", "[2018-07-13 20:48:59,510] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-07-13 20:48:59,510] (heat-config) [INFO] 
deploy_signal_transport=NO_SIGNAL", "[2018-07-13 20:48:59,510] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/6a0cf6fe-4419-4adf-8a64-6840890bd4d9", "[2018-07-13 20:49:14,103] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", "", "[2018-07-13 20:49:14,103] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ 
/etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/07/13 08:48:59 PM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/07/13 08:48:59 PM] [INFO] Ifcfg net config provider created.", "[2018/07/13 08:48:59 PM] [INFO] Not using any mapping file.", "[2018/07/13 08:49:00 PM] [INFO] Finding active nics", "[2018/07/13 08:49:00 PM] [INFO] eth1 is an embedded active nic", "[2018/07/13 08:49:00 PM] [INFO] eth0 is an embedded active nic", "[2018/07/13 08:49:00 PM] [INFO] eth2 is an embedded active nic", "[2018/07/13 08:49:00 PM] [INFO] lo is not an active nic", "[2018/07/13 08:49:00 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/07/13 08:49:00 PM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/07/13 08:49:00 PM] [INFO] nic3 mapped to: eth2", "[2018/07/13 08:49:00 PM] [INFO] nic2 mapped to: eth1", "[2018/07/13 08:49:00 PM] [INFO] nic1 mapped to: eth0", "[2018/07/13 08:49:00 PM] [INFO] adding interface: eth0", "[2018/07/13 08:49:00 PM] [INFO] adding custom route for interface: eth0", "[2018/07/13 08:49:00 PM] [INFO] adding bridge: br-isolated", "[2018/07/13 08:49:00 PM] [INFO] adding interface: eth1", "[2018/07/13 08:49:00 PM] [INFO] adding vlan: vlan30", "[2018/07/13 08:49:00 PM] [INFO] adding vlan: vlan40", "[2018/07/13 08:49:00 PM] [INFO] applying network configs...", "[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan30", "[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan40", "[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: eth1", "[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: eth0", "[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan30", "[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan40", "[2018/07/13 08:49:00 PM] [INFO] running ifdown on bridge: br-isolated", "[2018/07/13 08:49:00 PM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-br-isolated", "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/07/13 08:49:00 PM] [INFO] running ifup on bridge: br-isolated", "[2018/07/13 08:49:00 PM] [INFO] running ifup on interface: eth1", "[2018/07/13 08:49:00 PM] [INFO] running ifup on interface: eth0", "[2018/07/13 08:49:05 PM] [INFO] running ifup on interface: vlan30", "[2018/07/13 08:49:09 PM] [INFO] running ifup on interface: vlan40", "[2018/07/13 08:49:13 PM] [INFO] running ifup on interface: vlan30", "[2018/07/13 08:49:13 PM] [INFO] running ifup on interface: vlan40", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in 
os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.3", "++ '[' -n 192.168.24.3 ']'", "++ break", "++ echo 192.168.24.3", "+ local METADATA_IP=192.168.24.3", "+ '[' -n 192.168.24.3 ']'", "+ is_local_ip 192.168.24.3", "+ local IP_TO_CHECK=192.168.24.3", "+ ip -o a", "+ grep 'inet6\\? 
192.168.24.3/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", "+ _ping=ping", "+ [[ 192.168.24.3 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.3", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-07-13 20:49:14,103] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/6a0cf6fe-4419-4adf-8a64-6840890bd4d9", "", "[2018-07-13 20:49:14,107] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-07-13 20:49:14,108] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/6a0cf6fe-4419-4adf-8a64-6840890bd4d9.json < /var/lib/heat-config/deployed/6a0cf6fe-4419-4adf-8a64-6840890bd4d9.notify.json", "[2018-07-13 20:49:14,540] (heat-config) [INFO] ", "[2018-07-13 20:49:14,540] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:49:14,625 p=5867 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-07-13 20:49:14,625 p=5867 u=mistral | Friday 13 July 2018 20:49:14 -0400 (0:00:15.444) 0:02:37.813 *********** >2018-07-13 20:49:14,684 p=5867 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:48:59,487] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/6a0cf6fe-4419-4adf-8a64-6840890bd4d9.json", > "[2018-07-13 20:49:14,107] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": 
[\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v 
--detailed-exit-codes\\n[2018/07/13 08:48:59 PM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/07/13 08:48:59 PM] [INFO] Ifcfg net config provider created.\\n[2018/07/13 08:48:59 PM] [INFO] Not using any mapping file.\\n[2018/07/13 08:49:00 PM] [INFO] Finding active nics\\n[2018/07/13 08:49:00 PM] [INFO] eth1 is an embedded active nic\\n[2018/07/13 08:49:00 PM] [INFO] eth0 is an embedded active nic\\n[2018/07/13 08:49:00 PM] [INFO] eth2 is an embedded active nic\\n[2018/07/13 08:49:00 PM] [INFO] lo is not an active nic\\n[2018/07/13 08:49:00 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/07/13 08:49:00 PM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/07/13 08:49:00 PM] [INFO] nic3 mapped to: eth2\\n[2018/07/13 08:49:00 PM] [INFO] nic2 mapped to: eth1\\n[2018/07/13 08:49:00 PM] [INFO] nic1 mapped to: eth0\\n[2018/07/13 08:49:00 PM] [INFO] adding interface: eth0\\n[2018/07/13 08:49:00 PM] [INFO] adding custom route for interface: eth0\\n[2018/07/13 08:49:00 PM] [INFO] adding bridge: br-isolated\\n[2018/07/13 08:49:00 PM] [INFO] adding interface: eth1\\n[2018/07/13 08:49:00 PM] [INFO] adding vlan: vlan30\\n[2018/07/13 08:49:00 PM] [INFO] adding vlan: vlan40\\n[2018/07/13 08:49:00 PM] [INFO] applying network configs...\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan30\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan40\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: eth1\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: eth0\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan30\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan40\\n[2018/07/13 08:49:00 PM] [INFO] running ifdown on bridge: br-isolated\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/07/13 08:49:00 PM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/07/13 08:49:00 PM] [INFO] running ifup on bridge: br-isolated\\n[2018/07/13 08:49:00 PM] [INFO] running ifup on interface: eth1\\n[2018/07/13 08:49:00 PM] [INFO] running ifup on interface: eth0\\n[2018/07/13 08:49:05 PM] [INFO] running ifup on interface: vlan30\\n[2018/07/13 08:49:09 PM] [INFO] running ifup on interface: vlan40\\n[2018/07/13 08:49:13 PM] [INFO] running ifup on interface: vlan30\\n[2018/07/13 08:49:13 PM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ 
os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-07-13 20:49:14,107] (heat-config) [DEBUG] [2018-07-13 20:48:59,510] (heat-config) [INFO] interface_name=nic1", > "[2018-07-13 20:48:59,510] (heat-config) [INFO] bridge_name=br-ex", > "[2018-07-13 20:48:59,510] (heat-config) [INFO] deploy_server_id=822c871f-59f2-416c-a0da-a7612346ffb2", > "[2018-07-13 20:48:59,510] (heat-config) [INFO] deploy_action=CREATE", > "[2018-07-13 20:48:59,510] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-vjam2v4ncijc-0-luvniuhyxb7x-NetworkDeployment-2dtr2hgnlrdw-TripleOSoftwareDeployment-ynebhk3ui4zl/8e5393c6-01aa-492c-bc5d-98c91976a118", > "[2018-07-13 20:48:59,510] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-07-13 20:48:59,510] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-07-13 20:48:59,510] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/6a0cf6fe-4419-4adf-8a64-6840890bd4d9", > "[2018-07-13 20:49:14,103] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", > "", > "[2018-07-13 20:49:14,103] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", 
\"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/07/13 08:48:59 PM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/07/13 08:48:59 PM] [INFO] Ifcfg net config provider created.", > "[2018/07/13 08:48:59 PM] [INFO] Not using any mapping file.", > "[2018/07/13 08:49:00 PM] [INFO] Finding active nics", > "[2018/07/13 08:49:00 PM] [INFO] eth1 is an embedded active nic", > "[2018/07/13 08:49:00 PM] [INFO] eth0 is an embedded active nic", > "[2018/07/13 08:49:00 PM] [INFO] eth2 is an embedded active nic", > "[2018/07/13 08:49:00 PM] [INFO] 
lo is not an active nic", > "[2018/07/13 08:49:00 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/07/13 08:49:00 PM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/07/13 08:49:00 PM] [INFO] nic3 mapped to: eth2", > "[2018/07/13 08:49:00 PM] [INFO] nic2 mapped to: eth1", > "[2018/07/13 08:49:00 PM] [INFO] nic1 mapped to: eth0", > "[2018/07/13 08:49:00 PM] [INFO] adding interface: eth0", > "[2018/07/13 08:49:00 PM] [INFO] adding custom route for interface: eth0", > "[2018/07/13 08:49:00 PM] [INFO] adding bridge: br-isolated", > "[2018/07/13 08:49:00 PM] [INFO] adding interface: eth1", > "[2018/07/13 08:49:00 PM] [INFO] adding vlan: vlan30", > "[2018/07/13 08:49:00 PM] [INFO] adding vlan: vlan40", > "[2018/07/13 08:49:00 PM] [INFO] applying network configs...", > "[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan30", > "[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan40", > "[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: eth1", > "[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: eth0", > "[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan30", > "[2018/07/13 08:49:00 PM] [INFO] running ifdown on interface: vlan40", > "[2018/07/13 08:49:00 PM] [INFO] running ifdown on bridge: br-isolated", > "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", > "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/07/13 08:49:00 PM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", > "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", > "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/07/13 08:49:00 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/07/13 08:49:00 PM] [INFO] running ifup on bridge: br-isolated", > "[2018/07/13 08:49:00 PM] [INFO] running ifup on interface: eth1", > "[2018/07/13 08:49:00 PM] [INFO] running ifup on interface: eth0", > "[2018/07/13 08:49:05 PM] [INFO] running ifup on interface: vlan30", > "[2018/07/13 08:49:09 PM] [INFO] running ifup on interface: vlan40", > "[2018/07/13 08:49:13 PM] [INFO] running ifup on interface: vlan30", > "[2018/07/13 08:49:13 PM] [INFO] running ifup on interface: vlan40", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key 
os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.3", > "++ '[' -n 192.168.24.3 ']'", > "++ break", > "++ echo 192.168.24.3", > "+ local METADATA_IP=192.168.24.3", > "+ '[' -n 192.168.24.3 ']'", > "+ is_local_ip 192.168.24.3", > "+ local IP_TO_CHECK=192.168.24.3", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.3/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", > "+ _ping=ping", > "+ [[ 192.168.24.3 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.3", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-07-13 20:49:14,103] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/6a0cf6fe-4419-4adf-8a64-6840890bd4d9", > "", > "[2018-07-13 20:49:14,107] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-07-13 20:49:14,108] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/6a0cf6fe-4419-4adf-8a64-6840890bd4d9.json < /var/lib/heat-config/deployed/6a0cf6fe-4419-4adf-8a64-6840890bd4d9.notify.json", > "[2018-07-13 20:49:14,540] (heat-config) [INFO] ", > "[2018-07-13 20:49:14,540] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 
20:49:14,708 p=5867 u=mistral | TASK [Check-mode for Run deployment NetworkDeployment] ************************* >2018-07-13 20:49:14,708 p=5867 u=mistral | Friday 13 July 2018 20:49:14 -0400 (0:00:00.082) 0:02:37.896 *********** >2018-07-13 20:49:14,724 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:14,742 p=5867 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:49:14,743 p=5867 u=mistral | Friday 13 July 2018 20:49:14 -0400 (0:00:00.034) 0:02:37.931 *********** >2018-07-13 20:49:14,794 p=5867 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "2d2b5bae-228b-4e37-b65e-912690bd5f64"}, "changed": false} >2018-07-13 20:49:14,815 p=5867 u=mistral | TASK [Render deployment file for CephStorageUpgradeInitDeployment] ************* >2018-07-13 20:49:14,815 p=5867 u=mistral | Friday 13 July 2018 20:49:14 -0400 (0:00:00.072) 0:02:38.003 *********** >2018-07-13 20:49:15,434 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "162a9b5b3dc6e87654463212eddda27da4d339fc", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageUpgradeInitDeployment-2d2b5bae-228b-4e37-b65e-912690bd5f64", "gid": 0, "group": "root", "md5sum": "d5c3e7154e2772a07b5201fd65dd0fa6", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1186, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529354.87-158089079488343/source", "state": "file", "uid": 0} >2018-07-13 20:49:15,456 p=5867 u=mistral | TASK [Check if deployed file exists for CephStorageUpgradeInitDeployment] ****** >2018-07-13 20:49:15,456 p=5867 u=mistral | Friday 13 July 2018 20:49:15 -0400 (0:00:00.640) 0:02:38.644 *********** >2018-07-13 20:49:15,774 p=5867 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:49:15,795 p=5867 u=mistral | TASK [Check previous deployment rc for 
CephStorageUpgradeInitDeployment] ******* >2018-07-13 20:49:15,795 p=5867 u=mistral | Friday 13 July 2018 20:49:15 -0400 (0:00:00.338) 0:02:38.983 *********** >2018-07-13 20:49:15,813 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:15,833 p=5867 u=mistral | TASK [Remove deployed file for CephStorageUpgradeInitDeployment when previous deployment failed] *** >2018-07-13 20:49:15,833 p=5867 u=mistral | Friday 13 July 2018 20:49:15 -0400 (0:00:00.038) 0:02:39.021 *********** >2018-07-13 20:49:15,851 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:15,872 p=5867 u=mistral | TASK [Force remove deployed file for CephStorageUpgradeInitDeployment] ********* >2018-07-13 20:49:15,872 p=5867 u=mistral | Friday 13 July 2018 20:49:15 -0400 (0:00:00.038) 0:02:39.060 *********** >2018-07-13 20:49:15,891 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:15,911 p=5867 u=mistral | TASK [Run deployment CephStorageUpgradeInitDeployment] ************************* >2018-07-13 20:49:15,911 p=5867 u=mistral | Friday 13 July 2018 20:49:15 -0400 (0:00:00.039) 0:02:39.099 *********** >2018-07-13 20:49:16,720 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/2d2b5bae-228b-4e37-b65e-912690bd5f64.notify.json)", "delta": "0:00:00.482518", "end": "2018-07-13 20:49:16.675681", "rc": 0, "start": "2018-07-13 20:49:16.193163", "stderr": "[2018-07-13 20:49:16,218] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/2d2b5bae-228b-4e37-b65e-912690bd5f64.json\n[2018-07-13 20:49:16,245] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-07-13 
20:49:16,245] (heat-config) [DEBUG] [2018-07-13 20:49:16,238] (heat-config) [INFO] deploy_server_id=822c871f-59f2-416c-a0da-a7612346ffb2\n[2018-07-13 20:49:16,239] (heat-config) [INFO] deploy_action=CREATE\n[2018-07-13 20:49:16,239] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-vjam2v4ncijc-0-luvniuhyxb7x-CephStorageUpgradeInitDeployment-5dmfn3eblqgn/059e407b-9040-462f-a037-238023270cdc\n[2018-07-13 20:49:16,239] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-07-13 20:49:16,239] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-07-13 20:49:16,239] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/2d2b5bae-228b-4e37-b65e-912690bd5f64\n[2018-07-13 20:49:16,242] (heat-config) [INFO] \n[2018-07-13 20:49:16,242] (heat-config) [DEBUG] \n[2018-07-13 20:49:16,242] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/2d2b5bae-228b-4e37-b65e-912690bd5f64\n\n[2018-07-13 20:49:16,245] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-07-13 20:49:16,245] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/2d2b5bae-228b-4e37-b65e-912690bd5f64.json < /var/lib/heat-config/deployed/2d2b5bae-228b-4e37-b65e-912690bd5f64.notify.json\n[2018-07-13 20:49:16,670] (heat-config) [INFO] \n[2018-07-13 20:49:16,670] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:49:16,218] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/2d2b5bae-228b-4e37-b65e-912690bd5f64.json", "[2018-07-13 20:49:16,245] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-07-13 20:49:16,245] (heat-config) [DEBUG] [2018-07-13 20:49:16,238] (heat-config) [INFO] deploy_server_id=822c871f-59f2-416c-a0da-a7612346ffb2", "[2018-07-13 20:49:16,239] (heat-config) [INFO] deploy_action=CREATE", "[2018-07-13 20:49:16,239] (heat-config) [INFO] 
deploy_stack_id=overcloud-CephStorage-vjam2v4ncijc-0-luvniuhyxb7x-CephStorageUpgradeInitDeployment-5dmfn3eblqgn/059e407b-9040-462f-a037-238023270cdc", "[2018-07-13 20:49:16,239] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-07-13 20:49:16,239] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-07-13 20:49:16,239] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/2d2b5bae-228b-4e37-b65e-912690bd5f64", "[2018-07-13 20:49:16,242] (heat-config) [INFO] ", "[2018-07-13 20:49:16,242] (heat-config) [DEBUG] ", "[2018-07-13 20:49:16,242] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/2d2b5bae-228b-4e37-b65e-912690bd5f64", "", "[2018-07-13 20:49:16,245] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-07-13 20:49:16,245] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/2d2b5bae-228b-4e37-b65e-912690bd5f64.json < /var/lib/heat-config/deployed/2d2b5bae-228b-4e37-b65e-912690bd5f64.notify.json", "[2018-07-13 20:49:16,670] (heat-config) [INFO] ", "[2018-07-13 20:49:16,670] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:49:16,744 p=5867 u=mistral | TASK [Output for CephStorageUpgradeInitDeployment] ***************************** >2018-07-13 20:49:16,744 p=5867 u=mistral | Friday 13 July 2018 20:49:16 -0400 (0:00:00.832) 0:02:39.932 *********** >2018-07-13 20:49:16,797 p=5867 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:49:16,218] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/2d2b5bae-228b-4e37-b65e-912690bd5f64.json", > "[2018-07-13 20:49:16,245] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-07-13 20:49:16,245] (heat-config) [DEBUG] [2018-07-13 20:49:16,238] (heat-config) [INFO] deploy_server_id=822c871f-59f2-416c-a0da-a7612346ffb2", 
> "[2018-07-13 20:49:16,239] (heat-config) [INFO] deploy_action=CREATE", > "[2018-07-13 20:49:16,239] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-vjam2v4ncijc-0-luvniuhyxb7x-CephStorageUpgradeInitDeployment-5dmfn3eblqgn/059e407b-9040-462f-a037-238023270cdc", > "[2018-07-13 20:49:16,239] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-07-13 20:49:16,239] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-07-13 20:49:16,239] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/2d2b5bae-228b-4e37-b65e-912690bd5f64", > "[2018-07-13 20:49:16,242] (heat-config) [INFO] ", > "[2018-07-13 20:49:16,242] (heat-config) [DEBUG] ", > "[2018-07-13 20:49:16,242] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/2d2b5bae-228b-4e37-b65e-912690bd5f64", > "", > "[2018-07-13 20:49:16,245] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-07-13 20:49:16,245] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/2d2b5bae-228b-4e37-b65e-912690bd5f64.json < /var/lib/heat-config/deployed/2d2b5bae-228b-4e37-b65e-912690bd5f64.notify.json", > "[2018-07-13 20:49:16,670] (heat-config) [INFO] ", > "[2018-07-13 20:49:16,670] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:49:16,820 p=5867 u=mistral | TASK [Check-mode for Run deployment CephStorageUpgradeInitDeployment] ********** >2018-07-13 20:49:16,821 p=5867 u=mistral | Friday 13 July 2018 20:49:16 -0400 (0:00:00.076) 0:02:40.009 *********** >2018-07-13 20:49:16,836 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:16,856 p=5867 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:49:16,857 p=5867 u=mistral | Friday 13 July 2018 20:49:16 -0400 (0:00:00.036) 0:02:40.045 *********** >2018-07-13 20:49:16,945 p=5867 u=mistral | ok: 
[ceph-0] => {"ansible_facts": {"deployment_uuid": "bd434bbe-2870-46bb-b351-f217a67c0826"}, "changed": false} >2018-07-13 20:49:16,966 p=5867 u=mistral | TASK [Render deployment file for CephStorageDeployment] ************************ >2018-07-13 20:49:16,966 p=5867 u=mistral | Friday 13 July 2018 20:49:16 -0400 (0:00:00.109) 0:02:40.154 *********** >2018-07-13 20:49:17,685 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "3388f71248747266f8c689943b493186c28baec4", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageDeployment-bd434bbe-2870-46bb-b351-f217a67c0826", "gid": 0, "group": "root", "md5sum": "9d1faac2760dd2ed3a9cf5da9766ff3e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9106, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529357.13-125020403165703/source", "state": "file", "uid": 0} >2018-07-13 20:49:17,707 p=5867 u=mistral | TASK [Check if deployed file exists for CephStorageDeployment] ***************** >2018-07-13 20:49:17,707 p=5867 u=mistral | Friday 13 July 2018 20:49:17 -0400 (0:00:00.740) 0:02:40.895 *********** >2018-07-13 20:49:18,093 p=5867 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:49:18,115 p=5867 u=mistral | TASK [Check previous deployment rc for CephStorageDeployment] ****************** >2018-07-13 20:49:18,115 p=5867 u=mistral | Friday 13 July 2018 20:49:18 -0400 (0:00:00.408) 0:02:41.303 *********** >2018-07-13 20:49:18,134 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:18,155 p=5867 u=mistral | TASK [Remove deployed file for CephStorageDeployment when previous deployment failed] *** >2018-07-13 20:49:18,155 p=5867 u=mistral | Friday 13 July 2018 20:49:18 -0400 (0:00:00.039) 0:02:41.343 *********** >2018-07-13 20:49:18,174 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-07-13 20:49:18,195 p=5867 u=mistral | TASK [Force remove deployed file for CephStorageDeployment] ******************** >2018-07-13 20:49:18,195 p=5867 u=mistral | Friday 13 July 2018 20:49:18 -0400 (0:00:00.040) 0:02:41.383 *********** >2018-07-13 20:49:18,212 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:18,233 p=5867 u=mistral | TASK [Run deployment CephStorageDeployment] ************************************ >2018-07-13 20:49:18,233 p=5867 u=mistral | Friday 13 July 2018 20:49:18 -0400 (0:00:00.037) 0:02:41.421 *********** >2018-07-13 20:49:19,199 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/bd434bbe-2870-46bb-b351-f217a67c0826.notify.json)", "delta": "0:00:00.579865", "end": "2018-07-13 20:49:19.158261", "rc": 0, "start": "2018-07-13 20:49:18.578396", "stderr": "[2018-07-13 20:49:18,602] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/bd434bbe-2870-46bb-b351-f217a67c0826.json\n[2018-07-13 20:49:18,733] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-07-13 20:49:18,733] (heat-config) [DEBUG] \n[2018-07-13 20:49:18,733] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-07-13 20:49:18,733] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/bd434bbe-2870-46bb-b351-f217a67c0826.json < /var/lib/heat-config/deployed/bd434bbe-2870-46bb-b351-f217a67c0826.notify.json\n[2018-07-13 20:49:19,152] (heat-config) [INFO] \n[2018-07-13 20:49:19,152] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:49:18,602] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/bd434bbe-2870-46bb-b351-f217a67c0826.json", "[2018-07-13 20:49:18,733] (heat-config) 
[INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-07-13 20:49:18,733] (heat-config) [DEBUG] ", "[2018-07-13 20:49:18,733] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-07-13 20:49:18,733] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/bd434bbe-2870-46bb-b351-f217a67c0826.json < /var/lib/heat-config/deployed/bd434bbe-2870-46bb-b351-f217a67c0826.notify.json", "[2018-07-13 20:49:19,152] (heat-config) [INFO] ", "[2018-07-13 20:49:19,152] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:49:19,221 p=5867 u=mistral | TASK [Output for CephStorageDeployment] **************************************** >2018-07-13 20:49:19,222 p=5867 u=mistral | Friday 13 July 2018 20:49:19 -0400 (0:00:00.988) 0:02:42.410 *********** >2018-07-13 20:49:19,331 p=5867 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:49:18,602] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/bd434bbe-2870-46bb-b351-f217a67c0826.json", > "[2018-07-13 20:49:18,733] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-07-13 20:49:18,733] (heat-config) [DEBUG] ", > "[2018-07-13 20:49:18,733] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-07-13 20:49:18,733] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/bd434bbe-2870-46bb-b351-f217a67c0826.json < /var/lib/heat-config/deployed/bd434bbe-2870-46bb-b351-f217a67c0826.notify.json", > "[2018-07-13 20:49:19,152] (heat-config) [INFO] ", > "[2018-07-13 20:49:19,152] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:49:19,353 p=5867 u=mistral | TASK [Check-mode for Run deployment CephStorageDeployment] ********************* >2018-07-13 20:49:19,354 p=5867 u=mistral | Friday 13 July 2018 
20:49:19 -0400 (0:00:00.131) 0:02:42.541 *********** >2018-07-13 20:49:19,368 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:19,387 p=5867 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:49:19,388 p=5867 u=mistral | Friday 13 July 2018 20:49:19 -0400 (0:00:00.034) 0:02:42.576 *********** >2018-07-13 20:49:19,508 p=5867 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "ffc7195d-c49e-4cc4-ab4f-1fd0a842c969"}, "changed": false} >2018-07-13 20:49:19,528 p=5867 u=mistral | TASK [Render deployment file for CephStorageHostsDeployment] ******************* >2018-07-13 20:49:19,528 p=5867 u=mistral | Friday 13 July 2018 20:49:19 -0400 (0:00:00.140) 0:02:42.716 *********** >2018-07-13 20:49:20,183 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "f96ab0a2bb28b56b1497c2b0ca17f9a6adf9ec0c", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageHostsDeployment-ffc7195d-c49e-4cc4-ab4f-1fd0a842c969", "gid": 0, "group": "root", "md5sum": "a2fc464c5bc9d158789bef288382890e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4431, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529359.65-127819065332983/source", "state": "file", "uid": 0} >2018-07-13 20:49:20,205 p=5867 u=mistral | TASK [Check if deployed file exists for CephStorageHostsDeployment] ************ >2018-07-13 20:49:20,206 p=5867 u=mistral | Friday 13 July 2018 20:49:20 -0400 (0:00:00.677) 0:02:43.394 *********** >2018-07-13 20:49:20,589 p=5867 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:49:20,611 p=5867 u=mistral | TASK [Check previous deployment rc for CephStorageHostsDeployment] ************* >2018-07-13 20:49:20,611 p=5867 u=mistral | Friday 13 July 2018 20:49:20 -0400 (0:00:00.405) 0:02:43.799 *********** >2018-07-13 20:49:20,634 p=5867 
u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:20,700 p=5867 u=mistral | TASK [Remove deployed file for CephStorageHostsDeployment when previous deployment failed] *** >2018-07-13 20:49:20,700 p=5867 u=mistral | Friday 13 July 2018 20:49:20 -0400 (0:00:00.088) 0:02:43.888 *********** >2018-07-13 20:49:20,720 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:20,741 p=5867 u=mistral | TASK [Force remove deployed file for CephStorageHostsDeployment] *************** >2018-07-13 20:49:20,741 p=5867 u=mistral | Friday 13 July 2018 20:49:20 -0400 (0:00:00.041) 0:02:43.929 *********** >2018-07-13 20:49:20,764 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:20,785 p=5867 u=mistral | TASK [Run deployment CephStorageHostsDeployment] ******************************* >2018-07-13 20:49:20,785 p=5867 u=mistral | Friday 13 July 2018 20:49:20 -0400 (0:00:00.043) 0:02:43.973 *********** >2018-07-13 20:49:21,652 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/ffc7195d-c49e-4cc4-ab4f-1fd0a842c969.notify.json)", "delta": "0:00:00.503794", "end": "2018-07-13 20:49:21.582401", "rc": 0, "start": "2018-07-13 20:49:21.078607", "stderr": "[2018-07-13 20:49:21,103] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/ffc7195d-c49e-4cc4-ab4f-1fd0a842c969.json\n[2018-07-13 20:49:21,151] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 
overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain 
controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.10 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}\n[2018-07-13 20:49:21,151] (heat-config) [DEBUG] [2018-07-13 20:49:21,124] (heat-config) [INFO] hosts=192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-07-13 20:49:21,124] (heat-config) [INFO] 
deploy_server_id=822c871f-59f2-416c-a0da-a7612346ffb2\n[2018-07-13 20:49:21,124] (heat-config) [INFO] deploy_action=CREATE\n[2018-07-13 20:49:21,124] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-itl6mnyqqpqr-0-msjtclmf73zb/9fb1ede9-cc2d-4fd1-8d56-0a2d721adabe\n[2018-07-13 20:49:21,124] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-07-13 20:49:21,124] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-07-13 20:49:21,124] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/ffc7195d-c49e-4cc4-ab4f-1fd0a842c969\n[2018-07-13 20:49:21,147] (heat-config) [INFO] \n[2018-07-13 20:49:21,147] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain 
compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 
ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n++ hostname -s\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain 
ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n++ hostname -s\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 'entries=192.168.24.10 
overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n++ hostname -s\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.10 
overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n++ hostname -s\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries /etc/hosts '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 
overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n++ hostname -s\n+ sed -i /ceph-0/d /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.14 overcloud.internalapi.localdomain\n10.0.0.108 overcloud.localdomain\n172.17.1.19 controller-0.localdomain controller-0\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.7 controller-0.management.localdomain controller-0.management\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.21 ceph-0.localdomain ceph-0\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-07-13 20:49:21,147] 
(heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/ffc7195d-c49e-4cc4-ab4f-1fd0a842c969\n\n[2018-07-13 20:49:21,151] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-07-13 20:49:21,152] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ffc7195d-c49e-4cc4-ab4f-1fd0a842c969.json < /var/lib/heat-config/deployed/ffc7195d-c49e-4cc4-ab4f-1fd0a842c969.notify.json\n[2018-07-13 20:49:21,576] (heat-config) [INFO] \n[2018-07-13 20:49:21,576] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:49:21,103] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/ffc7195d-c49e-4cc4-ab4f-1fd0a842c969.json", "[2018-07-13 20:49:21,151] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain 
compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 
compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 
ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 
ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain 
ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 
ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 
ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.10 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", "[2018-07-13 20:49:21,151] (heat-config) [DEBUG] [2018-07-13 20:49:21,124] (heat-config) [INFO] hosts=192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-07-13 20:49:21,124] (heat-config) 
[INFO] deploy_server_id=822c871f-59f2-416c-a0da-a7612346ffb2", "[2018-07-13 20:49:21,124] (heat-config) [INFO] deploy_action=CREATE", "[2018-07-13 20:49:21,124] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-itl6mnyqqpqr-0-msjtclmf73zb/9fb1ede9-cc2d-4fd1-8d56-0a2d721adabe", "[2018-07-13 20:49:21,124] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-07-13 20:49:21,124] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-07-13 20:49:21,124] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/ffc7195d-c49e-4cc4-ab4f-1fd0a842c969", "[2018-07-13 20:49:21,147] (heat-config) [INFO] ", "[2018-07-13 20:49:21,147] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", 
"192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 
ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "++ hostname -s", "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", 
"172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain 
ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "++ hostname -s", "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", 
"172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain 
ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "++ hostname -s", "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", 
"192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain 
ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain 
ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "++ hostname -s", "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 
ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "++ hostname -s", "+ sed -i /ceph-0/d /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.14 overcloud.internalapi.localdomain", "10.0.0.108 overcloud.localdomain", "172.17.1.19 controller-0.localdomain controller-0", "172.17.3.20 controller-0.storage.localdomain controller-0.storage", "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.7 controller-0.management.localdomain controller-0.management", "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.17 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.21 ceph-0.localdomain ceph-0", "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.12 ceph-0.external.localdomain ceph-0.external", "192.168.24.12 ceph-0.management.localdomain ceph-0.management", "192.168.24.12 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "", "[2018-07-13 20:49:21,147] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/ffc7195d-c49e-4cc4-ab4f-1fd0a842c969", "", "[2018-07-13 20:49:21,151] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-07-13 20:49:21,152] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ffc7195d-c49e-4cc4-ab4f-1fd0a842c969.json < /var/lib/heat-config/deployed/ffc7195d-c49e-4cc4-ab4f-1fd0a842c969.notify.json", "[2018-07-13 20:49:21,576] (heat-config) [INFO] ", "[2018-07-13 20:49:21,576] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:49:21,687 p=5867 u=mistral | TASK [Output for CephStorageHostsDeployment] *********************************** >2018-07-13 20:49:21,687 p=5867 u=mistral | Friday 13 July 2018 20:49:21 -0400 (0:00:00.902) 0:02:44.875 *********** >2018-07-13 20:49:21,765 p=5867 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:49:21,103] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/ffc7195d-c49e-4cc4-ab4f-1fd0a842c969.json", > "[2018-07-13 20:49:21,151] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 
overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain 
controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.10 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.10 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.14 overcloud.internalapi.localdomain\\n10.0.0.108 overcloud.localdomain\\n172.17.1.19 controller-0.localdomain controller-0\\n172.17.3.20 controller-0.storage.localdomain controller-0.storage\\n172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.7 controller-0.management.localdomain controller-0.management\\n192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.17 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.21 ceph-0.localdomain ceph-0\\n172.17.3.21 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.12 ceph-0.external.localdomain ceph-0.external\\n192.168.24.12 ceph-0.management.localdomain ceph-0.management\\n192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", > "[2018-07-13 20:49:21,151] (heat-config) [DEBUG] [2018-07-13 20:49:21,124] (heat-config) [INFO] hosts=192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 
ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-07-13 20:49:21,124] (heat-config) [INFO] deploy_server_id=822c871f-59f2-416c-a0da-a7612346ffb2", > "[2018-07-13 20:49:21,124] (heat-config) [INFO] deploy_action=CREATE", > "[2018-07-13 20:49:21,124] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-itl6mnyqqpqr-0-msjtclmf73zb/9fb1ede9-cc2d-4fd1-8d56-0a2d721adabe", > "[2018-07-13 20:49:21,124] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-07-13 20:49:21,124] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-07-13 20:49:21,124] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/ffc7195d-c49e-4cc4-ab4f-1fd0a842c969", > "[2018-07-13 20:49:21,147] (heat-config) [INFO] ", > "[2018-07-13 20:49:21,147] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", 
> "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 
compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > 
"192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", 
> "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain 
compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > 
"172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain 
compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain 
compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain 
controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 
controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > 
"192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 
ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 
ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.10 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.14 overcloud.internalapi.localdomain", > "10.0.0.108 overcloud.localdomain", > "172.17.1.19 controller-0.localdomain controller-0", > "172.17.3.20 controller-0.storage.localdomain controller-0.storage", > "172.17.4.18 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.19 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.7 controller-0.management.localdomain controller-0.management", > "192.168.24.7 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.17 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.21 ceph-0.localdomain ceph-0", > "172.17.3.21 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.19 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.12 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.12 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.12 ceph-0.external.localdomain 
ceph-0.external", > "192.168.24.12 ceph-0.management.localdomain ceph-0.management", > "192.168.24.12 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-07-13 20:49:21,147] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/ffc7195d-c49e-4cc4-ab4f-1fd0a842c969", > "", > "[2018-07-13 20:49:21,151] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-07-13 20:49:21,152] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ffc7195d-c49e-4cc4-ab4f-1fd0a842c969.json < /var/lib/heat-config/deployed/ffc7195d-c49e-4cc4-ab4f-1fd0a842c969.notify.json", > "[2018-07-13 20:49:21,576] (heat-config) [INFO] ", > "[2018-07-13 20:49:21,576] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:49:21,799 p=5867 u=mistral | TASK [Check-mode for Run deployment CephStorageHostsDeployment] **************** >2018-07-13 20:49:21,799 p=5867 u=mistral | Friday 13 July 2018 20:49:21 -0400 (0:00:00.112) 0:02:44.987 *********** >2018-07-13 20:49:21,815 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:21,833 p=5867 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:49:21,833 p=5867 u=mistral | Friday 13 July 2018 20:49:21 -0400 (0:00:00.033) 0:02:45.021 *********** >2018-07-13 20:49:21,975 p=5867 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "f4814769-8557-4b1f-9e10-1dae2d44db31"}, "changed": false} >2018-07-13 20:49:21,995 p=5867 u=mistral | TASK [Render deployment file for CephStorageAllNodesDeployment] **************** >2018-07-13 20:49:21,995 p=5867 u=mistral | Friday 13 July 2018 20:49:21 -0400 (0:00:00.161) 0:02:45.183 *********** >2018-07-13 20:49:22,688 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "3d6ff0bd887aa35f1ff5913132035578cddbcb33", "dest": 
"/var/lib/heat-config/tripleo-config-download/CephStorageAllNodesDeployment-f4814769-8557-4b1f-9e10-1dae2d44db31", "gid": 0, "group": "root", "md5sum": "e594e907dab99eda91032af195513cad", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19024, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529362.14-73976133159163/source", "state": "file", "uid": 0} >2018-07-13 20:49:22,708 p=5867 u=mistral | TASK [Check if deployed file exists for CephStorageAllNodesDeployment] ********* >2018-07-13 20:49:22,708 p=5867 u=mistral | Friday 13 July 2018 20:49:22 -0400 (0:00:00.712) 0:02:45.896 *********** >2018-07-13 20:49:23,027 p=5867 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:49:23,048 p=5867 u=mistral | TASK [Check previous deployment rc for CephStorageAllNodesDeployment] ********** >2018-07-13 20:49:23,048 p=5867 u=mistral | Friday 13 July 2018 20:49:23 -0400 (0:00:00.340) 0:02:46.236 *********** >2018-07-13 20:49:23,066 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:23,086 p=5867 u=mistral | TASK [Remove deployed file for CephStorageAllNodesDeployment when previous deployment failed] *** >2018-07-13 20:49:23,086 p=5867 u=mistral | Friday 13 July 2018 20:49:23 -0400 (0:00:00.038) 0:02:46.274 *********** >2018-07-13 20:49:23,106 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:23,126 p=5867 u=mistral | TASK [Force remove deployed file for CephStorageAllNodesDeployment] ************ >2018-07-13 20:49:23,126 p=5867 u=mistral | Friday 13 July 2018 20:49:23 -0400 (0:00:00.039) 0:02:46.314 *********** >2018-07-13 20:49:23,145 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:23,165 p=5867 u=mistral | TASK [Run deployment CephStorageAllNodesDeployment] 
**************************** >2018-07-13 20:49:23,165 p=5867 u=mistral | Friday 13 July 2018 20:49:23 -0400 (0:00:00.038) 0:02:46.353 *********** >2018-07-13 20:49:24,065 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/f4814769-8557-4b1f-9e10-1dae2d44db31.notify.json)", "delta": "0:00:00.577114", "end": "2018-07-13 20:49:24.024568", "rc": 0, "start": "2018-07-13 20:49:23.447454", "stderr": "[2018-07-13 20:49:23,473] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/f4814769-8557-4b1f-9e10-1dae2d44db31.json\n[2018-07-13 20:49:23,598] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-07-13 20:49:23,599] (heat-config) [DEBUG] \n[2018-07-13 20:49:23,599] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-07-13 20:49:23,599] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f4814769-8557-4b1f-9e10-1dae2d44db31.json < /var/lib/heat-config/deployed/f4814769-8557-4b1f-9e10-1dae2d44db31.notify.json\n[2018-07-13 20:49:24,019] (heat-config) [INFO] \n[2018-07-13 20:49:24,019] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:49:23,473] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/f4814769-8557-4b1f-9e10-1dae2d44db31.json", "[2018-07-13 20:49:23,598] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-07-13 20:49:23,599] (heat-config) [DEBUG] ", "[2018-07-13 20:49:23,599] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-07-13 20:49:23,599] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f4814769-8557-4b1f-9e10-1dae2d44db31.json < /var/lib/heat-config/deployed/f4814769-8557-4b1f-9e10-1dae2d44db31.notify.json", 
"[2018-07-13 20:49:24,019] (heat-config) [INFO] ", "[2018-07-13 20:49:24,019] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:49:24,091 p=5867 u=mistral | TASK [Output for CephStorageAllNodesDeployment] ******************************** >2018-07-13 20:49:24,091 p=5867 u=mistral | Friday 13 July 2018 20:49:24 -0400 (0:00:00.925) 0:02:47.279 *********** >2018-07-13 20:49:24,143 p=5867 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:49:23,473] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/f4814769-8557-4b1f-9e10-1dae2d44db31.json", > "[2018-07-13 20:49:23,598] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-07-13 20:49:23,599] (heat-config) [DEBUG] ", > "[2018-07-13 20:49:23,599] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-07-13 20:49:23,599] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f4814769-8557-4b1f-9e10-1dae2d44db31.json < /var/lib/heat-config/deployed/f4814769-8557-4b1f-9e10-1dae2d44db31.notify.json", > "[2018-07-13 20:49:24,019] (heat-config) [INFO] ", > "[2018-07-13 20:49:24,019] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:49:24,163 p=5867 u=mistral | TASK [Check-mode for Run deployment CephStorageAllNodesDeployment] ************* >2018-07-13 20:49:24,163 p=5867 u=mistral | Friday 13 July 2018 20:49:24 -0400 (0:00:00.072) 0:02:47.351 *********** >2018-07-13 20:49:24,179 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:24,197 p=5867 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:49:24,198 p=5867 u=mistral | Friday 13 July 2018 20:49:24 -0400 (0:00:00.034) 0:02:47.386 *********** >2018-07-13 20:49:24,256 p=5867 u=mistral 
| ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "18b0d02d-8fd1-4147-9a3d-94ef965b815f"}, "changed": false} >2018-07-13 20:49:24,276 p=5867 u=mistral | TASK [Render deployment file for CephStorageAllNodesValidationDeployment] ****** >2018-07-13 20:49:24,276 p=5867 u=mistral | Friday 13 July 2018 20:49:24 -0400 (0:00:00.078) 0:02:47.464 *********** >2018-07-13 20:49:24,875 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "746d2a4e26a8f3e8bad53e55b5bb0ad56f6d7c79", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageAllNodesValidationDeployment-18b0d02d-8fd1-4147-9a3d-94ef965b815f", "gid": 0, "group": "root", "md5sum": "475edae9cfa2092a877f570d531e6d2b", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4942, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529364.33-229517195817941/source", "state": "file", "uid": 0} >2018-07-13 20:49:24,896 p=5867 u=mistral | TASK [Check if deployed file exists for CephStorageAllNodesValidationDeployment] *** >2018-07-13 20:49:24,896 p=5867 u=mistral | Friday 13 July 2018 20:49:24 -0400 (0:00:00.619) 0:02:48.084 *********** >2018-07-13 20:49:25,217 p=5867 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:49:25,238 p=5867 u=mistral | TASK [Check previous deployment rc for CephStorageAllNodesValidationDeployment] *** >2018-07-13 20:49:25,238 p=5867 u=mistral | Friday 13 July 2018 20:49:25 -0400 (0:00:00.341) 0:02:48.426 *********** >2018-07-13 20:49:25,259 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:25,280 p=5867 u=mistral | TASK [Remove deployed file for CephStorageAllNodesValidationDeployment when previous deployment failed] *** >2018-07-13 20:49:25,281 p=5867 u=mistral | Friday 13 July 2018 20:49:25 -0400 (0:00:00.042) 0:02:48.469 *********** >2018-07-13 20:49:25,299 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-07-13 20:49:25,319 p=5867 u=mistral | TASK [Force remove deployed file for CephStorageAllNodesValidationDeployment] *** >2018-07-13 20:49:25,319 p=5867 u=mistral | Friday 13 July 2018 20:49:25 -0400 (0:00:00.038) 0:02:48.507 *********** >2018-07-13 20:49:25,336 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:25,356 p=5867 u=mistral | TASK [Run deployment CephStorageAllNodesValidationDeployment] ****************** >2018-07-13 20:49:25,356 p=5867 u=mistral | Friday 13 July 2018 20:49:25 -0400 (0:00:00.037) 0:02:48.544 *********** >2018-07-13 20:49:26,688 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/18b0d02d-8fd1-4147-9a3d-94ef965b815f.notify.json)", "delta": "0:00:01.005890", "end": "2018-07-13 20:49:26.645616", "rc": 0, "start": "2018-07-13 20:49:25.639726", "stderr": "[2018-07-13 20:49:25,664] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/18b0d02d-8fd1-4147-9a3d-94ef965b815f.json\n[2018-07-13 20:49:26,214] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.106 for local network 10.0.0.0/24.\\nPing to 10.0.0.106 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.20 for local network 172.17.3.0/24.\\nPing to 172.17.3.20 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.18 for local network 172.17.4.0/24.\\nPing to 172.17.4.18 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.7 for local network 192.168.24.0/24.\\nPing to 192.168.24.7 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-07-13 20:49:26,214] (heat-config) [DEBUG] [2018-07-13 
20:49:25,684] (heat-config) [INFO] ping_test_ips=172.17.3.20 172.17.4.18 172.17.1.19 172.17.2.15 10.0.0.106 192.168.24.7\n[2018-07-13 20:49:25,685] (heat-config) [INFO] validate_fqdn=False\n[2018-07-13 20:49:25,685] (heat-config) [INFO] validate_ntp=True\n[2018-07-13 20:49:25,685] (heat-config) [INFO] deploy_server_id=822c871f-59f2-416c-a0da-a7612346ffb2\n[2018-07-13 20:49:25,685] (heat-config) [INFO] deploy_action=CREATE\n[2018-07-13 20:49:25,685] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-gfgxbbmcczmn-0-pikomgw4n56z/ee195e2a-ef74-42f0-8556-38579fe1747e\n[2018-07-13 20:49:25,685] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-07-13 20:49:25,685] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-07-13 20:49:25,685] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/18b0d02d-8fd1-4147-9a3d-94ef965b815f\n[2018-07-13 20:49:26,210] (heat-config) [INFO] Trying to ping 10.0.0.106 for local network 10.0.0.0/24.\nPing to 10.0.0.106 succeeded.\nSUCCESS\nTrying to ping 172.17.3.20 for local network 172.17.3.0/24.\nPing to 172.17.3.20 succeeded.\nSUCCESS\nTrying to ping 172.17.4.18 for local network 172.17.4.0/24.\nPing to 172.17.4.18 succeeded.\nSUCCESS\nTrying to ping 192.168.24.7 for local network 192.168.24.0/24.\nPing to 192.168.24.7 succeeded.\nSUCCESS\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\nSUCCESS\n\n[2018-07-13 20:49:26,210] (heat-config) [DEBUG] \n[2018-07-13 20:49:26,210] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/18b0d02d-8fd1-4147-9a3d-94ef965b815f\n\n[2018-07-13 20:49:26,214] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-07-13 20:49:26,214] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/18b0d02d-8fd1-4147-9a3d-94ef965b815f.json < 
/var/lib/heat-config/deployed/18b0d02d-8fd1-4147-9a3d-94ef965b815f.notify.json\n[2018-07-13 20:49:26,639] (heat-config) [INFO] \n[2018-07-13 20:49:26,640] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:49:25,664] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/18b0d02d-8fd1-4147-9a3d-94ef965b815f.json", "[2018-07-13 20:49:26,214] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.106 for local network 10.0.0.0/24.\\nPing to 10.0.0.106 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.20 for local network 172.17.3.0/24.\\nPing to 172.17.3.20 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.18 for local network 172.17.4.0/24.\\nPing to 172.17.4.18 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.7 for local network 192.168.24.0/24.\\nPing to 192.168.24.7 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-07-13 20:49:26,214] (heat-config) [DEBUG] [2018-07-13 20:49:25,684] (heat-config) [INFO] ping_test_ips=172.17.3.20 172.17.4.18 172.17.1.19 172.17.2.15 10.0.0.106 192.168.24.7", "[2018-07-13 20:49:25,685] (heat-config) [INFO] validate_fqdn=False", "[2018-07-13 20:49:25,685] (heat-config) [INFO] validate_ntp=True", "[2018-07-13 20:49:25,685] (heat-config) [INFO] deploy_server_id=822c871f-59f2-416c-a0da-a7612346ffb2", "[2018-07-13 20:49:25,685] (heat-config) [INFO] deploy_action=CREATE", "[2018-07-13 20:49:25,685] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-gfgxbbmcczmn-0-pikomgw4n56z/ee195e2a-ef74-42f0-8556-38579fe1747e", "[2018-07-13 20:49:25,685] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-07-13 20:49:25,685] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-07-13 20:49:25,685] (heat-config) [DEBUG] Running 
/var/lib/heat-config/heat-config-script/18b0d02d-8fd1-4147-9a3d-94ef965b815f", "[2018-07-13 20:49:26,210] (heat-config) [INFO] Trying to ping 10.0.0.106 for local network 10.0.0.0/24.", "Ping to 10.0.0.106 succeeded.", "SUCCESS", "Trying to ping 172.17.3.20 for local network 172.17.3.0/24.", "Ping to 172.17.3.20 succeeded.", "SUCCESS", "Trying to ping 172.17.4.18 for local network 172.17.4.0/24.", "Ping to 172.17.4.18 succeeded.", "SUCCESS", "Trying to ping 192.168.24.7 for local network 192.168.24.0/24.", "Ping to 192.168.24.7 succeeded.", "SUCCESS", "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", "SUCCESS", "", "[2018-07-13 20:49:26,210] (heat-config) [DEBUG] ", "[2018-07-13 20:49:26,210] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/18b0d02d-8fd1-4147-9a3d-94ef965b815f", "", "[2018-07-13 20:49:26,214] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-07-13 20:49:26,214] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/18b0d02d-8fd1-4147-9a3d-94ef965b815f.json < /var/lib/heat-config/deployed/18b0d02d-8fd1-4147-9a3d-94ef965b815f.notify.json", "[2018-07-13 20:49:26,639] (heat-config) [INFO] ", "[2018-07-13 20:49:26,640] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:49:26,710 p=5867 u=mistral | TASK [Output for CephStorageAllNodesValidationDeployment] ********************** >2018-07-13 20:49:26,710 p=5867 u=mistral | Friday 13 July 2018 20:49:26 -0400 (0:00:01.354) 0:02:49.898 *********** >2018-07-13 20:49:26,760 p=5867 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:49:25,664] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/18b0d02d-8fd1-4147-9a3d-94ef965b815f.json", > "[2018-07-13 20:49:26,214] (heat-config) [INFO] {\"deploy_stdout\": 
\"Trying to ping 10.0.0.106 for local network 10.0.0.0/24.\\nPing to 10.0.0.106 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.20 for local network 172.17.3.0/24.\\nPing to 172.17.3.20 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.18 for local network 172.17.4.0/24.\\nPing to 172.17.4.18 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.7 for local network 192.168.24.0/24.\\nPing to 192.168.24.7 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-07-13 20:49:26,214] (heat-config) [DEBUG] [2018-07-13 20:49:25,684] (heat-config) [INFO] ping_test_ips=172.17.3.20 172.17.4.18 172.17.1.19 172.17.2.15 10.0.0.106 192.168.24.7", > "[2018-07-13 20:49:25,685] (heat-config) [INFO] validate_fqdn=False", > "[2018-07-13 20:49:25,685] (heat-config) [INFO] validate_ntp=True", > "[2018-07-13 20:49:25,685] (heat-config) [INFO] deploy_server_id=822c871f-59f2-416c-a0da-a7612346ffb2", > "[2018-07-13 20:49:25,685] (heat-config) [INFO] deploy_action=CREATE", > "[2018-07-13 20:49:25,685] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-gfgxbbmcczmn-0-pikomgw4n56z/ee195e2a-ef74-42f0-8556-38579fe1747e", > "[2018-07-13 20:49:25,685] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-07-13 20:49:25,685] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-07-13 20:49:25,685] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/18b0d02d-8fd1-4147-9a3d-94ef965b815f", > "[2018-07-13 20:49:26,210] (heat-config) [INFO] Trying to ping 10.0.0.106 for local network 10.0.0.0/24.", > "Ping to 10.0.0.106 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.20 for local network 172.17.3.0/24.", > "Ping to 172.17.3.20 succeeded.", > "SUCCESS", > "Trying to ping 172.17.4.18 for local network 172.17.4.0/24.", > "Ping 
to 172.17.4.18 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.7 for local network 192.168.24.0/24.", > "Ping to 192.168.24.7 succeeded.", > "SUCCESS", > "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", > "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", > "SUCCESS", > "", > "[2018-07-13 20:49:26,210] (heat-config) [DEBUG] ", > "[2018-07-13 20:49:26,210] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/18b0d02d-8fd1-4147-9a3d-94ef965b815f", > "", > "[2018-07-13 20:49:26,214] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-07-13 20:49:26,214] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/18b0d02d-8fd1-4147-9a3d-94ef965b815f.json < /var/lib/heat-config/deployed/18b0d02d-8fd1-4147-9a3d-94ef965b815f.notify.json", > "[2018-07-13 20:49:26,639] (heat-config) [INFO] ", > "[2018-07-13 20:49:26,640] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:49:26,781 p=5867 u=mistral | TASK [Check-mode for Run deployment CephStorageAllNodesValidationDeployment] *** >2018-07-13 20:49:26,781 p=5867 u=mistral | Friday 13 July 2018 20:49:26 -0400 (0:00:00.070) 0:02:49.969 *********** >2018-07-13 20:49:26,796 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:26,816 p=5867 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:49:26,816 p=5867 u=mistral | Friday 13 July 2018 20:49:26 -0400 (0:00:00.034) 0:02:50.004 *********** >2018-07-13 20:49:26,866 p=5867 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "47f15d28-9bed-4eec-8c61-e316abeae15a"}, "changed": false} >2018-07-13 20:49:26,887 p=5867 u=mistral | TASK [Render deployment file for CephStorageArtifactsDeploy] ******************* >2018-07-13 20:49:26,888 p=5867 u=mistral | Friday 13 July 2018 20:49:26 -0400 
(0:00:00.071) 0:02:50.076 *********** >2018-07-13 20:49:27,493 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "bc9d666d6d80db041bfd5baf447140691915b0ed", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageArtifactsDeploy-47f15d28-9bed-4eec-8c61-e316abeae15a", "gid": 0, "group": "root", "md5sum": "464160a776f3f36c466f978f7c8b3ba1", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2023, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529366.94-84111077291486/source", "state": "file", "uid": 0} >2018-07-13 20:49:27,514 p=5867 u=mistral | TASK [Check if deployed file exists for CephStorageArtifactsDeploy] ************ >2018-07-13 20:49:27,514 p=5867 u=mistral | Friday 13 July 2018 20:49:27 -0400 (0:00:00.626) 0:02:50.702 *********** >2018-07-13 20:49:27,836 p=5867 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:49:27,858 p=5867 u=mistral | TASK [Check previous deployment rc for CephStorageArtifactsDeploy] ************* >2018-07-13 20:49:27,858 p=5867 u=mistral | Friday 13 July 2018 20:49:27 -0400 (0:00:00.344) 0:02:51.046 *********** >2018-07-13 20:49:27,880 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:27,902 p=5867 u=mistral | TASK [Remove deployed file for CephStorageArtifactsDeploy when previous deployment failed] *** >2018-07-13 20:49:27,902 p=5867 u=mistral | Friday 13 July 2018 20:49:27 -0400 (0:00:00.043) 0:02:51.090 *********** >2018-07-13 20:49:27,923 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:27,945 p=5867 u=mistral | TASK [Force remove deployed file for CephStorageArtifactsDeploy] *************** >2018-07-13 20:49:27,945 p=5867 u=mistral | Friday 13 July 2018 20:49:27 -0400 (0:00:00.043) 0:02:51.133 *********** >2018-07-13 20:49:27,965 p=5867 u=mistral | skipping: [ceph-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:27,987 p=5867 u=mistral | TASK [Run deployment CephStorageArtifactsDeploy] ******************************* >2018-07-13 20:49:27,988 p=5867 u=mistral | Friday 13 July 2018 20:49:27 -0400 (0:00:00.042) 0:02:51.176 *********** >2018-07-13 20:49:28,799 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/47f15d28-9bed-4eec-8c61-e316abeae15a.notify.json)", "delta": "0:00:00.477724", "end": "2018-07-13 20:49:28.757953", "rc": 0, "start": "2018-07-13 20:49:28.280229", "stderr": "[2018-07-13 20:49:28,304] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/47f15d28-9bed-4eec-8c61-e316abeae15a.json\n[2018-07-13 20:49:28,334] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-07-13 20:49:28,334] (heat-config) [DEBUG] [2018-07-13 20:49:28,325] (heat-config) [INFO] artifact_urls=\n[2018-07-13 20:49:28,326] (heat-config) [INFO] deploy_server_id=822c871f-59f2-416c-a0da-a7612346ffb2\n[2018-07-13 20:49:28,326] (heat-config) [INFO] deploy_action=CREATE\n[2018-07-13 20:49:28,326] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-nwaxeaw6ioho-CephStorageArtifactsDeploy-hi2huhetqjn7-0-cpumixpidqak/e27284c4-0df3-4fdc-8ccb-2731e9dd1678\n[2018-07-13 20:49:28,326] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-07-13 20:49:28,326] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-07-13 20:49:28,326] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/47f15d28-9bed-4eec-8c61-e316abeae15a\n[2018-07-13 20:49:28,331] (heat-config) [INFO] No artifact_urls was set. 
Skipping...\n\n[2018-07-13 20:49:28,331] (heat-config) [DEBUG] \n[2018-07-13 20:49:28,331] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/47f15d28-9bed-4eec-8c61-e316abeae15a\n\n[2018-07-13 20:49:28,334] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-07-13 20:49:28,335] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/47f15d28-9bed-4eec-8c61-e316abeae15a.json < /var/lib/heat-config/deployed/47f15d28-9bed-4eec-8c61-e316abeae15a.notify.json\n[2018-07-13 20:49:28,752] (heat-config) [INFO] \n[2018-07-13 20:49:28,752] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:49:28,304] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/47f15d28-9bed-4eec-8c61-e316abeae15a.json", "[2018-07-13 20:49:28,334] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-07-13 20:49:28,334] (heat-config) [DEBUG] [2018-07-13 20:49:28,325] (heat-config) [INFO] artifact_urls=", "[2018-07-13 20:49:28,326] (heat-config) [INFO] deploy_server_id=822c871f-59f2-416c-a0da-a7612346ffb2", "[2018-07-13 20:49:28,326] (heat-config) [INFO] deploy_action=CREATE", "[2018-07-13 20:49:28,326] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-nwaxeaw6ioho-CephStorageArtifactsDeploy-hi2huhetqjn7-0-cpumixpidqak/e27284c4-0df3-4fdc-8ccb-2731e9dd1678", "[2018-07-13 20:49:28,326] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-07-13 20:49:28,326] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-07-13 20:49:28,326] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/47f15d28-9bed-4eec-8c61-e316abeae15a", "[2018-07-13 20:49:28,331] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", "", "[2018-07-13 20:49:28,331] (heat-config) [DEBUG] ", "[2018-07-13 20:49:28,331] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/47f15d28-9bed-4eec-8c61-e316abeae15a", "", "[2018-07-13 20:49:28,334] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-07-13 20:49:28,335] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/47f15d28-9bed-4eec-8c61-e316abeae15a.json < /var/lib/heat-config/deployed/47f15d28-9bed-4eec-8c61-e316abeae15a.notify.json", "[2018-07-13 20:49:28,752] (heat-config) [INFO] ", "[2018-07-13 20:49:28,752] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:49:28,819 p=5867 u=mistral | TASK [Output for CephStorageArtifactsDeploy] *********************************** >2018-07-13 20:49:28,819 p=5867 u=mistral | Friday 13 July 2018 20:49:28 -0400 (0:00:00.831) 0:02:52.007 *********** >2018-07-13 20:49:28,871 p=5867 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:49:28,304] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/47f15d28-9bed-4eec-8c61-e316abeae15a.json", > "[2018-07-13 20:49:28,334] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-07-13 20:49:28,334] (heat-config) [DEBUG] [2018-07-13 20:49:28,325] (heat-config) [INFO] artifact_urls=", > "[2018-07-13 20:49:28,326] (heat-config) [INFO] deploy_server_id=822c871f-59f2-416c-a0da-a7612346ffb2", > "[2018-07-13 20:49:28,326] (heat-config) [INFO] deploy_action=CREATE", > "[2018-07-13 20:49:28,326] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-nwaxeaw6ioho-CephStorageArtifactsDeploy-hi2huhetqjn7-0-cpumixpidqak/e27284c4-0df3-4fdc-8ccb-2731e9dd1678", > "[2018-07-13 20:49:28,326] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-07-13 20:49:28,326] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-07-13 20:49:28,326] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/47f15d28-9bed-4eec-8c61-e316abeae15a", > "[2018-07-13 20:49:28,331] (heat-config) [INFO] No artifact_urls was set. Skipping...", > "", > "[2018-07-13 20:49:28,331] (heat-config) [DEBUG] ", > "[2018-07-13 20:49:28,331] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/47f15d28-9bed-4eec-8c61-e316abeae15a", > "", > "[2018-07-13 20:49:28,334] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-07-13 20:49:28,335] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/47f15d28-9bed-4eec-8c61-e316abeae15a.json < /var/lib/heat-config/deployed/47f15d28-9bed-4eec-8c61-e316abeae15a.notify.json", > "[2018-07-13 20:49:28,752] (heat-config) [INFO] ", > "[2018-07-13 20:49:28,752] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:49:28,892 p=5867 u=mistral | TASK [Check-mode for Run deployment CephStorageArtifactsDeploy] **************** >2018-07-13 20:49:28,892 p=5867 u=mistral | Friday 13 July 2018 20:49:28 -0400 (0:00:00.072) 0:02:52.080 *********** >2018-07-13 20:49:28,907 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-07-13 20:49:28,927 p=5867 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-07-13 20:49:28,927 p=5867 u=mistral | Friday 13 July 2018 20:49:28 -0400 (0:00:00.035) 0:02:52.115 *********** >2018-07-13 20:49:28,996 p=5867 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "3467c92f-2ab6-4a24-a40c-ab1492c2bc53"}, "changed": false} >2018-07-13 20:49:29,017 p=5867 u=mistral | TASK [Render deployment file for CephStorageHostPrepDeployment] **************** >2018-07-13 20:49:29,017 p=5867 u=mistral | Friday 13 July 2018 20:49:29 -0400 (0:00:00.090) 0:02:52.205 *********** >2018-07-13 20:49:29,630 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "0a3c7dc8ddcadb61a3d8a402dbbb15f547b72b68", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageHostPrepDeployment-3467c92f-2ab6-4a24-a40c-ab1492c2bc53", "gid": 0, "group": "root", "md5sum": "b30cd9ba22a35b54872e70c9ef890540", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 20736, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529369.09-262455769577648/source", "state": "file", "uid": 0} >2018-07-13 20:49:29,650 p=5867 u=mistral | TASK [Check if deployed file exists for CephStorageHostPrepDeployment] ********* >2018-07-13 20:49:29,650 p=5867 u=mistral | Friday 13 July 2018 20:49:29 -0400 (0:00:00.632) 0:02:52.838 *********** >2018-07-13 20:49:29,970 p=5867 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:49:29,991 p=5867 u=mistral | TASK [Check previous deployment rc for CephStorageHostPrepDeployment] ********** >2018-07-13 20:49:29,992 p=5867 u=mistral | Friday 13 July 2018 20:49:29 -0400 (0:00:00.341) 0:02:53.180 *********** >2018-07-13 20:49:30,010 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:30,033 p=5867 
u=mistral | TASK [Remove deployed file for CephStorageHostPrepDeployment when previous deployment failed] *** >2018-07-13 20:49:30,033 p=5867 u=mistral | Friday 13 July 2018 20:49:30 -0400 (0:00:00.041) 0:02:53.221 *********** >2018-07-13 20:49:30,051 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:30,073 p=5867 u=mistral | TASK [Force remove deployed file for CephStorageHostPrepDeployment] ************ >2018-07-13 20:49:30,074 p=5867 u=mistral | Friday 13 July 2018 20:49:30 -0400 (0:00:00.040) 0:02:53.261 *********** >2018-07-13 20:49:30,091 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:30,111 p=5867 u=mistral | TASK [Run deployment CephStorageHostPrepDeployment] **************************** >2018-07-13 20:49:30,111 p=5867 u=mistral | Friday 13 July 2018 20:49:30 -0400 (0:00:00.037) 0:02:53.299 *********** >2018-07-13 20:49:41,893 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/3467c92f-2ab6-4a24-a40c-ab1492c2bc53.notify.json)", "delta": "0:00:12.020494", "end": "2018-07-13 20:49:42.417894", "rc": 0, "start": "2018-07-13 20:49:30.397400", "stderr": "[2018-07-13 20:49:30,421] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/3467c92f-2ab6-4a24-a40c-ab1492c2bc53.json\n[2018-07-13 20:49:42,010] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [NTP settings] ************************************************************\\nok: [localhost]\\n\\nTASK [Install ntpdate] *********************************************************\\nskipping: 
[localhost]\\n\\nTASK [Ensure system is NTP time synced] ****************************************\\nchanged: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=5 changed=3 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-07-13 20:49:42,010] (heat-config) [DEBUG] [2018-07-13 20:49:30,442] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/3467c92f-2ab6-4a24-a40c-ab1492c2bc53_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/3467c92f-2ab6-4a24-a40c-ab1492c2bc53_variables.json\n[2018-07-13 20:49:42,007] (heat-config) [INFO] Return code 0\n[2018-07-13 20:49:42,007] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [NTP settings] ************************************************************\nok: [localhost]\n\nTASK [Install ntpdate] *********************************************************\nskipping: [localhost]\n\nTASK [Ensure system is NTP time synced] ****************************************\nchanged: [localhost]\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=5 changed=3 unreachable=0 failed=0 \n\n\n[2018-07-13 20:49:42,007] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/3467c92f-2ab6-4a24-a40c-ab1492c2bc53_playbook.yaml\n\n[2018-07-13 
20:49:42,010] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-07-13 20:49:42,011] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3467c92f-2ab6-4a24-a40c-ab1492c2bc53.json < /var/lib/heat-config/deployed/3467c92f-2ab6-4a24-a40c-ab1492c2bc53.notify.json\n[2018-07-13 20:49:42,412] (heat-config) [INFO] \n[2018-07-13 20:49:42,412] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-07-13 20:49:30,421] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/3467c92f-2ab6-4a24-a40c-ab1492c2bc53.json", "[2018-07-13 20:49:42,010] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [NTP settings] ************************************************************\\nok: [localhost]\\n\\nTASK [Install ntpdate] *********************************************************\\nskipping: [localhost]\\n\\nTASK [Ensure system is NTP time synced] ****************************************\\nchanged: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=5 changed=3 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-07-13 20:49:42,010] (heat-config) [DEBUG] [2018-07-13 20:49:30,442] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/3467c92f-2ab6-4a24-a40c-ab1492c2bc53_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/3467c92f-2ab6-4a24-a40c-ab1492c2bc53_variables.json", "[2018-07-13 20:49:42,007] (heat-config) [INFO] Return 
code 0", "[2018-07-13 20:49:42,007] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [NTP settings] ************************************************************", "ok: [localhost]", "", "TASK [Install ntpdate] *********************************************************", "skipping: [localhost]", "", "TASK [Ensure system is NTP time synced] ****************************************", "changed: [localhost]", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=5 changed=3 unreachable=0 failed=0 ", "", "", "[2018-07-13 20:49:42,007] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/3467c92f-2ab6-4a24-a40c-ab1492c2bc53_playbook.yaml", "", "[2018-07-13 20:49:42,010] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-07-13 20:49:42,011] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3467c92f-2ab6-4a24-a40c-ab1492c2bc53.json < /var/lib/heat-config/deployed/3467c92f-2ab6-4a24-a40c-ab1492c2bc53.notify.json", "[2018-07-13 20:49:42,412] (heat-config) [INFO] ", "[2018-07-13 20:49:42,412] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-07-13 20:49:41,913 p=5867 u=mistral | TASK [Output for CephStorageHostPrepDeployment] ******************************** >2018-07-13 20:49:41,914 p=5867 u=mistral | Friday 13 July 2018 20:49:41 -0400 (0:00:11.802) 0:03:05.102 *********** >2018-07-13 20:49:41,964 p=5867 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-07-13 20:49:30,421] (heat-config) [DEBUG] 
Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/3467c92f-2ab6-4a24-a40c-ab1492c2bc53.json", > "[2018-07-13 20:49:42,010] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [NTP settings] ************************************************************\\nok: [localhost]\\n\\nTASK [Install ntpdate] *********************************************************\\nskipping: [localhost]\\n\\nTASK [Ensure system is NTP time synced] ****************************************\\nchanged: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=5 changed=3 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-07-13 20:49:42,010] (heat-config) [DEBUG] [2018-07-13 20:49:30,442] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/3467c92f-2ab6-4a24-a40c-ab1492c2bc53_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/3467c92f-2ab6-4a24-a40c-ab1492c2bc53_variables.json", > "[2018-07-13 20:49:42,007] (heat-config) [INFO] Return code 0", > "[2018-07-13 20:49:42,007] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [NTP settings] ************************************************************", > "ok: [localhost]", > "", > "TASK [Install ntpdate] *********************************************************", > 
"skipping: [localhost]", > "", > "TASK [Ensure system is NTP time synced] ****************************************", > "changed: [localhost]", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=5 changed=3 unreachable=0 failed=0 ", > "", > "", > "[2018-07-13 20:49:42,007] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/3467c92f-2ab6-4a24-a40c-ab1492c2bc53_playbook.yaml", > "", > "[2018-07-13 20:49:42,010] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-07-13 20:49:42,011] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3467c92f-2ab6-4a24-a40c-ab1492c2bc53.json < /var/lib/heat-config/deployed/3467c92f-2ab6-4a24-a40c-ab1492c2bc53.notify.json", > "[2018-07-13 20:49:42,412] (heat-config) [INFO] ", > "[2018-07-13 20:49:42,412] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-07-13 20:49:41,985 p=5867 u=mistral | TASK [Check-mode for Run deployment CephStorageHostPrepDeployment] ************* >2018-07-13 20:49:41,985 p=5867 u=mistral | Friday 13 July 2018 20:49:41 -0400 (0:00:00.071) 0:03:05.173 *********** >2018-07-13 20:49:42,002 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:42,007 p=5867 u=mistral | PLAY [Host prep steps] ********************************************************* >2018-07-13 20:49:42,045 p=5867 u=mistral | TASK [create persistent logs directory] **************************************** >2018-07-13 20:49:42,045 p=5867 u=mistral | Friday 13 July 2018 20:49:42 -0400 (0:00:00.059) 0:03:05.233 *********** >2018-07-13 20:49:42,100 p=5867 u=mistral | skipping: [ceph-0] => 
(item=/var/log/containers/aodh) => {"changed": false, "item": "/var/log/containers/aodh", "skip_reason": "Conditional result was False"} >2018-07-13 20:49:42,101 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "item": "/var/log/containers/httpd/aodh-api", "skip_reason": "Conditional result was False"} >2018-07-13 20:49:42,120 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/aodh) => {"changed": false, "item": "/var/log/containers/aodh", "skip_reason": "Conditional result was False"} >2018-07-13 20:49:42,123 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "item": "/var/log/containers/httpd/aodh-api", "skip_reason": "Conditional result was False"} >2018-07-13 20:49:42,412 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/aodh) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/aodh", "mode": "0755", "owner": "root", "path": "/var/log/containers/aodh", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:49:42,729 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/aodh-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/aodh-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:49:42,754 p=5867 u=mistral | TASK [aodh logs readme] ******************************************************** >2018-07-13 20:49:42,754 p=5867 u=mistral | Friday 13 July 2018 20:49:42 -0400 (0:00:00.709) 0:03:05.942 *********** >2018-07-13 20:49:42,808 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:42,823 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-07-13 20:49:43,387 p=5867 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "b6cf6dbe054f430c33d39c1a1a88593536d6e659", "msg": "Destination directory /var/log/aodh does not exist"} >2018-07-13 20:49:43,387 p=5867 u=mistral | ...ignoring >2018-07-13 20:49:43,412 p=5867 u=mistral | TASK [create persistent logs directory] **************************************** >2018-07-13 20:49:43,412 p=5867 u=mistral | Friday 13 July 2018 20:49:43 -0400 (0:00:00.657) 0:03:06.600 *********** >2018-07-13 20:49:43,469 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:43,487 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:43,762 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/aodh", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:49:43,784 p=5867 u=mistral | TASK [create persistent logs directory] **************************************** >2018-07-13 20:49:43,785 p=5867 u=mistral | Friday 13 July 2018 20:49:43 -0400 (0:00:00.372) 0:03:06.972 *********** >2018-07-13 20:49:43,840 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:43,855 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:44,134 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/ceilometer", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:49:44,157 p=5867 u=mistral | TASK [ceilometer logs readme] 
************************************************** >2018-07-13 20:49:44,157 p=5867 u=mistral | Friday 13 July 2018 20:49:44 -0400 (0:00:00.372) 0:03:07.345 *********** >2018-07-13 20:49:44,211 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:44,225 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:44,782 p=5867 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3", "msg": "Destination directory /var/log/ceilometer does not exist"} >2018-07-13 20:49:44,782 p=5867 u=mistral | ...ignoring >2018-07-13 20:49:44,805 p=5867 u=mistral | TASK [create persistent logs directory] **************************************** >2018-07-13 20:49:44,805 p=5867 u=mistral | Friday 13 July 2018 20:49:44 -0400 (0:00:00.647) 0:03:07.993 *********** >2018-07-13 20:49:44,874 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-07-13 20:49:44,875 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "item": "/var/log/containers/httpd/cinder-api", "skip_reason": "Conditional result was False"} >2018-07-13 20:49:44,886 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-07-13 20:49:44,889 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "item": "/var/log/containers/httpd/cinder-api", "skip_reason": "Conditional result was False"} >2018-07-13 20:49:45,207 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": 
"/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:49:45,526 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/cinder-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/cinder-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:49:45,549 p=5867 u=mistral | TASK [cinder logs readme] ****************************************************** >2018-07-13 20:49:45,549 p=5867 u=mistral | Friday 13 July 2018 20:49:45 -0400 (0:00:00.744) 0:03:08.737 *********** >2018-07-13 20:49:45,603 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:45,619 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:46,162 p=5867 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292", "msg": "Destination directory /var/log/cinder does not exist"} >2018-07-13 20:49:46,162 p=5867 u=mistral | ...ignoring >2018-07-13 20:49:46,187 p=5867 u=mistral | TASK [create persistent directories] ******************************************* >2018-07-13 20:49:46,187 p=5867 u=mistral | Friday 13 July 2018 20:49:46 -0400 (0:00:00.637) 0:03:09.375 *********** >2018-07-13 20:49:46,245 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-07-13 20:49:46,246 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-07-13 20:49:46,261 p=5867 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-07-13 20:49:46,266 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-07-13 20:49:46,541 p=5867 u=mistral | ok: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/cinder", "mode": "0755", "owner": "root", "path": "/var/lib/cinder", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:49:46,862 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:49:46,887 p=5867 u=mistral | TASK [ensure ceph configurations exist] 
**************************************** >2018-07-13 20:49:46,887 p=5867 u=mistral | Friday 13 July 2018 20:49:46 -0400 (0:00:00.700) 0:03:10.075 *********** >2018-07-13 20:49:46,939 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:46,954 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:47,243 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:49:47,267 p=5867 u=mistral | TASK [create persistent directories] ******************************************* >2018-07-13 20:49:47,268 p=5867 u=mistral | Friday 13 July 2018 20:49:47 -0400 (0:00:00.380) 0:03:10.456 *********** >2018-07-13 20:49:47,324 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-07-13 20:49:47,353 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-07-13 20:49:47,630 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:49:47,653 p=5867 u=mistral | TASK [create persistent directories] ******************************************* >2018-07-13 20:49:47,653 p=5867 u=mistral | Friday 13 July 2018 20:49:47 -0400 (0:00:00.385) 0:03:10.841 *********** >2018-07-13 20:49:47,707 p=5867 u=mistral | skipping: [ceph-0] => 
(item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-07-13 20:49:47,709 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-07-13 20:49:47,728 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-07-13 20:49:47,730 p=5867 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-07-13 20:49:48,001 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:49:48,323 p=5867 u=mistral | ok: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/cinder", "mode": "0755", "owner": "root", "path": "/var/lib/cinder", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:49:48,348 p=5867 u=mistral | TASK [cinder_enable_iscsi_backend fact] **************************************** >2018-07-13 20:49:48,348 p=5867 u=mistral | Friday 13 July 2018 20:49:48 -0400 (0:00:00.695) 0:03:11.536 *********** >2018-07-13 20:49:48,402 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:48,403 p=5867 u=mistral | ok: [controller-0] => {"ansible_facts": {"cinder_enable_iscsi_backend": false}, "changed": false} >2018-07-13 20:49:48,418 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-07-13 20:49:48,442 p=5867 u=mistral | TASK [cinder create LVM volume group dd] *************************************** >2018-07-13 20:49:48,443 p=5867 u=mistral | Friday 13 July 2018 20:49:48 -0400 (0:00:00.094) 0:03:11.630 *********** >2018-07-13 20:49:48,498 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:48,499 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:48,510 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:48,534 p=5867 u=mistral | TASK [cinder create LVM volume group] ****************************************** >2018-07-13 20:49:48,535 p=5867 u=mistral | Friday 13 July 2018 20:49:48 -0400 (0:00:00.092) 0:03:11.723 *********** >2018-07-13 20:49:48,563 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:48,589 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:48,601 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:48,623 p=5867 u=mistral | TASK [create persistent logs directory] **************************************** >2018-07-13 20:49:48,623 p=5867 u=mistral | Friday 13 July 2018 20:49:48 -0400 (0:00:00.088) 0:03:11.811 *********** >2018-07-13 20:49:48,680 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/glance) => {"changed": false, "item": "/var/log/containers/glance", "skip_reason": "Conditional result was False"} >2018-07-13 20:49:48,700 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/glance) => {"changed": false, "item": "/var/log/containers/glance", "skip_reason": "Conditional result was False"} 
>2018-07-13 20:49:49,015 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/glance) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/glance", "mode": "0755", "owner": "root", "path": "/var/log/containers/glance", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:49:49,042 p=5867 u=mistral | TASK [glance logs readme] ****************************************************** >2018-07-13 20:49:49,043 p=5867 u=mistral | Friday 13 July 2018 20:49:49 -0400 (0:00:00.419) 0:03:12.230 *********** >2018-07-13 20:49:49,100 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:49,116 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:49,681 p=5867 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "e368ae3272baeb19e1113009ea5dae00e797c919", "msg": "Destination directory /var/log/glance does not exist"} >2018-07-13 20:49:49,682 p=5867 u=mistral | ...ignoring >2018-07-13 20:49:49,705 p=5867 u=mistral | TASK [set_fact] **************************************************************** >2018-07-13 20:49:49,706 p=5867 u=mistral | Friday 13 July 2018 20:49:49 -0400 (0:00:00.662) 0:03:12.893 *********** >2018-07-13 20:49:49,735 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:49,760 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:49,772 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:49:49,794 p=5867 u=mistral | TASK [file] ******************************************************************** >2018-07-13 20:49:49,794 p=5867 u=mistral | Friday 13 July 2018 20:49:49 -0400 (0:00:00.088) 
0:03:12.982 ***********
>2018-07-13 20:49:49,821 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:49,848 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:49,860 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:49,882 p=5867 u=mistral | TASK [stat] ********************************************************************
>2018-07-13 20:49:49,882 p=5867 u=mistral | Friday 13 July 2018 20:49:49 -0400 (0:00:00.088) 0:03:13.070 ***********
>2018-07-13 20:49:49,912 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:49,937 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:49,950 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:49,972 p=5867 u=mistral | TASK [copy] ********************************************************************
>2018-07-13 20:49:49,972 p=5867 u=mistral | Friday 13 July 2018 20:49:49 -0400 (0:00:00.089) 0:03:13.160 ***********
>2018-07-13 20:49:50,002 p=5867 u=mistral | skipping: [controller-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:50,028 p=5867 u=mistral | skipping: [ceph-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:50,042 p=5867 u=mistral | skipping: [compute-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:50,064 p=5867 u=mistral | TASK [mount] *******************************************************************
>2018-07-13 20:49:50,064 p=5867 u=mistral | Friday 13 July 2018 20:49:50 -0400 (0:00:00.092) 0:03:13.252 ***********
>2018-07-13 20:49:50,094 p=5867 u=mistral | skipping: [controller-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:50,122 p=5867 u=mistral | skipping: [ceph-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:50,145 p=5867 u=mistral | skipping: [compute-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:50,219 p=5867 u=mistral | TASK [Mount NFS on host] *******************************************************
>2018-07-13 20:49:50,219 p=5867 u=mistral | Friday 13 July 2018 20:49:50 -0400 (0:00:00.154) 0:03:13.407 ***********
>2018-07-13 20:49:50,250 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:50,275 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:50,286 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:50,309 p=5867 u=mistral | TASK [Mount Node Staging Location] *********************************************
>2018-07-13 20:49:50,309 p=5867 u=mistral | Friday 13 July 2018 20:49:50 -0400 (0:00:00.089) 0:03:13.497 ***********
>2018-07-13 20:49:50,338 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:50,362 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:50,374 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:50,396 p=5867 u=mistral | TASK [create persistent logs directory] ****************************************
>2018-07-13 20:49:50,397 p=5867 u=mistral | Friday 13 July 2018 20:49:50 -0400 (0:00:00.087) 0:03:13.584 ***********
>2018-07-13 20:49:50,453 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "item": "/var/log/containers/gnocchi", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:50,454 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "item": "/var/log/containers/httpd/gnocchi-api", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:50,466 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "item": "/var/log/containers/gnocchi", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:50,479 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "item": "/var/log/containers/httpd/gnocchi-api", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:50,767 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/gnocchi", "mode": "0755", "owner": "root", "path": "/var/log/containers/gnocchi", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-07-13 20:49:51,109 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/gnocchi-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/gnocchi-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-07-13 20:49:51,133 p=5867 u=mistral | TASK [gnocchi logs readme] *****************************************************
>2018-07-13 20:49:51,133 p=5867 u=mistral | Friday 13 July 2018 20:49:51 -0400 (0:00:00.736) 0:03:14.321 ***********
>2018-07-13 20:49:51,191 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:51,210 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:51,798 p=5867 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "2f6114e0f135d7222e70a07579ab0b2b6f967ff8", "msg": "Destination directory /var/log/gnocchi does not exist"}
>2018-07-13 20:49:51,798 p=5867 u=mistral | ...ignoring
>2018-07-13 20:49:51,820 p=5867 u=mistral | TASK [create persistent logs directory] ****************************************
>2018-07-13 20:49:51,820 p=5867 u=mistral | Friday 13 July 2018 20:49:51 -0400 (0:00:00.686) 0:03:15.008 ***********
>2018-07-13 20:49:51,877 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:51,885 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,192 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/gnocchi", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-07-13 20:49:52,218 p=5867 u=mistral | TASK [get parameters] **********************************************************
>2018-07-13 20:49:52,218 p=5867 u=mistral | Friday 13 July 2018 20:49:52 -0400 (0:00:00.398) 0:03:15.406 ***********
>2018-07-13 20:49:52,277 p=5867 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-07-13 20:49:52,278 p=5867 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-07-13 20:49:52,289 p=5867 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-07-13 20:49:52,312 p=5867 u=mistral | TASK [get DeployedSSLCertificatePath attributes] *******************************
>2018-07-13 20:49:52,312 p=5867 u=mistral | Friday 13 July 2018 20:49:52 -0400 (0:00:00.093) 0:03:15.500 ***********
>2018-07-13 20:49:52,342 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,366 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,379 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,402 p=5867 u=mistral | TASK [Assign bootstrap node] ***************************************************
>2018-07-13 20:49:52,403 p=5867 u=mistral | Friday 13 July 2018 20:49:52 -0400 (0:00:00.090) 0:03:15.590 ***********
>2018-07-13 20:49:52,432 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,457 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,475 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,500 p=5867 u=mistral | TASK [set is_bootstrap_node fact] **********************************************
>2018-07-13 20:49:52,500 p=5867 u=mistral | Friday 13 July 2018 20:49:52 -0400 (0:00:00.097) 0:03:15.688 ***********
>2018-07-13 20:49:52,530 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,554 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,567 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,590 p=5867 u=mistral | TASK [get haproxy status] ******************************************************
>2018-07-13 20:49:52,590 p=5867 u=mistral | Friday 13 July 2018 20:49:52 -0400 (0:00:00.090) 0:03:15.778 ***********
>2018-07-13 20:49:52,644 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,645 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,656 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,680 p=5867 u=mistral | TASK [get pacemaker status] ****************************************************
>2018-07-13 20:49:52,680 p=5867 u=mistral | Friday 13 July 2018 20:49:52 -0400 (0:00:00.090) 0:03:15.868 ***********
>2018-07-13 20:49:52,709 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,736 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,748 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,771 p=5867 u=mistral | TASK [get docker status] *******************************************************
>2018-07-13 20:49:52,771 p=5867 u=mistral | Friday 13 July 2018 20:49:52 -0400 (0:00:00.090) 0:03:15.959 ***********
>2018-07-13 20:49:52,802 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,830 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,846 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,869 p=5867 u=mistral | TASK [get container_id] ********************************************************
>2018-07-13 20:49:52,869 p=5867 u=mistral | Friday 13 July 2018 20:49:52 -0400 (0:00:00.098) 0:03:16.057 ***********
>2018-07-13 20:49:52,899 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,924 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,937 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:52,959 p=5867 u=mistral | TASK [get pcs resource name for haproxy container] *****************************
>2018-07-13 20:49:52,959 p=5867 u=mistral | Friday 13 July 2018 20:49:52 -0400 (0:00:00.089) 0:03:16.147 ***********
>2018-07-13 20:49:52,988 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,013 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,025 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,047 p=5867 u=mistral | TASK [remove DeployedSSLCertificatePath if is dir] *****************************
>2018-07-13 20:49:53,047 p=5867 u=mistral | Friday 13 July 2018 20:49:53 -0400 (0:00:00.088) 0:03:16.235 ***********
>2018-07-13 20:49:53,076 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,106 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,120 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,142 p=5867 u=mistral | TASK [push certificate content] ************************************************
>2018-07-13 20:49:53,143 p=5867 u=mistral | Friday 13 July 2018 20:49:53 -0400 (0:00:00.095) 0:03:16.330 ***********
>2018-07-13 20:49:53,173 p=5867 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-07-13 20:49:53,202 p=5867 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-07-13 20:49:53,213 p=5867 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-07-13 20:49:53,237 p=5867 u=mistral | TASK [set certificate ownership] ***********************************************
>2018-07-13 20:49:53,237 p=5867 u=mistral | Friday 13 July 2018 20:49:53 -0400 (0:00:00.094) 0:03:16.425 ***********
>2018-07-13 20:49:53,264 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,288 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,301 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,323 p=5867 u=mistral | TASK [reload haproxy if enabled] ***********************************************
>2018-07-13 20:49:53,323 p=5867 u=mistral | Friday 13 July 2018 20:49:53 -0400 (0:00:00.086) 0:03:16.511 ***********
>2018-07-13 20:49:53,352 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,376 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,394 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,419 p=5867 u=mistral | TASK [restart pacemaker resource for haproxy] **********************************
>2018-07-13 20:49:53,419 p=5867 u=mistral | Friday 13 July 2018 20:49:53 -0400 (0:00:00.095) 0:03:16.607 ***********
>2018-07-13 20:49:53,449 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,474 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,488 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,510 p=5867 u=mistral | TASK [set kolla_dir fact] ******************************************************
>2018-07-13 20:49:53,510 p=5867 u=mistral | Friday 13 July 2018 20:49:53 -0400 (0:00:00.090) 0:03:16.698 ***********
>2018-07-13 20:49:53,538 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,564 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,576 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,598 p=5867 u=mistral | TASK [set certificate group on host via container] *****************************
>2018-07-13 20:49:53,598 p=5867 u=mistral | Friday 13 July 2018 20:49:53 -0400 (0:00:00.088) 0:03:16.786 ***********
>2018-07-13 20:49:53,628 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,652 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,664 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,687 p=5867 u=mistral | TASK [copy certificate from kolla directory to final location] *****************
>2018-07-13 20:49:53,687 p=5867 u=mistral | Friday 13 July 2018 20:49:53 -0400 (0:00:00.088) 0:03:16.875 ***********
>2018-07-13 20:49:53,744 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,745 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,756 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,778 p=5867 u=mistral | TASK [send restart order to haproxy container] *********************************
>2018-07-13 20:49:53,778 p=5867 u=mistral | Friday 13 July 2018 20:49:53 -0400 (0:00:00.091) 0:03:16.966 ***********
>2018-07-13 20:49:53,806 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,831 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,843 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,865 p=5867 u=mistral | TASK [create persistent directories] *******************************************
>2018-07-13 20:49:53,865 p=5867 u=mistral | Friday 13 July 2018 20:49:53 -0400 (0:00:00.086) 0:03:17.053 ***********
>2018-07-13 20:49:53,920 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/lib/haproxy) => {"changed": false, "item": "/var/lib/haproxy", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:53,939 p=5867 u=mistral | skipping: [compute-0] => (item=/var/lib/haproxy) => {"changed": false, "item": "/var/lib/haproxy", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:54,225 p=5867 u=mistral | ok: [controller-0] => (item=/var/lib/haproxy) => {"changed": false, "gid": 188, "group": "haproxy", "item": "/var/lib/haproxy", "mode": "0755", "owner": "haproxy", "path": "/var/lib/haproxy", "secontext": "system_u:object_r:haproxy_var_lib_t:s0", "size": 6, "state": "directory", "uid": 188}
>2018-07-13 20:49:54,249 p=5867 u=mistral | TASK [create persistent logs directory] ****************************************
>2018-07-13 20:49:54,249 p=5867 u=mistral | Friday 13 July 2018 20:49:54 -0400 (0:00:00.384) 0:03:17.437 ***********
>2018-07-13 20:49:54,308 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:54,309 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "item": "/var/log/containers/httpd/heat-api", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:54,324 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:54,329 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "item": "/var/log/containers/httpd/heat-api", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:54,614 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/heat) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/heat", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-07-13 20:49:54,941 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/heat-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/heat-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-07-13 20:49:54,966 p=5867 u=mistral | TASK [heat logs readme] ********************************************************
>2018-07-13 20:49:54,966 p=5867 u=mistral | Friday 13 July 2018 20:49:54 -0400 (0:00:00.716) 0:03:18.154 ***********
>2018-07-13 20:49:55,022 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:55,040 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:55,594 p=5867 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "d30ca3bda176434d31659e7379616dd162ddb246", "msg": "Destination directory /var/log/heat does not exist"}
>2018-07-13 20:49:55,594 p=5867 u=mistral | ...ignoring
>2018-07-13 20:49:55,620 p=5867 u=mistral | TASK [create persistent logs directory] ****************************************
>2018-07-13 20:49:55,620 p=5867 u=mistral | Friday 13 July 2018 20:49:55 -0400 (0:00:00.653) 0:03:18.808 ***********
>2018-07-13 20:49:55,678 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:55,679 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "item": "/var/log/containers/httpd/heat-api-cfn", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:55,694 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:55,699 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "item": "/var/log/containers/httpd/heat-api-cfn", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:55,991 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/heat) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/heat", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-07-13 20:49:56,316 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/heat-api-cfn", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/heat-api-cfn", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-07-13 20:49:56,339 p=5867 u=mistral | TASK [create persistent logs directory] ****************************************
>2018-07-13 20:49:56,339 p=5867 u=mistral | Friday 13 July 2018 20:49:56 -0400 (0:00:00.719) 0:03:19.527 ***********
>2018-07-13 20:49:56,389 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:56,405 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:56,697 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-07-13 20:49:56,723 p=5867 u=mistral | TASK [create persistent logs directory] ****************************************
>2018-07-13 20:49:56,723 p=5867 u=mistral | Friday 13 July 2018 20:49:56 -0400 (0:00:00.383) 0:03:19.911 ***********
>2018-07-13 20:49:56,778 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/horizon) => {"changed": false, "item": "/var/log/containers/horizon", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:56,779 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "item": "/var/log/containers/httpd/horizon", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:56,793 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/horizon) => {"changed": false, "item": "/var/log/containers/horizon", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:56,797 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "item": "/var/log/containers/httpd/horizon", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:57,143 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/horizon) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/horizon", "mode": "0755", "owner": "root", "path": "/var/log/containers/horizon", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-07-13 20:49:57,467 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/horizon", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/horizon", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-07-13 20:49:57,490 p=5867 u=mistral | TASK [horizon logs readme] *****************************************************
>2018-07-13 20:49:57,490 p=5867 u=mistral | Friday 13 July 2018 20:49:57 -0400 (0:00:00.766) 0:03:20.678 ***********
>2018-07-13 20:49:57,545 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:57,558 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:58,184 p=5867 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "ac324739761cb36b925d6e309482e26f7fe49b91", "msg": "Destination directory /var/log/horizon does not exist"}
>2018-07-13 20:49:58,184 p=5867 u=mistral | ...ignoring
>2018-07-13 20:49:58,244 p=5867 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] **********************************
>2018-07-13 20:49:58,244 p=5867 u=mistral | Friday 13 July 2018 20:49:58 -0400 (0:00:00.753) 0:03:21.432 ***********
>2018-07-13 20:49:58,295 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:58,310 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:58,601 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"atime": 1531529264.3006272, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "424de87cd6ae66547b285288742255731a46ab83", "ctime": 1531493185.9624753, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 921945, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1513292517.0, "nlink": 1, "path": "/lib/systemd/system/iscsid.socket", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 175, "uid": 0, "version": "1355952737", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
>2018-07-13 20:49:58,625 p=5867 u=mistral | TASK [Stop and disable iscsid.socket service] **********************************
>2018-07-13 20:49:58,625 p=5867 u=mistral | Friday 13 July 2018 20:49:58 -0400 (0:00:00.381) 0:03:21.813 ***********
>2018-07-13 20:49:58,678 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:58,691 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:59,105 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "enabled": false, "name": "iscsid.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "sysinit.target -.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Backlog": "128", "Before": "sockets.target iscsid.service shutdown.target", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Open-iSCSI iscsid Socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "Documentation": "man:iscsid(8) man:iscsiadm(8)", "FragmentPath": "/usr/lib/systemd/system/iscsid.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "iscsid.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "127792", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127792", "LimitSTACK": "18446744073709551615", "ListenStream": "@ISCSIADM_ABSTRACT_NAMESPACE", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "iscsid.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "Requires": "sysinit.target", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "iscsid.service", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "-.slice"}}
>2018-07-13 20:49:59,129 p=5867 u=mistral | TASK [create persistent logs directory] ****************************************
>2018-07-13 20:49:59,129 p=5867 u=mistral | Friday 13 July 2018 20:49:59 -0400 (0:00:00.504) 0:03:22.317 ***********
>2018-07-13 20:49:59,185 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/keystone) => {"changed": false, "item": "/var/log/containers/keystone", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:59,187 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "item": "/var/log/containers/httpd/keystone", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:59,201 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/keystone) => {"changed": false, "item": "/var/log/containers/keystone", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:59,207 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "item": "/var/log/containers/httpd/keystone", "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:59,492 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/keystone) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/keystone", "mode": "0755", "owner": "root", "path": "/var/log/containers/keystone", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-07-13 20:49:59,808 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/keystone", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/keystone", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-07-13 20:49:59,832 p=5867 u=mistral | TASK [keystone logs readme] ****************************************************
>2018-07-13 20:49:59,833 p=5867 u=mistral | Friday 13 July 2018 20:49:59 -0400 (0:00:00.655) 0:03:23.021 ***********
>2018-07-13 20:49:59,887 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:49:59,901 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:50:00,464 p=5867 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "910be882addb6df99267e9bd303f6d9bf658562e", "msg": "Destination directory /var/log/keystone does not exist"}
>2018-07-13 20:50:00,464 p=5867 u=mistral | ...ignoring
>2018-07-13 20:50:00,488 p=5867 u=mistral | TASK [memcached logs readme] ***************************************************
>2018-07-13 20:50:00,488 p=5867 u=mistral | Friday 13 July 2018 20:50:00 -0400 (0:00:00.655) 0:03:23.676 ***********
>2018-07-13 20:50:00,545 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:50:00,560 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:50:01,065 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "3b6f3952a077d2e5003df30c8c439478917cb6c4", "dest": "/var/log/memcached-readme.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/var/log/memcached-readme.txt", "secontext": "system_u:object_r:var_log_t:s0", "size": 48, "state": "file", "uid": 0}
>2018-07-13 20:50:01,088 p=5867 u=mistral | TASK [create persistent directories] *******************************************
>2018-07-13 20:50:01,088 p=5867 u=mistral | Friday 13 July 2018 20:50:01 -0400 (0:00:00.599) 0:03:24.276 ***********
>2018-07-13 20:50:01,146 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/mysql) => {"changed": false, "item": "/var/log/containers/mysql", "skip_reason": "Conditional result was False"}
>2018-07-13 20:50:01,147 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/lib/mysql) => {"changed": false, "item": "/var/lib/mysql", "skip_reason": "Conditional result was False"}
>2018-07-13 20:50:01,159 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/mysql) => {"changed": false, "item": "/var/log/containers/mysql", "skip_reason": "Conditional result was False"}
>2018-07-13 20:50:01,166 p=5867 u=mistral | skipping: [compute-0] => (item=/var/lib/mysql) => {"changed": false, "item": "/var/lib/mysql", "skip_reason": "Conditional result was False"}
>2018-07-13 20:50:01,451 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/mysql) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/mysql", "mode": "0755", "owner": "root", "path": "/var/log/containers/mysql", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-07-13 20:50:01,767 p=5867 u=mistral | ok: [controller-0] => (item=/var/lib/mysql) => {"changed": false, "gid": 27, "group": "mysql", "item": "/var/lib/mysql", "mode": "0755", "owner": "mysql", "path": "/var/lib/mysql", "secontext": "system_u:object_r:mysqld_db_t:s0", "size": 6, "state": "directory", "uid": 27}
>2018-07-13 20:50:01,793 p=5867 u=mistral | TASK [mysql logs readme] *******************************************************
>2018-07-13 20:50:01,794 p=5867 u=mistral | Friday 13 July 2018 20:50:01 -0400 (0:00:00.705) 0:03:24.981 ***********
>2018-07-13 20:50:01,848 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:50:01,861 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:50:02,375 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "de8fb5fe96200ab286121f8a09419702bd693743", "dest": "/var/log/mariadb/readme.txt", "gid": 0, "group":
"root", "mode": "0644", "owner": "root", "path": "/var/log/mariadb/readme.txt", "secontext": "system_u:object_r:mysqld_log_t:s0", "size": 78, "state": "file", "uid": 0} >2018-07-13 20:50:02,399 p=5867 u=mistral | TASK [create persistent logs directory] **************************************** >2018-07-13 20:50:02,399 p=5867 u=mistral | Friday 13 July 2018 20:50:02 -0400 (0:00:00.605) 0:03:25.587 *********** >2018-07-13 20:50:02,451 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:02,452 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "item": "/var/log/containers/httpd/neutron-api", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:02,468 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:02,473 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "item": "/var/log/containers/httpd/neutron-api", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:02,752 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/neutron) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:03,070 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/neutron-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/neutron-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 
6, "state": "directory", "uid": 0} >2018-07-13 20:50:03,093 p=5867 u=mistral | TASK [neutron logs readme] ***************************************************** >2018-07-13 20:50:03,094 p=5867 u=mistral | Friday 13 July 2018 20:50:03 -0400 (0:00:00.694) 0:03:26.282 *********** >2018-07-13 20:50:03,149 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:03,163 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:03,722 p=5867 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "f5a95f434a4aad25a9a81a045dec39159a6e8864", "msg": "Destination directory /var/log/neutron does not exist"} >2018-07-13 20:50:03,722 p=5867 u=mistral | ...ignoring >2018-07-13 20:50:03,746 p=5867 u=mistral | TASK [create persistent logs directory] **************************************** >2018-07-13 20:50:03,746 p=5867 u=mistral | Friday 13 July 2018 20:50:03 -0400 (0:00:00.652) 0:03:26.934 *********** >2018-07-13 20:50:03,798 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:03,815 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:04,098 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/neutron) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:04,123 p=5867 u=mistral | TASK [create /var/lib/neutron] ************************************************* >2018-07-13 20:50:04,123 p=5867 
u=mistral | Friday 13 July 2018 20:50:04 -0400 (0:00:00.376) 0:03:27.311 *********** >2018-07-13 20:50:04,178 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:04,193 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:04,477 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/neutron", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:04,499 p=5867 u=mistral | TASK [create persistent logs directory] **************************************** >2018-07-13 20:50:04,500 p=5867 u=mistral | Friday 13 July 2018 20:50:04 -0400 (0:00:00.376) 0:03:27.688 *********** >2018-07-13 20:50:04,552 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:04,553 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "item": "/var/log/containers/httpd/nova-api", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:04,567 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:04,574 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "item": "/var/log/containers/httpd/nova-api", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:04,872 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/nova) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/nova", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": 
"unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:05,189 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/nova-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/nova-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:05,216 p=5867 u=mistral | TASK [nova logs readme] ******************************************************** >2018-07-13 20:50:05,216 p=5867 u=mistral | Friday 13 July 2018 20:50:05 -0400 (0:00:00.716) 0:03:28.404 *********** >2018-07-13 20:50:05,276 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:05,290 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:05,836 p=5867 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "c2216cc4edf5d3ce90f10748c3243db4e1842a85", "msg": "Destination directory /var/log/nova does not exist"} >2018-07-13 20:50:05,837 p=5867 u=mistral | ...ignoring >2018-07-13 20:50:05,859 p=5867 u=mistral | TASK [create persistent logs directory] **************************************** >2018-07-13 20:50:05,859 p=5867 u=mistral | Friday 13 July 2018 20:50:05 -0400 (0:00:00.642) 0:03:29.047 *********** >2018-07-13 20:50:05,910 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:05,925 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:06,207 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:06,233 p=5867 u=mistral | TASK [create persistent logs directory] **************************************** >2018-07-13 20:50:06,234 p=5867 u=mistral | Friday 13 July 2018 20:50:06 -0400 (0:00:00.374) 0:03:29.422 *********** >2018-07-13 20:50:06,297 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:06,298 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "item": "/var/log/containers/httpd/nova-placement", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:06,314 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:06,319 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": 
false, "item": "/var/log/containers/httpd/nova-placement", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:06,595 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/nova) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/nova", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:06,907 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/nova-placement", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/nova-placement", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:06,932 p=5867 u=mistral | TASK [NTP settings] ************************************************************ >2018-07-13 20:50:06,932 p=5867 u=mistral | Friday 13 July 2018 20:50:06 -0400 (0:00:00.698) 0:03:30.120 *********** >2018-07-13 20:50:06,991 p=5867 u=mistral | ok: [controller-0] => {"ansible_facts": {"ntp_install_packages": false, "ntp_servers": ["10.35.255.6"]}, "changed": false} >2018-07-13 20:50:06,992 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:07,003 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:07,026 p=5867 u=mistral | TASK [Install ntpdate] ********************************************************* >2018-07-13 20:50:07,027 p=5867 u=mistral | Friday 13 July 2018 20:50:07 -0400 (0:00:00.094) 0:03:30.215 *********** >2018-07-13 20:50:07,056 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:07,081 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-07-13 20:50:07,099 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:07,121 p=5867 u=mistral | TASK [Ensure system is NTP time synced] **************************************** >2018-07-13 20:50:07,121 p=5867 u=mistral | Friday 13 July 2018 20:50:07 -0400 (0:00:00.094) 0:03:30.309 *********** >2018-07-13 20:50:07,180 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:07,197 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:13,729 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "cmd": ["ntpdate", "-u", "10.35.255.6"], "delta": "0:00:06.262938", "end": "2018-07-13 20:50:14.249444", "rc": 0, "start": "2018-07-13 20:50:07.986506", "stderr": "", "stderr_lines": [], "stdout": "13 Jul 20:50:14 ntpdate[22641]: adjust time server 10.35.255.6 offset -0.002505 sec", "stdout_lines": ["13 Jul 20:50:14 ntpdate[22641]: adjust time server 10.35.255.6 offset -0.002505 sec"]} >2018-07-13 20:50:13,752 p=5867 u=mistral | TASK [create persistent logs directory] **************************************** >2018-07-13 20:50:13,752 p=5867 u=mistral | Friday 13 July 2018 20:50:13 -0400 (0:00:06.630) 0:03:36.940 *********** >2018-07-13 20:50:13,846 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/panko) => {"changed": false, "item": "/var/log/containers/panko", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:13,847 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "item": "/var/log/containers/httpd/panko-api", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:13,860 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/panko) => {"changed": false, "item": 
"/var/log/containers/panko", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:13,868 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "item": "/var/log/containers/httpd/panko-api", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:14,165 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/panko) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/panko", "mode": "0755", "owner": "root", "path": "/var/log/containers/panko", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:14,488 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/panko-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/panko-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:14,513 p=5867 u=mistral | TASK [panko logs readme] ******************************************************* >2018-07-13 20:50:14,513 p=5867 u=mistral | Friday 13 July 2018 20:50:14 -0400 (0:00:00.761) 0:03:37.701 *********** >2018-07-13 20:50:14,569 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:14,584 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:15,137 p=5867 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "903397bbd82e9b1f53087e3d7e8975d851857ce2", "msg": "Destination directory /var/log/panko does not exist"} >2018-07-13 20:50:15,137 p=5867 u=mistral | ...ignoring >2018-07-13 20:50:15,164 p=5867 u=mistral | TASK [create persistent directories] ******************************************* >2018-07-13 20:50:15,164 p=5867 u=mistral | Friday 13 July 2018 20:50:15 -0400 (0:00:00.650) 0:03:38.352 *********** >2018-07-13 20:50:15,242 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/lib/rabbitmq) => {"changed": false, "item": "/var/lib/rabbitmq", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:15,244 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "item": "/var/log/containers/rabbitmq", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:15,264 p=5867 u=mistral | skipping: [compute-0] => (item=/var/lib/rabbitmq) => {"changed": false, "item": "/var/lib/rabbitmq", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:15,269 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "item": "/var/log/containers/rabbitmq", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:15,521 p=5867 u=mistral | ok: [controller-0] => (item=/var/lib/rabbitmq) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/rabbitmq", "mode": "0755", "owner": "root", "path": "/var/lib/rabbitmq", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:15,836 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/rabbitmq", "mode": "0755", "owner": "root", "path": "/var/log/containers/rabbitmq", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:15,863 p=5867 u=mistral | TASK [rabbitmq logs readme] 
**************************************************** >2018-07-13 20:50:15,863 p=5867 u=mistral | Friday 13 July 2018 20:50:15 -0400 (0:00:00.699) 0:03:39.051 *********** >2018-07-13 20:50:15,925 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:15,939 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:16,478 p=5867 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "ee241f2199f264c9d0f384cf389fe255e8bf8a77", "msg": "Destination directory /var/log/rabbitmq does not exist"} >2018-07-13 20:50:16,478 p=5867 u=mistral | ...ignoring >2018-07-13 20:50:16,504 p=5867 u=mistral | TASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] *** >2018-07-13 20:50:16,504 p=5867 u=mistral | Friday 13 July 2018 20:50:16 -0400 (0:00:00.640) 0:03:39.692 *********** >2018-07-13 20:50:16,562 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:16,575 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:16,871 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "echo 'export ERL_EPMD_ADDRESS=127.0.0.1' > /etc/rabbitmq/rabbitmq-env.conf\n echo 'export ERL_EPMD_PORT=4370' >> /etc/rabbitmq/rabbitmq-env.conf\n for pid in $(pgrep epmd --ns 1 --nslist pid); do kill $pid; done", "delta": "0:00:00.020734", "end": "2018-07-13 20:50:17.392138", "rc": 0, "start": "2018-07-13 20:50:17.371404", "stderr": "/bin/sh: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory\n/bin/sh: line 1: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory", "stderr_lines": ["/bin/sh: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory", "/bin/sh: line 1: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory"], 
"stdout": "", "stdout_lines": []} >2018-07-13 20:50:16,893 p=5867 u=mistral | TASK [create persistent directories] ******************************************* >2018-07-13 20:50:16,894 p=5867 u=mistral | Friday 13 July 2018 20:50:16 -0400 (0:00:00.389) 0:03:40.082 *********** >2018-07-13 20:50:16,963 p=5867 u=mistral | skipping: [compute-0] => (item=/var/lib/redis) => {"changed": false, "item": "/var/lib/redis", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:16,973 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/lib/redis) => {"changed": false, "item": "/var/lib/redis", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:16,973 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/redis) => {"changed": false, "item": "/var/log/containers/redis", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:16,974 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/run/redis) => {"changed": false, "item": "/var/run/redis", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:16,977 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/redis) => {"changed": false, "item": "/var/log/containers/redis", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:16,977 p=5867 u=mistral | skipping: [compute-0] => (item=/var/run/redis) => {"changed": false, "item": "/var/run/redis", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:17,267 p=5867 u=mistral | ok: [controller-0] => (item=/var/lib/redis) => {"changed": false, "gid": 988, "group": "redis", "item": "/var/lib/redis", "mode": "0750", "owner": "redis", "path": "/var/lib/redis", "secontext": "system_u:object_r:redis_var_lib_t:s0", "size": 6, "state": "directory", "uid": 992} >2018-07-13 20:50:17,577 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers/redis) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/redis", "mode": "0755", "owner": "root", "path": 
"/var/log/containers/redis", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:17,898 p=5867 u=mistral | ok: [controller-0] => (item=/var/run/redis) => {"changed": false, "gid": 988, "group": "redis", "item": "/var/run/redis", "mode": "0755", "owner": "redis", "path": "/var/run/redis", "secontext": "system_u:object_r:redis_var_run_t:s0", "size": 40, "state": "directory", "uid": 992} >2018-07-13 20:50:17,924 p=5867 u=mistral | TASK [redis logs readme] ******************************************************* >2018-07-13 20:50:17,925 p=5867 u=mistral | Friday 13 July 2018 20:50:17 -0400 (0:00:01.030) 0:03:41.113 *********** >2018-07-13 20:50:17,984 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:17,999 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:18,502 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "42d03af8abf93e87fdb3fc69702638fc81d943fb", "dest": "/var/log/redis/readme.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/var/log/redis/readme.txt", "secontext": "system_u:object_r:redis_log_t:s0", "size": 78, "state": "file", "uid": 0} >2018-07-13 20:50:18,526 p=5867 u=mistral | TASK [create /var/lib/sahara] ************************************************** >2018-07-13 20:50:18,527 p=5867 u=mistral | Friday 13 July 2018 20:50:18 -0400 (0:00:00.602) 0:03:41.715 *********** >2018-07-13 20:50:18,593 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:18,610 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:18,899 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": 
"/var/lib/sahara", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:18,926 p=5867 u=mistral | TASK [create persistent sahara logs directory] ********************************* >2018-07-13 20:50:18,927 p=5867 u=mistral | Friday 13 July 2018 20:50:18 -0400 (0:00:00.399) 0:03:42.115 *********** >2018-07-13 20:50:18,983 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:18,998 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:19,289 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/sahara", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:19,313 p=5867 u=mistral | TASK [sahara logs readme] ****************************************************** >2018-07-13 20:50:19,313 p=5867 u=mistral | Friday 13 July 2018 20:50:19 -0400 (0:00:00.386) 0:03:42.501 *********** >2018-07-13 20:50:19,372 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:19,389 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:19,916 p=5867 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "b0212a1177fa4a88502d17a1cbc31198040cf047", "msg": "Destination directory /var/log/sahara does not exist"} >2018-07-13 20:50:19,916 p=5867 u=mistral | ...ignoring >2018-07-13 20:50:19,940 p=5867 u=mistral | TASK [create persistent directories] ******************************************* >2018-07-13 20:50:19,940 p=5867 u=mistral | Friday 13 July 2018 20:50:19 -0400 (0:00:00.627) 0:03:43.128 *********** >2018-07-13 20:50:20,000 p=5867 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:20,002 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:20,017 p=5867 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:20,024 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:20,296 p=5867 u=mistral | ok: [controller-0] => (item=/srv/node) => {"changed": false, "gid": 0, "group": "root", "item": "/srv/node", "mode": "0755", "owner": "root", "path": "/srv/node", "secontext": "unconfined_u:object_r:var_t:s0", "size": 16, "state": "directory", "uid": 0} >2018-07-13 20:50:20,602 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/swift) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/swift", "mode": "0755", "owner": "root", "path": "/var/log/swift", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 24, "state": "directory", "uid": 0} >2018-07-13 20:50:20,627 p=5867 u=mistral | TASK [Create swift logging symlink] ******************************************** >2018-07-13 20:50:20,627 p=5867 u=mistral | Friday 13 July 2018 20:50:20 -0400 (0:00:00.686) 
0:03:43.815 *********** >2018-07-13 20:50:20,683 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:20,701 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:20,988 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "dest": "/var/log/containers/swift", "gid": 0, "group": "root", "mode": "0777", "owner": "root", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 14, "src": "/var/log/swift", "state": "link", "uid": 0} >2018-07-13 20:50:21,011 p=5867 u=mistral | TASK [create persistent directories] ******************************************* >2018-07-13 20:50:21,012 p=5867 u=mistral | Friday 13 July 2018 20:50:21 -0400 (0:00:00.384) 0:03:44.200 *********** >2018-07-13 20:50:21,070 p=5867 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:21,073 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:21,074 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:21,089 p=5867 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:21,094 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:21,099 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:21,375 p=5867 u=mistral | ok: [controller-0] 
=> (item=/srv/node) => {"changed": false, "gid": 0, "group": "root", "item": "/srv/node", "mode": "0755", "owner": "root", "path": "/srv/node", "secontext": "unconfined_u:object_r:var_t:s0", "size": 16, "state": "directory", "uid": 0} >2018-07-13 20:50:21,682 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/swift) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/swift", "mode": "0755", "owner": "root", "path": "/var/log/swift", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 24, "state": "directory", "uid": 0} >2018-07-13 20:50:21,994 p=5867 u=mistral | ok: [controller-0] => (item=/var/log/containers) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers", "mode": "0755", "owner": "root", "path": "/var/log/containers", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 244, "state": "directory", "uid": 0} >2018-07-13 20:50:22,021 p=5867 u=mistral | TASK [Set swift_use_local_disks fact] ****************************************** >2018-07-13 20:50:22,021 p=5867 u=mistral | Friday 13 July 2018 20:50:22 -0400 (0:00:01.009) 0:03:45.209 *********** >2018-07-13 20:50:22,082 p=5867 u=mistral | ok: [controller-0] => {"ansible_facts": {"swift_use_local_disks": true}, "changed": false} >2018-07-13 20:50:22,083 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:22,095 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:22,118 p=5867 u=mistral | TASK [Create Swift d1 directory if needed] ************************************* >2018-07-13 20:50:22,119 p=5867 u=mistral | Friday 13 July 2018 20:50:22 -0400 (0:00:00.097) 0:03:45.307 *********** >2018-07-13 20:50:22,179 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:22,195 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-07-13 20:50:22,477 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/srv/node/d1", "secontext": "unconfined_u:object_r:var_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:22,500 p=5867 u=mistral | TASK [swift logs readme] ******************************************************* >2018-07-13 20:50:22,500 p=5867 u=mistral | Friday 13 July 2018 20:50:22 -0400 (0:00:00.381) 0:03:45.688 *********** >2018-07-13 20:50:22,553 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:22,566 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:23,087 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "42510a6de124722d6efbc2b1bb038bfe97e5b6d3", "dest": "/var/log/swift/readme.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/var/log/swift/readme.txt", "secontext": "system_u:object_r:var_log_t:s0", "size": 116, "state": "file", "uid": 0} >2018-07-13 20:50:23,113 p=5867 u=mistral | TASK [Format SwiftRawDisks] **************************************************** >2018-07-13 20:50:23,113 p=5867 u=mistral | Friday 13 July 2018 20:50:23 -0400 (0:00:00.613) 0:03:46.301 *********** >2018-07-13 20:50:23,201 p=5867 u=mistral | TASK [Mount devices defined in SwiftRawDisks] ********************************** >2018-07-13 20:50:23,202 p=5867 u=mistral | Friday 13 July 2018 20:50:23 -0400 (0:00:00.088) 0:03:46.390 *********** >2018-07-13 20:50:23,287 p=5867 u=mistral | TASK [create persistent logs directory] **************************************** >2018-07-13 20:50:23,287 p=5867 u=mistral | Friday 13 July 2018 20:50:23 -0400 (0:00:00.085) 0:03:46.475 *********** >2018-07-13 20:50:23,315 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-07-13 20:50:23,339 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:23,682 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/ceilometer", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:23,705 p=5867 u=mistral | TASK [ceilometer logs readme] ************************************************** >2018-07-13 20:50:23,705 p=5867 u=mistral | Friday 13 July 2018 20:50:23 -0400 (0:00:00.418) 0:03:46.893 *********** >2018-07-13 20:50:23,734 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:23,762 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:24,376 p=5867 u=mistral | fatal: [compute-0]: FAILED! 
=> {"changed": false, "checksum": "ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3", "msg": "Destination directory /var/log/ceilometer does not exist"} >2018-07-13 20:50:24,377 p=5867 u=mistral | ...ignoring >2018-07-13 20:50:24,399 p=5867 u=mistral | TASK [create persistent logs directory] **************************************** >2018-07-13 20:50:24,399 p=5867 u=mistral | Friday 13 July 2018 20:50:24 -0400 (0:00:00.693) 0:03:47.587 *********** >2018-07-13 20:50:24,429 p=5867 u=mistral | skipping: [controller-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:24,454 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:24,801 p=5867 u=mistral | ok: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:24,825 p=5867 u=mistral | TASK [neutron logs readme] ***************************************************** >2018-07-13 20:50:24,826 p=5867 u=mistral | Friday 13 July 2018 20:50:24 -0400 (0:00:00.426) 0:03:48.014 *********** >2018-07-13 20:50:24,855 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:24,882 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:25,551 p=5867 u=mistral | fatal: [compute-0]: FAILED! 
=> {"changed": false, "checksum": "f5a95f434a4aad25a9a81a045dec39159a6e8864", "msg": "Destination directory /var/log/neutron does not exist"} >2018-07-13 20:50:25,551 p=5867 u=mistral | ...ignoring >2018-07-13 20:50:25,574 p=5867 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-07-13 20:50:25,574 p=5867 u=mistral | Friday 13 July 2018 20:50:25 -0400 (0:00:00.748) 0:03:48.762 *********** >2018-07-13 20:50:25,602 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:25,626 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:26,031 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"atime": 1531529323.702933, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "424de87cd6ae66547b285288742255731a46ab83", "ctime": 1531493185.9624753, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 921945, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1513292517.0, "nlink": 1, "path": "/lib/systemd/system/iscsid.socket", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 175, "uid": 0, "version": "1355952737", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >2018-07-13 20:50:26,054 p=5867 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-07-13 20:50:26,054 p=5867 u=mistral | Friday 13 July 2018 20:50:26 -0400 (0:00:00.480) 0:03:49.242 *********** >2018-07-13 20:50:26,082 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >2018-07-13 20:50:26,108 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:26,514 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "enabled": false, "name": "iscsid.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "sysinit.target -.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Backlog": "128", "Before": "iscsid.service sockets.target shutdown.target", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Open-iSCSI iscsid Socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "Documentation": "man:iscsid(8) man:iscsiadm(8)", "FragmentPath": "/usr/lib/systemd/system/iscsid.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "iscsid.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", 
"LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "22966", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22966", "LimitSTACK": "18446744073709551615", "ListenStream": "@ISCSIADM_ABSTRACT_NAMESPACE", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "iscsid.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "Requires": "sysinit.target", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "iscsid.service", "UMask": "0022", "UnitFilePreset": 
"disabled", "UnitFileState": "disabled", "Wants": "-.slice"}} >2018-07-13 20:50:26,536 p=5867 u=mistral | TASK [create persistent logs directory] **************************************** >2018-07-13 20:50:26,537 p=5867 u=mistral | Friday 13 July 2018 20:50:26 -0400 (0:00:00.482) 0:03:49.724 *********** >2018-07-13 20:50:26,566 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:26,591 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:26,933 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:26,958 p=5867 u=mistral | TASK [nova logs readme] ******************************************************** >2018-07-13 20:50:26,958 p=5867 u=mistral | Friday 13 July 2018 20:50:26 -0400 (0:00:00.421) 0:03:50.146 *********** >2018-07-13 20:50:26,987 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:27,012 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:27,627 p=5867 u=mistral | fatal: [compute-0]: FAILED! 
=> {"changed": false, "checksum": "c2216cc4edf5d3ce90f10748c3243db4e1842a85", "msg": "Destination directory /var/log/nova does not exist"} >2018-07-13 20:50:27,627 p=5867 u=mistral | ...ignoring >2018-07-13 20:50:27,650 p=5867 u=mistral | TASK [Mount Nova NFS Share] **************************************************** >2018-07-13 20:50:27,650 p=5867 u=mistral | Friday 13 July 2018 20:50:27 -0400 (0:00:00.692) 0:03:50.838 *********** >2018-07-13 20:50:27,678 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:27,702 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:27,717 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:27,739 p=5867 u=mistral | TASK [create persistent directories] ******************************************* >2018-07-13 20:50:27,739 p=5867 u=mistral | Friday 13 July 2018 20:50:27 -0400 (0:00:00.088) 0:03:50.927 *********** >2018-07-13 20:50:27,769 p=5867 u=mistral | skipping: [controller-0] => (item=/var/lib/nova) => {"changed": false, "item": "/var/lib/nova", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:27,770 p=5867 u=mistral | skipping: [controller-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:27,796 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/lib/nova) => {"changed": false, "item": "/var/lib/nova", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:27,798 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:28,152 p=5867 u=mistral | ok: [compute-0] => (item=/var/lib/nova) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/nova", "mode": 
"0755", "owner": "root", "path": "/var/lib/nova", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:28,474 p=5867 u=mistral | ok: [compute-0] => (item=/var/lib/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/libvirt", "mode": "0755", "owner": "root", "path": "/var/lib/libvirt", "secontext": "system_u:object_r:virt_var_lib_t:s0", "size": 104, "state": "directory", "uid": 0} >2018-07-13 20:50:28,498 p=5867 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-07-13 20:50:28,498 p=5867 u=mistral | Friday 13 July 2018 20:50:28 -0400 (0:00:00.759) 0:03:51.686 *********** >2018-07-13 20:50:28,531 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:28,556 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:28,897 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:28,920 p=5867 u=mistral | TASK [is Instance HA enabled] ************************************************** >2018-07-13 20:50:28,920 p=5867 u=mistral | Friday 13 July 2018 20:50:28 -0400 (0:00:00.421) 0:03:52.108 *********** >2018-07-13 20:50:28,950 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:28,974 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,018 p=5867 u=mistral | ok: [compute-0] => {"ansible_facts": {"instance_ha_enabled": false}, "changed": false} >2018-07-13 20:50:29,040 p=5867 u=mistral | TASK [prepare Instance HA script directory] ************************************ 
>2018-07-13 20:50:29,041 p=5867 u=mistral | Friday 13 July 2018 20:50:29 -0400 (0:00:00.120) 0:03:52.228 *********** >2018-07-13 20:50:29,068 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,091 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,113 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,133 p=5867 u=mistral | TASK [install Instance HA script that runs nova-compute] *********************** >2018-07-13 20:50:29,133 p=5867 u=mistral | Friday 13 July 2018 20:50:29 -0400 (0:00:00.092) 0:03:52.321 *********** >2018-07-13 20:50:29,162 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,188 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,203 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,224 p=5867 u=mistral | TASK [Get list of instance HA compute nodes] *********************************** >2018-07-13 20:50:29,224 p=5867 u=mistral | Friday 13 July 2018 20:50:29 -0400 (0:00:00.090) 0:03:52.412 *********** >2018-07-13 20:50:29,252 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,277 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,292 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,313 p=5867 u=mistral | TASK [If instance HA is enabled on the node activate the evacuation completed check] *** >2018-07-13 20:50:29,313 p=5867 u=mistral | Friday 
13 July 2018 20:50:29 -0400 (0:00:00.089) 0:03:52.501 *********** >2018-07-13 20:50:29,341 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,366 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,385 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,410 p=5867 u=mistral | TASK [create libvirt persistent data directories] ****************************** >2018-07-13 20:50:29,410 p=5867 u=mistral | Friday 13 July 2018 20:50:29 -0400 (0:00:00.096) 0:03:52.598 *********** >2018-07-13 20:50:29,439 p=5867 u=mistral | skipping: [controller-0] => (item=/etc/libvirt) => {"changed": false, "item": "/etc/libvirt", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,440 p=5867 u=mistral | skipping: [controller-0] => (item=/etc/libvirt/secrets) => {"changed": false, "item": "/etc/libvirt/secrets", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,467 p=5867 u=mistral | skipping: [controller-0] => (item=/etc/libvirt/qemu) => {"changed": false, "item": "/etc/libvirt/qemu", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,468 p=5867 u=mistral | skipping: [controller-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,468 p=5867 u=mistral | skipping: [controller-0] => (item=/var/log/containers/libvirt) => {"changed": false, "item": "/var/log/containers/libvirt", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,470 p=5867 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt) => {"changed": false, "item": "/etc/libvirt", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,470 p=5867 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt/secrets) => {"changed": false, 
"item": "/etc/libvirt/secrets", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,471 p=5867 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt/qemu) => {"changed": false, "item": "/etc/libvirt/qemu", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,473 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,473 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/libvirt) => {"changed": false, "item": "/var/log/containers/libvirt", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:29,810 p=5867 u=mistral | ok: [compute-0] => (item=/etc/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt", "mode": "0700", "owner": "root", "path": "/etc/libvirt", "secontext": "system_u:object_r:virt_etc_t:s0", "size": 215, "state": "directory", "uid": 0} >2018-07-13 20:50:30,120 p=5867 u=mistral | ok: [compute-0] => (item=/etc/libvirt/secrets) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt/secrets", "mode": "0700", "owner": "root", "path": "/etc/libvirt/secrets", "secontext": "system_u:object_r:virt_etc_rw_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:30,441 p=5867 u=mistral | ok: [compute-0] => (item=/etc/libvirt/qemu) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt/qemu", "mode": "0700", "owner": "root", "path": "/etc/libvirt/qemu", "secontext": "system_u:object_r:virt_etc_rw_t:s0", "size": 22, "state": "directory", "uid": 0} >2018-07-13 20:50:30,760 p=5867 u=mistral | ok: [compute-0] => (item=/var/lib/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/libvirt", "mode": "0755", "owner": "root", "path": "/var/lib/libvirt", "secontext": "system_u:object_r:virt_var_lib_t:s0", "size": 104, "state": "directory", "uid": 0} >2018-07-13 20:50:31,078 p=5867 u=mistral | ok: 
[compute-0] => (item=/var/log/containers/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/libvirt", "mode": "0755", "owner": "root", "path": "/var/log/containers/libvirt", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:31,103 p=5867 u=mistral | TASK [ensure qemu group is present on the host] ******************************** >2018-07-13 20:50:31,104 p=5867 u=mistral | Friday 13 July 2018 20:50:31 -0400 (0:00:01.693) 0:03:54.292 *********** >2018-07-13 20:50:31,133 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:31,159 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:31,602 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "gid": 107, "name": "qemu", "state": "present", "system": false} >2018-07-13 20:50:31,624 p=5867 u=mistral | TASK [ensure qemu user is present on the host] ********************************* >2018-07-13 20:50:31,624 p=5867 u=mistral | Friday 13 July 2018 20:50:31 -0400 (0:00:00.520) 0:03:54.812 *********** >2018-07-13 20:50:31,652 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:31,678 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:32,255 p=5867 u=mistral | ok: [compute-0] => {"append": false, "changed": false, "comment": "qemu user", "group": 107, "home": "/", "move_home": false, "name": "qemu", "shell": "/sbin/nologin", "state": "present", "uid": 107} >2018-07-13 20:50:32,278 p=5867 u=mistral | TASK [create directory for vhost-user sockets with qemu ownership] ************* >2018-07-13 20:50:32,278 p=5867 u=mistral | Friday 13 July 2018 20:50:32 -0400 (0:00:00.654) 0:03:55.466 *********** >2018-07-13 20:50:32,310 p=5867 
u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:32,336 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:32,691 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "gid": 107, "group": "qemu", "mode": "0755", "owner": "qemu", "path": "/var/lib/vhost_sockets", "secontext": "system_u:object_r:virt_cache_t:s0", "size": 6, "state": "directory", "uid": 107} >2018-07-13 20:50:32,715 p=5867 u=mistral | TASK [check if libvirt is installed] ******************************************* >2018-07-13 20:50:32,715 p=5867 u=mistral | Friday 13 July 2018 20:50:32 -0400 (0:00:00.436) 0:03:55.903 *********** >2018-07-13 20:50:32,747 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:32,776 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:33,153 p=5867 u=mistral | [WARNING]: Consider using the yum, dnf or zypper module rather than running >rpm. If you need to use command because yum, dnf or zypper is insufficient you >can add warn=False to this command task or set command_warnings=False in >ansible.cfg to get rid of this message. 
> >2018-07-13 20:50:33,154 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "cmd": ["/usr/bin/rpm", "-q", "libvirt-daemon"], "delta": "0:00:00.035726", "end": "2018-07-13 20:50:33.474804", "failed_when_result": false, "rc": 0, "start": "2018-07-13 20:50:33.439078", "stderr": "", "stderr_lines": [], "stdout": "libvirt-daemon-3.9.0-14.el7_5.6.x86_64", "stdout_lines": ["libvirt-daemon-3.9.0-14.el7_5.6.x86_64"]} >2018-07-13 20:50:33,178 p=5867 u=mistral | TASK [make sure libvirt services are disabled] ********************************* >2018-07-13 20:50:33,178 p=5867 u=mistral | Friday 13 July 2018 20:50:33 -0400 (0:00:00.462) 0:03:56.366 *********** >2018-07-13 20:50:33,211 p=5867 u=mistral | skipping: [controller-0] => (item=libvirtd.service) => {"changed": false, "item": "libvirtd.service", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:33,213 p=5867 u=mistral | skipping: [controller-0] => (item=virtlogd.socket) => {"changed": false, "item": "virtlogd.socket", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:33,245 p=5867 u=mistral | skipping: [ceph-0] => (item=libvirtd.service) => {"changed": false, "item": "libvirtd.service", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:33,247 p=5867 u=mistral | skipping: [ceph-0] => (item=virtlogd.socket) => {"changed": false, "item": "virtlogd.socket", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:33,614 p=5867 u=mistral | ok: [compute-0] => (item=libvirtd.service) => {"changed": false, "enabled": false, "item": "libvirtd.service", "name": "libvirtd.service", "state": "stopped", "status": {"ActiveEnterTimestamp": "Fri 2018-07-13 20:43:45 EDT", "ActiveEnterTimestampMonotonic": "5785333", "ActiveExitTimestamp": "Fri 2018-07-13 20:48:47 EDT", "ActiveExitTimestampMonotonic": "307741946", "ActiveState": "inactive", "After": "local-fs.target virtlogd.socket dbus.service iscsid.service remote-fs.target basic.target virtlockd.service apparmor.service 
network.target virtlockd.socket systemd-journald.socket system.slice virtlogd.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-07-13 20:43:45 EDT", "AssertTimestampMonotonic": "5531104", "Before": "shutdown.target libvirt-guests.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-07-13 20:43:45 EDT", "ConditionTimestampMonotonic": "5531104", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Virtualization daemon", "DevicePolicy": "auto", "Documentation": "man:libvirtd(8) https://libvirt.org", "EnvironmentFile": "/etc/sysconfig/libvirtd (ignore_errors=yes)", "ExecMainCode": "1", "ExecMainExitTimestamp": "Fri 2018-07-13 20:48:47 EDT", "ExecMainExitTimestampMonotonic": "307750143", "ExecMainPID": "1153", "ExecMainStartTimestamp": "Fri 2018-07-13 20:43:45 EDT", "ExecMainStartTimestampMonotonic": "5534176", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/libvirtd ; argv[]=/usr/sbin/libvirtd $LIBVIRTD_ARGS ; ignore_errors=no ; start_time=[Fri 2018-07-13 20:43:45 EDT] ; stop_time=[Fri 2018-07-13 20:48:47 EDT] ; pid=1153 ; code=exited ; status=0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/libvirtd.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "libvirtd.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", 
"IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Fri 2018-07-13 20:48:47 EDT", "InactiveEnterTimestampMonotonic": "307750441", "InactiveExitTimestamp": "Fri 2018-07-13 20:43:45 EDT", "InactiveExitTimestampMonotonic": "5534226", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "8192", "LimitNPROC": "22966", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22966", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "libvirtd.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "virtlockd.socket basic.target virtlogd.socket", "Restart": "on-failure", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", 
"StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "32768", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "disabled", "WantedBy": "libvirt-guests.service", "Wants": "system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-07-13 20:50:33,943 p=5867 u=mistral | ok: [compute-0] => (item=virtlogd.socket) => {"changed": false, "enabled": false, "item": "virtlogd.socket", "name": "virtlogd.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestamp": "Fri 2018-07-13 20:43:43 EDT", "ActiveEnterTimestampMonotonic": "3904881", "ActiveExitTimestamp": "Fri 2018-07-13 20:48:47 EDT", "ActiveExitTimestampMonotonic": "307945193", "ActiveState": "inactive", "After": "sysinit.target -.slice -.mount", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-07-13 20:43:43 EDT", "AssertTimestampMonotonic": "3904298", "Backlog": "128", "Before": "virtlogd.service sockets.target shutdown.target libvirtd.service", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-07-13 20:43:43 EDT", "ConditionTimestampMonotonic": "3904298", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": 
"0", "Delegate": "no", "Description": "Virtual machine log manager socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "FragmentPath": "/usr/lib/systemd/system/virtlogd.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "virtlogd.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Fri 2018-07-13 20:48:47 EDT", "InactiveEnterTimestampMonotonic": "307945193", "InactiveExitTimestamp": "Fri 2018-07-13 20:43:43 EDT", "InactiveExitTimestampMonotonic": "3904881", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "22966", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22966", "LimitSTACK": "18446744073709551615", "ListenStream": "/var/run/libvirt/virtlogd-sock", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "virtlogd.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": 
"no", "RequiredBy": "virtlogd.service libvirtd.service", "Requires": "sysinit.target -.mount", "RequiresMountsFor": "/var/run/libvirt/virtlogd-sock", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "virtlogd.service", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "disabled", "Wants": "-.slice"}} >2018-07-13 20:50:33,970 p=5867 u=mistral | TASK [NTP settings] ************************************************************ >2018-07-13 20:50:33,970 p=5867 u=mistral | Friday 13 July 2018 20:50:33 -0400 (0:00:00.792) 0:03:57.158 *********** >2018-07-13 20:50:33,999 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:34,027 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:34,061 p=5867 u=mistral | ok: [compute-0] => {"ansible_facts": {"ntp_install_packages": false, "ntp_servers": ["10.35.255.6"]}, "changed": false} >2018-07-13 20:50:34,084 p=5867 u=mistral | TASK [Install ntpdate] ********************************************************* >2018-07-13 20:50:34,084 p=5867 u=mistral | Friday 13 July 2018 20:50:34 -0400 (0:00:00.113) 0:03:57.272 *********** >2018-07-13 20:50:34,114 p=5867 u=mistral 
| skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:34,141 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:34,155 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:34,179 p=5867 u=mistral | TASK [Ensure system is NTP time synced] **************************************** >2018-07-13 20:50:34,179 p=5867 u=mistral | Friday 13 July 2018 20:50:34 -0400 (0:00:00.094) 0:03:57.367 *********** >2018-07-13 20:50:34,209 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:34,235 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:40,837 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "cmd": ["ntpdate", "-u", "10.35.255.6"], "delta": "0:00:06.257989", "end": "2018-07-13 20:50:41.161973", "rc": 0, "start": "2018-07-13 20:50:34.903984", "stderr": "", "stderr_lines": [], "stdout": "13 Jul 20:50:41 ntpdate[18386]: adjust time server 10.35.255.6 offset 0.195714 sec", "stdout_lines": ["13 Jul 20:50:41 ntpdate[18386]: adjust time server 10.35.255.6 offset 0.195714 sec"]} >2018-07-13 20:50:40,900 p=5867 u=mistral | TASK [create persistent directories] ******************************************* >2018-07-13 20:50:40,900 p=5867 u=mistral | Friday 13 July 2018 20:50:40 -0400 (0:00:06.721) 0:04:04.088 *********** >2018-07-13 20:50:40,931 p=5867 u=mistral | skipping: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:40,933 p=5867 u=mistral | skipping: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} 
>2018-07-13 20:50:40,960 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:40,961 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:40,978 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:40,980 p=5867 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,004 p=5867 u=mistral | TASK [cinder logs readme] ****************************************************** >2018-07-13 20:50:41,004 p=5867 u=mistral | Friday 13 July 2018 20:50:41 -0400 (0:00:00.104) 0:04:04.192 *********** >2018-07-13 20:50:41,032 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,059 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,071 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,093 p=5867 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-07-13 20:50:41,093 p=5867 u=mistral | Friday 13 July 2018 20:50:41 -0400 (0:00:00.088) 0:04:04.281 *********** >2018-07-13 20:50:41,120 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,145 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,157 p=5867 u=mistral | 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,181 p=5867 u=mistral | TASK [cinder_enable_iscsi_backend fact] **************************************** >2018-07-13 20:50:41,181 p=5867 u=mistral | Friday 13 July 2018 20:50:41 -0400 (0:00:00.088) 0:04:04.369 *********** >2018-07-13 20:50:41,209 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,235 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,248 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,271 p=5867 u=mistral | TASK [cinder create LVM volume group dd] *************************************** >2018-07-13 20:50:41,271 p=5867 u=mistral | Friday 13 July 2018 20:50:41 -0400 (0:00:00.089) 0:04:04.459 *********** >2018-07-13 20:50:41,302 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,329 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,341 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,363 p=5867 u=mistral | TASK [cinder create LVM volume group] ****************************************** >2018-07-13 20:50:41,363 p=5867 u=mistral | Friday 13 July 2018 20:50:41 -0400 (0:00:00.092) 0:04:04.551 *********** >2018-07-13 20:50:41,391 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,416 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,428 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-07-13 20:50:41,450 p=5867 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-07-13 20:50:41,450 p=5867 u=mistral | Friday 13 July 2018 20:50:41 -0400 (0:00:00.086) 0:04:04.638 *********** >2018-07-13 20:50:41,480 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,507 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,518 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,540 p=5867 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-07-13 20:50:41,540 p=5867 u=mistral | Friday 13 July 2018 20:50:41 -0400 (0:00:00.090) 0:04:04.728 *********** >2018-07-13 20:50:41,568 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,593 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,605 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,626 p=5867 u=mistral | TASK [NTP settings] ************************************************************ >2018-07-13 20:50:41,626 p=5867 u=mistral | Friday 13 July 2018 20:50:41 -0400 (0:00:00.085) 0:04:04.814 *********** >2018-07-13 20:50:41,653 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,679 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,691 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,713 
p=5867 u=mistral | TASK [Install ntpdate] ********************************************************* >2018-07-13 20:50:41,713 p=5867 u=mistral | Friday 13 July 2018 20:50:41 -0400 (0:00:00.086) 0:04:04.901 *********** >2018-07-13 20:50:41,740 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,768 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,780 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,802 p=5867 u=mistral | TASK [Ensure system is NTP time synced] **************************************** >2018-07-13 20:50:41,803 p=5867 u=mistral | Friday 13 July 2018 20:50:41 -0400 (0:00:00.089) 0:04:04.990 *********** >2018-07-13 20:50:41,831 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,857 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,869 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,891 p=5867 u=mistral | TASK [NTP settings] ************************************************************ >2018-07-13 20:50:41,891 p=5867 u=mistral | Friday 13 July 2018 20:50:41 -0400 (0:00:00.088) 0:04:05.079 *********** >2018-07-13 20:50:41,920 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,946 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,958 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:41,981 p=5867 u=mistral | TASK [Install ntpdate] 
********************************************************* >2018-07-13 20:50:41,981 p=5867 u=mistral | Friday 13 July 2018 20:50:41 -0400 (0:00:00.089) 0:04:05.169 *********** >2018-07-13 20:50:42,010 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,037 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,054 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,079 p=5867 u=mistral | TASK [Ensure system is NTP time synced] **************************************** >2018-07-13 20:50:42,079 p=5867 u=mistral | Friday 13 July 2018 20:50:42 -0400 (0:00:00.098) 0:04:05.267 *********** >2018-07-13 20:50:42,110 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,135 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,148 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,170 p=5867 u=mistral | TASK [create persistent directories] ******************************************* >2018-07-13 20:50:42,170 p=5867 u=mistral | Friday 13 July 2018 20:50:42 -0400 (0:00:00.091) 0:04:05.358 *********** >2018-07-13 20:50:42,201 p=5867 u=mistral | skipping: [controller-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,202 p=5867 u=mistral | skipping: [controller-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,203 p=5867 u=mistral | skipping: [controller-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", 
"skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,232 p=5867 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,233 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,233 p=5867 u=mistral | skipping: [ceph-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,247 p=5867 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,252 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,257 p=5867 u=mistral | skipping: [compute-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,280 p=5867 u=mistral | TASK [Set swift_use_local_disks fact] ****************************************** >2018-07-13 20:50:42,280 p=5867 u=mistral | Friday 13 July 2018 20:50:42 -0400 (0:00:00.109) 0:04:05.468 *********** >2018-07-13 20:50:42,310 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,337 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,350 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,372 p=5867 u=mistral | TASK [Create Swift d1 directory if needed] ************************************* >2018-07-13 20:50:42,372 p=5867 u=mistral | Friday 13 July 2018 20:50:42 
-0400 (0:00:00.091) 0:04:05.560 *********** >2018-07-13 20:50:42,405 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,433 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,447 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,469 p=5867 u=mistral | TASK [Create swift logging symlink] ******************************************** >2018-07-13 20:50:42,469 p=5867 u=mistral | Friday 13 July 2018 20:50:42 -0400 (0:00:00.097) 0:04:05.657 *********** >2018-07-13 20:50:42,498 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,523 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,535 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,558 p=5867 u=mistral | TASK [swift logs readme] ******************************************************* >2018-07-13 20:50:42,558 p=5867 u=mistral | Friday 13 July 2018 20:50:42 -0400 (0:00:00.088) 0:04:05.746 *********** >2018-07-13 20:50:42,586 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,611 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,624 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,648 p=5867 u=mistral | TASK [Format SwiftRawDisks] **************************************************** >2018-07-13 20:50:42,648 p=5867 u=mistral | Friday 13 July 2018 20:50:42 -0400 (0:00:00.090) 0:04:05.836 *********** >2018-07-13 
20:50:42,738 p=5867 u=mistral | TASK [Mount devices defined in SwiftRawDisks] ********************************** >2018-07-13 20:50:42,738 p=5867 u=mistral | Friday 13 July 2018 20:50:42 -0400 (0:00:00.089) 0:04:05.926 *********** >2018-07-13 20:50:42,820 p=5867 u=mistral | TASK [NTP settings] ************************************************************ >2018-07-13 20:50:42,820 p=5867 u=mistral | Friday 13 July 2018 20:50:42 -0400 (0:00:00.082) 0:04:06.008 *********** >2018-07-13 20:50:42,851 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,889 p=5867 u=mistral | ok: [ceph-0] => {"ansible_facts": {"ntp_install_packages": false, "ntp_servers": ["10.35.255.6"]}, "changed": false} >2018-07-13 20:50:42,892 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,914 p=5867 u=mistral | TASK [Install ntpdate] ********************************************************* >2018-07-13 20:50:42,914 p=5867 u=mistral | Friday 13 July 2018 20:50:42 -0400 (0:00:00.094) 0:04:06.102 *********** >2018-07-13 20:50:42,943 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,971 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:42,988 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:43,013 p=5867 u=mistral | TASK [Ensure system is NTP time synced] **************************************** >2018-07-13 20:50:43,013 p=5867 u=mistral | Friday 13 July 2018 20:50:43 -0400 (0:00:00.098) 0:04:06.201 *********** >2018-07-13 20:50:43,043 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:43,083 p=5867 u=mistral | skipping: [compute-0] 
=> {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:50:49,626 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": ["ntpdate", "-u", "10.35.255.6"], "delta": "0:00:06.257416", "end": "2018-07-13 20:50:50.151126", "rc": 0, "start": "2018-07-13 20:50:43.893710", "stderr": "", "stderr_lines": [], "stdout": "13 Jul 20:50:50 ntpdate[15643]: adjust time server 10.35.255.6 offset -0.001402 sec", "stdout_lines": ["13 Jul 20:50:50 ntpdate[15643]: adjust time server 10.35.255.6 offset -0.001402 sec"]}
>2018-07-13 20:50:49,633 p=5867 u=mistral | PLAY [External deployment step 1] **********************************************
>2018-07-13 20:50:49,654 p=5867 u=mistral | TASK [set blacklisted_hostnames] ***********************************************
>2018-07-13 20:50:49,654 p=5867 u=mistral | Friday 13 July 2018 20:50:49 -0400 (0:00:06.640) 0:04:12.842 ***********
>2018-07-13 20:50:49,690 p=5867 u=mistral | ok: [undercloud] => {"ansible_facts": {"blacklisted_hostnames": []}, "changed": false}
>2018-07-13 20:50:49,708 p=5867 u=mistral | TASK [create ceph-ansible temp dirs] *******************************************
>2018-07-13 20:50:49,708 p=5867 u=mistral | Friday 13 July 2018 20:50:49 -0400 (0:00:00.054) 0:04:12.896 ***********
>2018-07-13 20:50:49,932 p=5867 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/group_vars) => {"changed": true, "gid": 985, "group": "mistral", "item": "/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/group_vars", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/group_vars", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 988}
>2018-07-13 20:50:50,103 p=5867 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/host_vars) => {"changed": true, "gid": 985, "group": "mistral", "item": "/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/host_vars", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/host_vars", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 988}
>2018-07-13 20:50:50,264 p=5867 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/fetch_dir) => {"changed": true, "gid": 985, "group": "mistral", "item": "/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/fetch_dir", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/fetch_dir", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 988}
>2018-07-13 20:50:50,284 p=5867 u=mistral | TASK [generate inventory] ******************************************************
>2018-07-13 20:50:50,284 p=5867 u=mistral | Friday 13 July 2018 20:50:50 -0400 (0:00:00.575) 0:04:13.472 ***********
>2018-07-13 20:50:50,893 p=5867 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "636d7aedee4ac4c79f50ce339f01793925e3fcc0", "dest": "/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/inventory.yml", "gid": 985, "group": "mistral", "md5sum": "c98f2965e6da8ff67a52281d5a9ec339", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 524, "src": "/tmp/ansible-/ansible-tmp-1531529450.58-247189033540047/source", "state": "file", "uid": 988}
>2018-07-13 20:50:50,911 p=5867 u=mistral | TASK [set ceph-ansible group vars all] *****************************************
>2018-07-13 20:50:50,911 p=5867 u=mistral | Friday 13 July 2018 20:50:50 -0400 (0:00:00.159) 0:04:14.099 ***********
>2018-07-13 20:50:51,005 p=5867 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_all": {"ceph_conf_overrides": {"global": {"osd_pool_default_pg_num": 32,
"osd_pool_default_pgp_num": 32, "osd_pool_default_size": 1, "rgw_keystone_accepted_roles": "Member, admin", "rgw_keystone_admin_domain": "default", "rgw_keystone_admin_password": "hywqtiijOKrnVaB7y2xIawB3Q", "rgw_keystone_admin_project": "service", "rgw_keystone_admin_user": "swift", "rgw_keystone_api_version": 3, "rgw_keystone_implicit_tenants": "true", "rgw_keystone_revocation_interval": "0", "rgw_keystone_url": "http://172.17.1.14:5000", "rgw_s3_auth_use_keystone": "true"}}, "ceph_docker_image": "rhceph", "ceph_docker_image_tag": "3-9", "ceph_docker_registry": "192.168.24.1:8787", "ceph_origin": "distro", "ceph_stable": true, "cluster": "ceph", "cluster_network": "172.17.4.0/24", "containerized_deployment": true, "docker": true, "fsid": "60442a52-86fa-11e8-982e-525400401c2c", "generate_fsid": false, "ip_version": "ipv4", "keys": [{"caps": {"mgr": "allow *", "mon": "profile rbd", "osd": "profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics"}, "key": "AQCOP0lbAAAAABAAlwoiCh/bQzLpXLvjP1FIBQ==", "mode": "0600", "name": "client.openstack"}, {"caps": {"mds": "allow *", "mgr": "allow *", "mon": "allow r, allow command \\\"auth del\\\", allow command \\\"auth caps\\\", allow command \\\"auth get\\\", allow command \\\"auth get-or-create\\\"", "osd": "allow rw"}, "key": "AQCOP0lbAAAAABAAFcr2tgt4E+HFMdZiYeW8qA==", "mode": "0600", "name": "client.manila"}, {"caps": {"mgr": "allow *", "mon": "allow rw", "osd": "allow rwx"}, "key": "AQCOP0lbAAAAABAAvll71Vyp19bQEEcM9Iu+Dg==", "mode": "0600", "name": "client.radosgw"}], "monitor_address_block": "172.17.3.0/24", "ntp_service_enabled": false, "openstack_config": true, "openstack_keys": [{"caps": {"mgr": "allow *", "mon": "profile rbd", "osd": "profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics"}, "key": "AQCOP0lbAAAAABAAlwoiCh/bQzLpXLvjP1FIBQ==", "mode": "0600", "name": 
"client.openstack"}, {"caps": {"mds": "allow *", "mgr": "allow *", "mon": "allow r, allow command \\\"auth del\\\", allow command \\\"auth caps\\\", allow command \\\"auth get\\\", allow command \\\"auth get-or-create\\\"", "osd": "allow rw"}, "key": "AQCOP0lbAAAAABAAFcr2tgt4E+HFMdZiYeW8qA==", "mode": "0600", "name": "client.manila"}, {"caps": {"mgr": "allow *", "mon": "allow rw", "osd": "allow rwx"}, "key": "AQCOP0lbAAAAABAAvll71Vyp19bQEEcM9Iu+Dg==", "mode": "0600", "name": "client.radosgw"}], "openstack_pools": [{"application": "rbd", "name": "images", "pg_num": 32, "rule_name": "replicated_rule"}, {"application": "openstack_gnocchi", "name": "metrics", "pg_num": 32, "rule_name": "replicated_rule"}, {"application": "rbd", "name": "backups", "pg_num": 32, "rule_name": "replicated_rule"}, {"application": "rbd", "name": "vms", "pg_num": 32, "rule_name": "replicated_rule"}, {"application": "rbd", "name": "volumes", "pg_num": 32, "rule_name": "replicated_rule"}], "pools": [], "public_network": "172.17.3.0/24", "user_config": true}}, "changed": false} >2018-07-13 20:50:51,071 p=5867 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-07-13 20:50:51,071 p=5867 u=mistral | Friday 13 July 2018 20:50:51 -0400 (0:00:00.159) 0:04:14.259 *********** >2018-07-13 20:50:51,425 p=5867 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "49e0c05645dd766b5bd049d565ceae768116da9e", "dest": "/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/group_vars/all.yml", "gid": 985, "group": "mistral", "md5sum": "03edfbf0e148836c18401e1f675f8c06", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 3093, "src": "/tmp/ansible-/ansible-tmp-1531529451.12-168153653075884/source", "state": "file", "uid": 988} >2018-07-13 20:50:51,442 p=5867 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* >2018-07-13 20:50:51,442 p=5867 u=mistral | Friday 
13 July 2018 20:50:51 -0400 (0:00:00.370) 0:04:14.630 ***********
>2018-07-13 20:50:51,475 p=5867 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_extra_vars": {"fetch_directory": "/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/fetch_dir", "ireallymeanit": "yes"}}, "changed": false}
>2018-07-13 20:50:51,492 p=5867 u=mistral | TASK [generate ceph-ansible extra vars] ****************************************
>2018-07-13 20:50:51,492 p=5867 u=mistral | Friday 13 July 2018 20:50:51 -0400 (0:00:00.049) 0:04:14.680 ***********
>2018-07-13 20:50:51,826 p=5867 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "9dcad2bc17e977e20328c33b59a68eba2e2b0d33", "dest": "/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/extra_vars.yml", "gid": 985, "group": "mistral", "md5sum": "4dff2919f53832bf8e2eaa8d5a87a8a4", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 115, "src": "/tmp/ansible-/ansible-tmp-1531529451.52-135004606416494/source", "state": "file", "uid": 988}
>2018-07-13 20:50:51,843 p=5867 u=mistral | TASK [generate nodes-uuid data file] *******************************************
>2018-07-13 20:50:51,843 p=5867 u=mistral | Friday 13 July 2018 20:50:51 -0400 (0:00:00.351) 0:04:15.031 ***********
>2018-07-13 20:50:52,178 p=5867 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/nodes_uuid_data.json", "gid": 985, "group": "mistral", "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/tmp/ansible-/ansible-tmp-1531529451.88-26305221609906/source", "state": "file", "uid": 988}
>2018-07-13 20:50:52,195 p=5867 u=mistral | TASK [generate nodes-uuid playbook] ********************************************
>2018-07-13 20:50:52,195 p=5867 u=mistral | Friday 13 July 2018 20:50:52 -0400 (0:00:00.351) 0:04:15.383 ***********
>2018-07-13 20:50:52,530 p=5867 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "6c33ed24204e121a9dc01ad3bbdc2f5db9524417", "dest": "/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/nodes_uuid_playbook.yml", "gid": 985, "group": "mistral", "md5sum": "41c95f3964243a07846d008f156087f6", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 554, "src": "/tmp/ansible-/ansible-tmp-1531529452.23-176987368214939/source", "state": "file", "uid": 988}
>2018-07-13 20:50:52,549 p=5867 u=mistral | TASK [run nodes-uuid] **********************************************************
>2018-07-13 20:50:52,549 p=5867 u=mistral | Friday 13 July 2018 20:50:52 -0400 (0:00:00.353) 0:04:15.737 ***********
>2018-07-13 20:50:52,568 p=5867 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:50:52,587 p=5867 u=mistral | TASK [set ceph-ansible verbosity] **********************************************
>2018-07-13 20:50:52,587 p=5867 u=mistral | Friday 13 July 2018 20:50:52 -0400 (0:00:00.037) 0:04:15.775 ***********
>2018-07-13 20:50:52,604 p=5867 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:50:52,623 p=5867 u=mistral | TASK [set ceph-ansible command] ************************************************
>2018-07-13 20:50:52,624 p=5867 u=mistral | Friday 13 July 2018 20:50:52 -0400 (0:00:00.036) 0:04:15.812 ***********
>2018-07-13 20:50:52,643 p=5867 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-07-13 20:50:52,662 p=5867 u=mistral | TASK [run ceph-ansible] ********************************************************
>2018-07-13 20:50:52,662 p=5867 u=mistral | Friday 13 July 2018 20:50:52 -0400 (0:00:00.038) 0:04:15.850 ***********
>2018-07-13 20:50:52,685 p=5867 u=mistral | skipping: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": false, "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "skip_reason": "Conditional result was False"}
>2018-07-13 20:50:52,703 p=5867 u=mistral | TASK [set ceph-ansible group vars mgrs] ****************************************
>2018-07-13 20:50:52,703 p=5867 u=mistral | Friday 13 July 2018 20:50:52 -0400 (0:00:00.041) 0:04:15.891 ***********
>2018-07-13 20:50:52,737 p=5867 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_mgrs": {"ceph_mgr_docker_extra_env": "-e MGR_DASHBOARD=0"}}, "changed": false}
>2018-07-13 20:50:52,754 p=5867 u=mistral | TASK [generate ceph-ansible group vars mgrs] ***********************************
>2018-07-13 20:50:52,754 p=5867 u=mistral | Friday 13 July 2018 20:50:52 -0400 (0:00:00.050) 0:04:15.942 ***********
>2018-07-13 20:50:53,095 p=5867 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "06d130f3471f2ac09bb0161450878cf64bafd8af", "dest": "/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/group_vars/mgrs.yml", "gid": 985, "group": "mistral", "md5sum": "0d3c03a4186ad82120a728e0470a49d9", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 46, "src": "/tmp/ansible-/ansible-tmp-1531529452.79-223532335575833/source", "state": "file", "uid": 988}
>2018-07-13 20:50:53,116 p=5867 u=mistral | TASK [set ceph-ansible group vars mons] ****************************************
>2018-07-13 20:50:53,117 p=5867 u=mistral | Friday 13 July 2018 20:50:53 -0400 (0:00:00.362) 0:04:16.305 ***********
>2018-07-13 20:50:53,149 p=5867 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_mons": {"admin_secret": "AQCOP0lbAAAAABAA6ZWM1pQ4YK1kNZWnsfo2DQ==", "monitor_secret": "AQCOP0lbAAAAABAAZKSQwjy7tHAiDWgTh7UMCg=="}}, "changed": false}
>2018-07-13 20:50:53,168 p=5867 u=mistral | TASK [generate ceph-ansible group vars 
mons] *********************************** >2018-07-13 20:50:53,168 p=5867 u=mistral | Friday 13 July 2018 20:50:53 -0400 (0:00:00.051) 0:04:16.356 *********** >2018-07-13 20:50:53,509 p=5867 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "8caa975cc4faebdcc2eab758dc96d57e49f851c8", "dest": "/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/group_vars/mons.yml", "gid": 985, "group": "mistral", "md5sum": "3ef6fa22dde778d96059429596b548a3", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 112, "src": "/tmp/ansible-/ansible-tmp-1531529453.2-71217492813780/source", "state": "file", "uid": 988} >2018-07-13 20:50:53,528 p=5867 u=mistral | TASK [set ceph-ansible group vars clients] ************************************* >2018-07-13 20:50:53,528 p=5867 u=mistral | Friday 13 July 2018 20:50:53 -0400 (0:00:00.359) 0:04:16.716 *********** >2018-07-13 20:50:53,558 p=5867 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_clients": {}}, "changed": false} >2018-07-13 20:50:53,576 p=5867 u=mistral | TASK [generate ceph-ansible group vars clients] ******************************** >2018-07-13 20:50:53,576 p=5867 u=mistral | Friday 13 July 2018 20:50:53 -0400 (0:00:00.048) 0:04:16.764 *********** >2018-07-13 20:50:53,914 p=5867 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/group_vars/clients.yml", "gid": 985, "group": "mistral", "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/tmp/ansible-/ansible-tmp-1531529453.6-72703593654810/source", "state": "file", "uid": 988} >2018-07-13 20:50:53,932 p=5867 u=mistral | TASK [set ceph-ansible group vars osds] **************************************** >2018-07-13 20:50:53,933 p=5867 u=mistral | Friday 13 July 
2018 20:50:53 -0400 (0:00:00.356) 0:04:17.120 *********** >2018-07-13 20:50:53,965 p=5867 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_osds": {"devices": ["/dev/vdb", "/dev/vdc", "/dev/vdd", "/dev/vde", "/dev/vdf"], "journal_size": 512, "osd_objectstore": "filestore", "osd_scenario": "collocated"}}, "changed": false} >2018-07-13 20:50:53,983 p=5867 u=mistral | TASK [generate ceph-ansible group vars osds] *********************************** >2018-07-13 20:50:53,984 p=5867 u=mistral | Friday 13 July 2018 20:50:53 -0400 (0:00:00.050) 0:04:17.171 *********** >2018-07-13 20:50:54,336 p=5867 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "a209fd8d503be2b45dc87935a930c08a563088cb", "dest": "/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/group_vars/osds.yml", "gid": 985, "group": "mistral", "md5sum": "114fe63af169ecb1b28b951266282ba7", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 134, "src": "/tmp/ansible-/ansible-tmp-1531529454.02-273948429850860/source", "state": "file", "uid": 988} >2018-07-13 20:50:54,341 p=5867 u=mistral | PLAY [Overcloud deploy step tasks for 1] *************************************** >2018-07-13 20:50:54,364 p=5867 u=mistral | TASK [include_role] ************************************************************ >2018-07-13 20:50:54,364 p=5867 u=mistral | Friday 13 July 2018 20:50:54 -0400 (0:00:00.380) 0:04:17.552 *********** >2018-07-13 20:50:54,417 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:54,430 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:54,497 p=5867 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-07-13 20:50:54,498 p=5867 u=mistral | Friday 13 July 2018 20:50:54 -0400 (0:00:00.133) 0:04:17.686 *********** >2018-07-13 
20:50:54,971 p=5867 u=mistral | changed: [controller-0] => {"changed": true} >2018-07-13 20:50:54,995 p=5867 u=mistral | TASK [container-registry : ensure docker is installed] ************************* >2018-07-13 20:50:54,995 p=5867 u=mistral | Friday 13 July 2018 20:50:54 -0400 (0:00:00.497) 0:04:18.183 *********** >2018-07-13 20:50:55,670 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-68.gitdded712.el7.x86_64 providing docker is already installed"]} >2018-07-13 20:50:55,693 p=5867 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >2018-07-13 20:50:55,693 p=5867 u=mistral | Friday 13 July 2018 20:50:55 -0400 (0:00:00.697) 0:04:18.881 *********** >2018-07-13 20:50:56,049 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:56,072 p=5867 u=mistral | TASK [container-registry : unset mountflags] *********************************** >2018-07-13 20:50:56,073 p=5867 u=mistral | Friday 13 July 2018 20:50:56 -0400 (0:00:00.379) 0:04:19.260 *********** >2018-07-13 20:50:56,521 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-07-13 20:50:56,544 p=5867 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-07-13 20:50:56,544 p=5867 u=mistral | Friday 13 July 2018 20:50:56 -0400 (0:00:00.471) 0:04:19.732 *********** >2018-07-13 20:50:56,995 p=5867 u=mistral | changed: [controller-0] => {"backup": "", "changed": 
true, "msg": "line replaced"} >2018-07-13 20:50:57,016 p=5867 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** >2018-07-13 20:50:57,016 p=5867 u=mistral | Friday 13 July 2018 20:50:57 -0400 (0:00:00.471) 0:04:20.204 *********** >2018-07-13 20:50:57,380 p=5867 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line added"} >2018-07-13 20:50:57,403 p=5867 u=mistral | TASK [container-registry : Create additional socket directories] *************** >2018-07-13 20:50:57,404 p=5867 u=mistral | Friday 13 July 2018 20:50:57 -0400 (0:00:00.387) 0:04:20.591 *********** >2018-07-13 20:50:57,770 p=5867 u=mistral | changed: [controller-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:50:57,797 p=5867 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] ********************* >2018-07-13 20:50:57,798 p=5867 u=mistral | Friday 13 July 2018 20:50:57 -0400 (0:00:00.393) 0:04:20.985 *********** >2018-07-13 20:50:58,448 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529457.84-263050941631448/source", "state": "file", "uid": 0} >2018-07-13 20:50:58,471 p=5867 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >2018-07-13 20:50:58,471 p=5867 u=mistral | Friday 13 July 2018 20:50:58 -0400 (0:00:00.673) 0:04:21.659 *********** >2018-07-13 20:50:58,827 
p=5867 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-07-13 20:50:58,850 p=5867 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >2018-07-13 20:50:58,850 p=5867 u=mistral | Friday 13 July 2018 20:50:58 -0400 (0:00:00.378) 0:04:22.038 *********** >2018-07-13 20:50:59,209 p=5867 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-07-13 20:50:59,232 p=5867 u=mistral | TASK [container-registry : ensure docker group exists] ************************* >2018-07-13 20:50:59,232 p=5867 u=mistral | Friday 13 July 2018 20:50:59 -0400 (0:00:00.382) 0:04:22.420 *********** >2018-07-13 20:50:59,615 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false} >2018-07-13 20:50:59,639 p=5867 u=mistral | TASK [container-registry : add deployment user to docker group] **************** >2018-07-13 20:50:59,640 p=5867 u=mistral | Friday 13 July 2018 20:50:59 -0400 (0:00:00.407) 0:04:22.827 *********** >2018-07-13 20:50:59,664 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:50:59,687 p=5867 u=mistral | TASK [container-registry : force systemd to reread configs] ******************** >2018-07-13 20:50:59,687 p=5867 u=mistral | Friday 13 July 2018 20:50:59 -0400 (0:00:00.047) 0:04:22.875 *********** >2018-07-13 20:51:00,125 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "name": null, "status": {}} >2018-07-13 20:51:00,151 p=5867 u=mistral | TASK [container-registry : enable and start docker] **************************** >2018-07-13 20:51:00,151 p=5867 u=mistral | Friday 13 July 2018 20:51:00 -0400 (0:00:00.464) 0:04:23.339 *********** >2018-07-13 20:51:01,900 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "enabled": true, "name": "docker", "state": 
"started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "network.target registries.service rhel-push-plugin.socket docker-storage-setup.service basic.target systemd-journald.socket system.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target paunch-container-shutdown.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current 
--init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127792", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "basic.target docker-cleanup.timer rhel-push-plugin.socket registries.service", "Restart": "on-abnormal", "RestartUSec": "100ms", 
"Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-07-13 20:51:01,923 p=5867 u=mistral | TASK [include_role] ************************************************************ >2018-07-13 20:51:01,923 p=5867 u=mistral | Friday 13 July 2018 20:51:01 -0400 (0:00:01.771) 0:04:25.111 *********** >2018-07-13 20:51:01,953 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:51:01,977 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:51:02,032 p=5867 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-07-13 20:51:02,033 p=5867 u=mistral | Friday 13 July 2018 20:51:02 -0400 (0:00:00.109) 0:04:25.220 *********** >2018-07-13 20:51:02,398 p=5867 u=mistral | changed: [compute-0] => {"changed": true} >2018-07-13 20:51:02,416 p=5867 u=mistral | TASK [container-registry : ensure 
docker is installed] ************************* >2018-07-13 20:51:02,417 p=5867 u=mistral | Friday 13 July 2018 20:51:02 -0400 (0:00:00.383) 0:04:25.604 *********** >2018-07-13 20:51:03,056 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-68.gitdded712.el7.x86_64 providing docker is already installed"]} >2018-07-13 20:51:03,075 p=5867 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >2018-07-13 20:51:03,075 p=5867 u=mistral | Friday 13 July 2018 20:51:03 -0400 (0:00:00.658) 0:04:26.263 *********** >2018-07-13 20:51:03,431 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:51:03,452 p=5867 u=mistral | TASK [container-registry : unset mountflags] *********************************** >2018-07-13 20:51:03,452 p=5867 u=mistral | Friday 13 July 2018 20:51:03 -0400 (0:00:00.377) 0:04:26.640 *********** >2018-07-13 20:51:03,797 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-07-13 20:51:03,815 p=5867 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-07-13 20:51:03,815 p=5867 u=mistral | Friday 13 July 2018 20:51:03 -0400 (0:00:00.363) 0:04:27.003 *********** >2018-07-13 20:51:04,217 p=5867 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-07-13 20:51:04,234 p=5867 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** 
>2018-07-13 20:51:04,235 p=5867 u=mistral | Friday 13 July 2018 20:51:04 -0400 (0:00:00.419) 0:04:27.422 *********** >2018-07-13 20:51:04,650 p=5867 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line added"} >2018-07-13 20:51:04,668 p=5867 u=mistral | TASK [container-registry : Create additional socket directories] *************** >2018-07-13 20:51:04,668 p=5867 u=mistral | Friday 13 July 2018 20:51:04 -0400 (0:00:00.433) 0:04:27.856 *********** >2018-07-13 20:51:05,092 p=5867 u=mistral | changed: [compute-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:51:05,119 p=5867 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] ********************* >2018-07-13 20:51:05,119 p=5867 u=mistral | Friday 13 July 2018 20:51:05 -0400 (0:00:00.450) 0:04:28.307 *********** >2018-07-13 20:51:05,819 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529465.21-45214507853426/source", "state": "file", "uid": 0} >2018-07-13 20:51:05,836 p=5867 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >2018-07-13 20:51:05,836 p=5867 u=mistral | Friday 13 July 2018 20:51:05 -0400 (0:00:00.716) 0:04:29.024 *********** >2018-07-13 20:51:06,199 p=5867 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-07-13 20:51:06,217 p=5867 u=mistral | TASK [container-registry : 
configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >2018-07-13 20:51:06,217 p=5867 u=mistral | Friday 13 July 2018 20:51:06 -0400 (0:00:00.381) 0:04:29.405 *********** >2018-07-13 20:51:06,581 p=5867 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-07-13 20:51:06,599 p=5867 u=mistral | TASK [container-registry : ensure docker group exists] ************************* >2018-07-13 20:51:06,599 p=5867 u=mistral | Friday 13 July 2018 20:51:06 -0400 (0:00:00.381) 0:04:29.787 *********** >2018-07-13 20:51:06,965 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false} >2018-07-13 20:51:06,984 p=5867 u=mistral | TASK [container-registry : add deployment user to docker group] **************** >2018-07-13 20:51:06,984 p=5867 u=mistral | Friday 13 July 2018 20:51:06 -0400 (0:00:00.385) 0:04:30.172 *********** >2018-07-13 20:51:07,009 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:51:07,026 p=5867 u=mistral | TASK [container-registry : force systemd to reread configs] ******************** >2018-07-13 20:51:07,026 p=5867 u=mistral | Friday 13 July 2018 20:51:07 -0400 (0:00:00.042) 0:04:30.214 *********** >2018-07-13 20:51:07,426 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "name": null, "status": {}} >2018-07-13 20:51:07,445 p=5867 u=mistral | TASK [container-registry : enable and start docker] **************************** >2018-07-13 20:51:07,445 p=5867 u=mistral | Friday 13 July 2018 20:51:07 -0400 (0:00:00.418) 0:04:30.633 *********** >2018-07-13 20:51:09,158 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "rhel-push-plugin.socket basic.target 
docker-storage-setup.service network.target systemd-journald.socket system.slice registries.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target paunch-container-shutdown.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY 
$INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22966", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "basic.target rhel-push-plugin.socket registries.service docker-cleanup.timer", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": 
"system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-07-13 20:51:09,182 p=5867 u=mistral | TASK [include_role] ************************************************************ >2018-07-13 20:51:09,182 p=5867 u=mistral | Friday 13 July 2018 20:51:09 -0400 (0:00:01.736) 0:04:32.370 *********** >2018-07-13 20:51:09,212 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:51:09,240 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:51:09,252 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:51:09,275 p=5867 u=mistral | TASK [include_role] ************************************************************ >2018-07-13 20:51:09,275 p=5867 u=mistral | Friday 13 July 2018 20:51:09 -0400 (0:00:00.092) 0:04:32.463 *********** >2018-07-13 20:51:09,303 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:51:09,329 p=5867 u=mistral | skipping: [ceph-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:51:09,342 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:51:09,366 p=5867 u=mistral | TASK [include_role] ************************************************************ >2018-07-13 20:51:09,366 p=5867 u=mistral | Friday 13 July 2018 20:51:09 -0400 (0:00:00.091) 0:04:32.554 *********** >2018-07-13 20:51:09,395 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:51:09,434 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:51:09,478 p=5867 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-07-13 20:51:09,478 p=5867 u=mistral | Friday 13 July 2018 20:51:09 -0400 (0:00:00.111) 0:04:32.666 *********** >2018-07-13 20:51:09,824 p=5867 u=mistral | changed: [ceph-0] => {"changed": true} >2018-07-13 20:51:09,844 p=5867 u=mistral | TASK [container-registry : ensure docker is installed] ************************* >2018-07-13 20:51:09,844 p=5867 u=mistral | Friday 13 July 2018 20:51:09 -0400 (0:00:00.366) 0:04:33.032 *********** >2018-07-13 20:51:10,468 p=5867 u=mistral | ok: [ceph-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-68.gitdded712.el7.x86_64 providing docker is already installed"]} >2018-07-13 20:51:10,488 p=5867 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >2018-07-13 20:51:10,488 p=5867 u=mistral | Friday 13 July 2018 20:51:10 -0400 (0:00:00.643) 0:04:33.676 *********** >2018-07-13 20:51:10,822 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": 
"directory", "uid": 0} >2018-07-13 20:51:10,842 p=5867 u=mistral | TASK [container-registry : unset mountflags] *********************************** >2018-07-13 20:51:10,842 p=5867 u=mistral | Friday 13 July 2018 20:51:10 -0400 (0:00:00.354) 0:04:34.030 *********** >2018-07-13 20:51:11,185 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-07-13 20:51:11,204 p=5867 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-07-13 20:51:11,204 p=5867 u=mistral | Friday 13 July 2018 20:51:11 -0400 (0:00:00.361) 0:04:34.392 *********** >2018-07-13 20:51:11,558 p=5867 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-07-13 20:51:11,576 p=5867 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** >2018-07-13 20:51:11,577 p=5867 u=mistral | Friday 13 July 2018 20:51:11 -0400 (0:00:00.372) 0:04:34.765 *********** >2018-07-13 20:51:11,924 p=5867 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line added"} >2018-07-13 20:51:11,943 p=5867 u=mistral | TASK [container-registry : Create additional socket directories] *************** >2018-07-13 20:51:11,944 p=5867 u=mistral | Friday 13 July 2018 20:51:11 -0400 (0:00:00.366) 0:04:35.132 *********** >2018-07-13 20:51:12,297 p=5867 u=mistral | changed: [ceph-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:51:12,327 p=5867 u=mistral | TASK 
[container-registry : manage /etc/docker/daemon.json] ********************* >2018-07-13 20:51:12,327 p=5867 u=mistral | Friday 13 July 2018 20:51:12 -0400 (0:00:00.383) 0:04:35.515 *********** >2018-07-13 20:51:12,950 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529472.37-263811743021188/source", "state": "file", "uid": 0} >2018-07-13 20:51:12,968 p=5867 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >2018-07-13 20:51:12,969 p=5867 u=mistral | Friday 13 July 2018 20:51:12 -0400 (0:00:00.641) 0:04:36.156 *********** >2018-07-13 20:51:13,315 p=5867 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-07-13 20:51:13,333 p=5867 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >2018-07-13 20:51:13,333 p=5867 u=mistral | Friday 13 July 2018 20:51:13 -0400 (0:00:00.364) 0:04:36.521 *********** >2018-07-13 20:51:13,670 p=5867 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-07-13 20:51:13,690 p=5867 u=mistral | TASK [container-registry : ensure docker group exists] ************************* >2018-07-13 20:51:13,690 p=5867 u=mistral | Friday 13 July 2018 20:51:13 -0400 (0:00:00.357) 0:04:36.878 *********** >2018-07-13 20:51:14,046 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false} >2018-07-13 20:51:14,066 p=5867 u=mistral | TASK [container-registry : add deployment user to docker group] **************** >2018-07-13 20:51:14,067 p=5867 u=mistral | Friday 13 July 
2018 20:51:14 -0400 (0:00:00.376) 0:04:37.255 *********** >2018-07-13 20:51:14,092 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:51:14,111 p=5867 u=mistral | TASK [container-registry : force systemd to reread configs] ******************** >2018-07-13 20:51:14,111 p=5867 u=mistral | Friday 13 July 2018 20:51:14 -0400 (0:00:00.044) 0:04:37.299 *********** >2018-07-13 20:51:14,522 p=5867 u=mistral | ok: [ceph-0] => {"changed": false, "name": null, "status": {}} >2018-07-13 20:51:14,543 p=5867 u=mistral | TASK [container-registry : enable and start docker] **************************** >2018-07-13 20:51:14,543 p=5867 u=mistral | Friday 13 July 2018 20:51:14 -0400 (0:00:00.432) 0:04:37.731 *********** >2018-07-13 20:51:16,302 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "rhel-push-plugin.socket systemd-journald.socket registries.service network.target system.slice docker-storage-setup.service basic.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target paunch-container-shutdown.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": 
"http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": 
"18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "14903", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "docker-cleanup.timer basic.target rhel-push-plugin.socket registries.service", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-07-13 
20:51:16,304 p=5867 u=mistral | RUNNING HANDLER [container-registry : restart docker] ************************** >2018-07-13 20:51:16,304 p=5867 u=mistral | Friday 13 July 2018 20:51:16 -0400 (0:00:01.761) 0:04:39.492 *********** >2018-07-13 20:51:19,004 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Fri 2018-07-13 20:51:02 EDT", "ActiveEnterTimestampMonotonic": "452507672", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "network.target docker-storage-setup.service registries.service rhel-push-plugin.socket basic.target systemd-journald.socket system.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-07-13 20:51:01 EDT", "AssertTimestampMonotonic": "451332695", "Before": "multi-user.target shutdown.target paunch-container-shutdown.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-07-13 20:51:01 EDT", "ConditionTimestampMonotonic": "451332695", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", 
"ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "25641", "ExecMainStartTimestamp": "Fri 2018-07-13 20:51:01 EDT", "ExecMainStartTimestampMonotonic": "451333829", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Fri 2018-07-13 20:51:01 EDT] ; stop_time=[n/a] ; pid=25641 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2018-07-13 20:51:01 EDT", "InactiveExitTimestampMonotonic": "451333860", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127792", "LimitSTACK": 
"18446744073709551615", "LoadState": "loaded", "MainPID": "25641", "MemoryAccounting": "no", "MemoryCurrent": "64065536", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "registries.service rhel-push-plugin.socket docker-cleanup.timer basic.target", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "24", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestamp": "Fri 2018-07-13 20:51:02 EDT", "WatchdogTimestampMonotonic": "452507474", "WatchdogUSec": "0"}} >2018-07-13 20:51:19,059 p=5867 u=mistral | changed: [ceph-0] => 
{"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Fri 2018-07-13 20:51:16 EDT", "ActiveEnterTimestampMonotonic": "455376753", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "systemd-journald.socket rhel-push-plugin.socket docker-storage-setup.service registries.service network.target system.slice basic.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-07-13 20:51:15 EDT", "AssertTimestampMonotonic": "454175829", "Before": "paunch-container-shutdown.service multi-user.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-07-13 20:51:15 EDT", "ConditionTimestampMonotonic": "454175829", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "16756", "ExecMainStartTimestamp": "Fri 2018-07-13 20:51:15 EDT", "ExecMainStartTimestampMonotonic": "454176976", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; 
code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Fri 2018-07-13 20:51:15 EDT] ; stop_time=[n/a] ; pid=16756 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2018-07-13 20:51:15 EDT", "InactiveExitTimestampMonotonic": "454177040", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "14903", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "16756", "MemoryAccounting": "no", "MemoryCurrent": "62517248", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", 
"OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "basic.target registries.service docker-cleanup.timer rhel-push-plugin.socket", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "16", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestamp": "Fri 2018-07-13 20:51:16 EDT", "WatchdogTimestampMonotonic": "455376702", "WatchdogUSec": "0"}} >2018-07-13 20:51:19,065 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Fri 2018-07-13 20:51:09 EDT", "ActiveEnterTimestampMonotonic": "449517132", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "systemd-journald.socket system.slice network.target basic.target docker-storage-setup.service 
rhel-push-plugin.socket registries.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-07-13 20:51:08 EDT", "AssertTimestampMonotonic": "448351692", "Before": "shutdown.target multi-user.target paunch-container-shutdown.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-07-13 20:51:08 EDT", "ConditionTimestampMonotonic": "448351692", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "19498", "ExecMainStartTimestamp": "Fri 2018-07-13 20:51:08 EDT", "ExecMainStartTimestampMonotonic": "448352839", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd 
--userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Fri 2018-07-13 20:51:08 EDT] ; stop_time=[n/a] ; pid=19498 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2018-07-13 20:51:08 EDT", "InactiveExitTimestampMonotonic": "448352870", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22966", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "19498", "MemoryAccounting": "no", "MemoryCurrent": "64167936", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": 
"docker-cleanup.service", "Requires": "rhel-push-plugin.socket registries.service docker-cleanup.timer basic.target", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "19", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestamp": "Fri 2018-07-13 20:51:09 EDT", "WatchdogTimestampMonotonic": "449516987", "WatchdogUSec": "0"}} >2018-07-13 20:51:19,072 p=5867 u=mistral | PLAY [Overcloud common deploy step tasks 1] ************************************ >2018-07-13 20:51:19,101 p=5867 u=mistral | TASK [Create /var/lib/tripleo-config directory] ******************************** >2018-07-13 20:51:19,102 p=5867 u=mistral | Friday 13 July 2018 20:51:19 -0400 (0:00:02.797) 0:04:42.290 *********** >2018-07-13 20:51:19,564 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:51:19,566 
p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:51:19,626 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:51:19,650 p=5867 u=mistral | TASK [Write the puppet step_config manifest] *********************************** >2018-07-13 20:51:19,650 p=5867 u=mistral | Friday 13 July 2018 20:51:19 -0400 (0:00:00.548) 0:04:42.838 *********** >2018-07-13 20:51:20,441 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "8cc2a8154fe8261f1ad4dbbf7151db6f5d016a04", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "ea4a5c9cd9eca53a460514b61dc3d011", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1631, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529479.76-212415798085525/source", "state": "file", "uid": 0} >2018-07-13 20:51:20,450 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "44355f328588ff032fb9d91a3fdf2a8f427f6ac1", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "d14bfa59823532755440579b4b515901", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1589, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529479.8-270564775695470/source", "state": "file", "uid": 0} >2018-07-13 20:51:20,470 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "466a8f2a86c39f07687a38e5228ba59c61ec5d19", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": 
"a290d9fc287fa24e55411e78c56eb224", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1577, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529479.81-81937675966517/source", "state": "file", "uid": 0} >2018-07-13 20:51:20,497 p=5867 u=mistral | TASK [Create /var/lib/docker-puppet] ******************************************* >2018-07-13 20:51:20,497 p=5867 u=mistral | Friday 13 July 2018 20:51:20 -0400 (0:00:00.847) 0:04:43.685 *********** >2018-07-13 20:51:20,961 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-07-13 20:51:20,989 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-07-13 20:51:20,990 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-07-13 20:51:21,014 p=5867 u=mistral | TASK [Write docker-puppet.json file] ******************************************* >2018-07-13 20:51:21,014 p=5867 u=mistral | Friday 13 July 2018 20:51:21 -0400 (0:00:00.517) 0:04:44.202 *********** >2018-07-13 20:51:21,733 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "152b4a708838fcbafbb9467b8e2fef8ebdc9fe7f", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "5d3a1e62ef835a5d3b437254f032fa7a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 234, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529481.09-201002357551765/source", "state": "file", "uid": 0} >2018-07-13 20:51:21,768 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "91f1c24bac7e5030bb6b85c6913afc377e8cfc00", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "46591f2d3767a0d2e6cd2c6558198932", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2288, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529481.13-69535254512563/source", "state": "file", "uid": 0} >2018-07-13 20:51:21,792 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "173f20c0432d098b484fa87a54b9c4cfa235ffde", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "d5b4744af803766117968a25cf4f0876", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 13304, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529481.11-140418119063187/source", "state": "file", "uid": 0} >2018-07-13 20:51:21,818 p=5867 u=mistral | TASK [Create /var/lib/docker-config-scripts] *********************************** >2018-07-13 20:51:21,818 p=5867 u=mistral | Friday 13 July 2018 20:51:21 -0400 (0:00:00.803) 0:04:45.006 *********** >2018-07-13 20:51:22,233 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:51:22,236 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:51:22,283 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", 
"mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:51:22,307 p=5867 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >2018-07-13 20:51:22,307 p=5867 u=mistral | Friday 13 July 2018 20:51:22 -0400 (0:00:00.488) 0:04:45.495 *********** >2018-07-13 20:51:22,694 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-07-13 20:51:22,724 p=5867 u=mistral | ok: [ceph-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-07-13 20:51:22,776 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-07-13 20:51:22,800 p=5867 u=mistral | TASK [Write docker config scripts] ********************************************* >2018-07-13 20:51:22,800 p=5867 u=mistral | Friday 13 July 2018 20:51:22 -0400 (0:00:00.492) 0:04:45.988 *********** >2018-07-13 20:51:23,529 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c 
/etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\' | sort -u | tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', 'mode': u'0700'}, 'key': 'nova_api_discover_hosts.sh'}) => {"changed": true, "checksum": "4e350e3d48cba294f2ccab34eb03c1dee23e7f82", "dest": "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh", "gid": 0, "group": "root", "item": {"key": "nova_api_discover_hosts.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport 
OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting 
${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}}, "md5sum": "ed5dca102b28b4f992943612dee2dced", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2318, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529482.9-250179372127391/source", "state": "file", "uid": 0} >2018-07-13 20:51:23,572 p=5867 u=mistral | changed: [compute-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": true, "checksum": "03f62b0a94bee17ece72ba1a3fc7577e68d9e6a4", "dest": "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh", "gid": 0, "group": "root", "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "md5sum": "1672c3fb89d576d045d5f3d5b23684c9", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529482.92-95344861734767/source", "state": "file", "uid": 0} >2018-07-13 20:51:24,158 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if secret already exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', 'mode': u'0700'}, 'key': 'create_swift_secret.sh'}) => {"changed": true, "checksum": "e77b96beec241bb84928d298a778521376225c0d", "dest": "/var/lib/docker-config-scripts/create_swift_secret.sh", "gid": 0, "group": "root", "item": {"key": 
"create_swift_secret.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}}, "md5sum": "9277d70c2fd62961998c5fce0a8aeee2", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1125, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529483.56-55001570886327/source", "state": "file", "uid": 0} >2018-07-13 20:51:24,787 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf 
--config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': 'neutron_ovs_agent_launcher.sh'}) => {"changed": true, "checksum": "03f62b0a94bee17ece72ba1a3fc7577e68d9e6a4", "dest": "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh", "gid": 0, "group": "root", "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "md5sum": "1672c3fb89d576d045d5f3d5b23684c9", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529484.18-13449778704714/source", "state": "file", "uid": 0} >2018-07-13 20:51:25,415 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport 
OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', 'mode': u'0700'}, 'key': 'set_swift_keymaster_key_id.sh'}) => {"changed": true, "checksum": "9c2474fa6e4a8869674b689206eb1a1658a28fc6", "dest": "/var/lib/docker-config-scripts/set_swift_keymaster_key_id.sh", "gid": 0, "group": "root", "item": {"key": "set_swift_keymaster_key_id.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ 
\"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}}, "md5sum": "054225f8957e4457ef2103ce24d44b04", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1275, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529484.81-97882325199315/source", "state": "file", "uid": 0} >2018-07-13 20:51:26,038 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho "{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', 'mode': u'0700'}, 'key': 'docker_puppet_apply.sh'}) => {"changed": true, "checksum": "93afaa6df42c9ead7768b295fa901f83ae1b39ef", "dest": "/var/lib/docker-config-scripts/docker_puppet_apply.sh", "gid": 0, "group": "root", "item": {"key": "docker_puppet_apply.sh", "value": {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n 
cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}}, "md5sum": "709b2caef95cc7486f9b851414e71133", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 653, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529485.44-123799322703634/source", "state": "file", "uid": 0} >2018-07-13 20:51:26,650 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n echo "(cellv2) Creating default cell_v2 cell"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell --name=default"\nfi\n', 'mode': u'0700'}, 'key': 'nova_api_ensure_default_cell.sh'}) => {"changed": true, "checksum": "0a839197c2fa15204014befb1c771a17aea5bdd1", "dest": "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh", "gid": 0, "group": "root", "item": {"key": "nova_api_ensure_default_cell.sh", "value": {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 
cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}}, "md5sum": "12a4a82656ddaae342942097b752d9db", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 442, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529486.06-169994282053932/source", "state": "file", "uid": 0} >2018-07-13 20:51:26,680 p=5867 u=mistral | TASK [Set docker_config_default fact] ****************************************** >2018-07-13 20:51:26,681 p=5867 u=mistral | Friday 13 July 2018 20:51:26 -0400 (0:00:03.880) 0:04:49.869 *********** >2018-07-13 20:51:26,736 p=5867 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:26,745 p=5867 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:26,747 p=5867 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:26,771 p=5867 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:26,771 p=5867 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:26,772 p=5867 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:26,772 p=5867 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact 
that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:26,775 p=5867 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:26,785 p=5867 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:26,785 p=5867 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:26,785 p=5867 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:26,790 p=5867 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:26,791 p=5867 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:26,798 p=5867 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:26,801 p=5867 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:26,807 p=5867 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:26,815 p=5867 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the 
output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:26,824 p=5867 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:26,847 p=5867 u=mistral | TASK [Set docker_startup_configs_with_default fact] **************************** >2018-07-13 20:51:26,847 p=5867 u=mistral | Friday 13 July 2018 20:51:26 -0400 (0:00:00.166) 0:04:50.035 *********** >2018-07-13 20:51:26,929 p=5867 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:27,004 p=5867 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:27,438 p=5867 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:51:27,461 p=5867 u=mistral | TASK [Write docker-container-startup-configs] ********************************** >2018-07-13 20:51:27,461 p=5867 u=mistral | Friday 13 July 2018 20:51:27 -0400 (0:00:00.614) 0:04:50.649 *********** >2018-07-13 20:51:28,174 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "479895147cdb31906634574abb75e134fdb4a451", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "5cf032b3b4b441f5b1365ec866cd076d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1055, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529487.53-149374046930827/source", "state": "file", "uid": 0} >2018-07-13 20:51:28,181 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "90be1fdf4f2f5e8c43bf2648b8f904b7306470c2", 
"dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "fb3320ef24503daf1bb1e6f55b06fb8b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 105397, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529487.51-120087175487972/source", "state": "file", "uid": 0} >2018-07-13 20:51:28,238 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "d6d962e949634724dc81b6d5c797706d0e22c045", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "29957f226630b9ba138228736f4964b9", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 11960, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529487.56-118360351066561/source", "state": "file", "uid": 0} >2018-07-13 20:51:28,261 p=5867 u=mistral | TASK [Write per-step docker-container-startup-configs] ************************* >2018-07-13 20:51:28,261 p=5867 u=mistral | Friday 13 July 2018 20:51:28 -0400 (0:00:00.799) 0:04:51.449 *********** >2018-07-13 20:51:28,978 p=5867 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529488.34-219095157445140/source", "state": "file", "uid": 0} >2018-07-13 20:51:29,009 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'cinder_volume_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-07-13.3' 
'192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_image_tag': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_data_ownership': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3', 'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], 'user': u'root', 'volumes': [u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'redis_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-07-13.3' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_bootstrap': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=QzjIyn8tCyQGDZJHCryn29WdI', 
u'DB_ROOT_PASSWORD=tSWI51tcsi'], 'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'haproxy_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-07-13.3' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', 
u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-07-13.3' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'cinder_backup_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-07-13.3' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_bootstrap': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=e8y8AdnQFfiHK1gbj69r'], 'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], 'net': u'host', 'privileged': False}, 'memcached': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-07-13.3', 'command': [u'/bin/bash', 
u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}}, 'key': u'step_1'}) => {"changed": true, "checksum": "a0f1c4185144c02715bf4fe141f914e627da2a18", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-07-13.3' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-07-13.3", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-07-13.3' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-07-13.3", "net": "host", "start_order": 1, "user": "root", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-07-13.3' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-07-13.3", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF 
&\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=QzjIyn8tCyQGDZJHCryn29WdI", "DB_ROOT_PASSWORD=tSWI51tcsi"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": 
"192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=e8y8AdnQFfiHK1gbj69r"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-07-13.3", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-07-13.3' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-07-13.3", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-07-13.3' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-07-13.3", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}}, 
"md5sum": "ea59baadf586e73c2143bf41695e98cf", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6913, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529488.35-230473816887058/source", "state": "file", "uid": 0} >2018-07-13 20:51:29,033 p=5867 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529488.37-250368733516575/source", "state": "file", "uid": 0} >2018-07-13 20:51:29,596 p=5867 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_3'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529488.98-47193268012260/source", "state": "file", "uid": 0} >2018-07-13 20:51:29,683 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'nova_placement': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'restart': u'always'}, 'swift_rsync_fix': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u'sed -i "/pid file/d" /var/lib/kolla/config_files/src/etc/rsyncd.conf'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw'], 'net': u'host', 'detach': False}, 'nova_db_sync': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', 
u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'heat_engine_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-07-13.3', 'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_copy_rings': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-07-13.3', 'detach': False, 'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, 'nova_api_ensure_default_cell': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3', 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], 'net': u'host', 'detach': False}, 'keystone_cron': {'start_order': 4, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': 
False, 'restart': u'always'}, 'panko_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-07-13.3', 'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_backup_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'nova_api_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'keystone_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-07-13.3', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'detach': False, 'privileged': False}, 'ceilometer_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'start_order': 0, 'volumes': [u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'user': u'root'}, 'keystone': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', 
u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-07-13.3', 'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'cinder_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-07-13.3', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'nova_api_map_cell0': {'start_order': 1, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'glance_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-07-13.3', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'detach': False, 'privileged': False}, 'neutron_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3', 'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'sahara_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-07-13.3', 'command': u"/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'detach': False, 'privileged': False}, 'keystone_bootstrap': {'action': u'exec', 'start_order': 3, 'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'EbibF45YI3iewLaM7KZgYtdM2'], 'user': u'root'}, 'horizon': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_setup_srv': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-07-13.3', 'command': [u'chown', u'-R', u'swift:', u'/srv/node'], 'user': u'root', 'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) => {"changed": true, "checksum": "79708ba39b319cf878ef35915c498625a73410b8", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-07-13.3", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-07-13.3", "start_order": 0, "user": "root", "volumes": 
["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-07-13.3", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-07-13.3", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-07-13.3", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-glance-api:2018-07-13.3", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-07-13.3", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", 
"ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-07-13.3", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "EbibF45YI3iewLaM7KZgYtdM2"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-07-13.3", 
"net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-07-13.3", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", 
"/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-07-13.3", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", 
"/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-07-13.3", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-07-13.3", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-07-13.3", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_rsync_fix": {"command": ["/bin/bash", "-c", "sed -i \"/pid file/d\" /var/lib/kolla/config_files/src/etc/rsyncd.conf"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-07-13.3", "net": "host", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-07-13.3", "user": "root", "volumes": ["/srv/node:/srv/node"]}}}, "md5sum": "6d1132161e39cf9b865ca15f5f437961", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 22165, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529489.01-56747795275381/source", "state": "file", "uid": 0} >2018-07-13 20:51:29,707 p=5867 u=mistral | changed: [compute-0] => (item={'value': {'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include 
neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'nova_libvirt': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-07-13.3', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', 
u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/var/log/containers/libvirt:/var/log/libvirt', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro', u'/var/lib/vhost_sockets:/var/lib/vhost_sockets', u'/sys/fs/selinux:/sys/fs/selinux'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_virtlogd': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-07-13.3', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/etc/libvirt/qemu:/etc/libvirt/qemu:ro', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu'], 'net': u'host', 'privileged': True, 'restart': u'always'}}, 'key': u'step_3'}) => {"changed": true, "checksum": "19403ab558766e5232171c3c2e5e647692a902d4", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-07-13.3", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", 
"/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-07-13.3", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-07-13.3", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", 
"/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}}, "md5sum": "153cd84d5b21f86abfab5ef72e339a0d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 5101, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529489.04-192692544477769/source", "state": "file", "uid": 0} >2018-07-13 20:51:30,237 p=5867 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529489.6-32941712928854/source", "state": "file", "uid": 0} >2018-07-13 20:51:30,363 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'gnocchi_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], 'user': u'root', 'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, 'mysql_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1531528515'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include 
::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/mysql:/var/lib/mysql:rw'], 'net': u'host', 'detach': False}, 'gnocchi_init_lib': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], 'user': u'root', 'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, 'cinder_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'user': u'root'}, 'create_dnsmasq_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-07-13.3', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'panko_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u'chown -R panko:panko /var/log/panko'], 'user': u'root', 'volumes': [u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, 'redis_init_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-07-13.3', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1531528515'], 'config_volume': u'redis_init_bundle', 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', 
u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_scheduler_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'user': u'root'}, 'glance_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], 'privileged': False, 'volumes': [u'/var/log/containers/glance:/var/log/glance'], 'user': u'root'}, 'clustercheck': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'restart': u'always'}, 'haproxy_init_bundle': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-07-13.3', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1531528515'], 'command': [u'/docker_puppet_apply.sh', u'2', 
u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False, 'privileged': True}, 'neutron_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], 'privileged': False, 'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], 'user': u'root'}, 'mysql_restart_bundle': {'start_order': 0, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3', 'config_volume': u'mysql', 'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'rabbitmq_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-07-13.3', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1531528515'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], 'net': u'host', 'detach': False}, 'nova_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], 'user': u'root'}, 'haproxy_restart_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-07-13.3', 'config_volume': u'haproxy', 'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'create_keepalived_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-07-13.3', 'pid': u'host', 
'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'rabbitmq_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-07-13.3', 'config_volume': u'rabbitmq', 'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 
'detach': False}, 'horizon_fix_perms': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'user': u'root', 'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, 'aodh_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], 'user': u'root', 'volumes': [u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, 'nova_metadata_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova'], 'user': u'root'}, 'redis_restart_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-07-13.3', 'config_volume': u'redis', 'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'heat_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], 'user': u'root', 'volumes': [u'/var/log/containers/heat:/var/log/heat']}, 'nova_placement_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'start_order': 1, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], 'user': u'root'}, 'keystone_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], 'start_order': 1, 'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], 'user': u'root'}}, 'key': u'step_2'}) => {"changed": true, "checksum": "a6d8f03c5798ef3888648072686f136ef61037ec", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-07-13.3", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-07-13.3", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": 
["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-07-13.3", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-07-13.3", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", 
"/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-07-13.3", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-07-13.3", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-07-13.3", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-07-13.3", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", 
"/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1531528515"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-07-13.3", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 
haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-07-13.3", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-07-13.3", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-07-13.3", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-07-13.3", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": 
["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1531528515"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3", "privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-07-13.3", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-07-13.3", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", 
"file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1531528515"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-07-13.3", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-07-13.3", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1531528515"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-07-13.3", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, 
"image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-07-13.3", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}}, "md5sum": "2d16baba3327434070304638f95e2096", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 17318, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529489.68-109555845790891/source", "state": "file", "uid": 0} >2018-07-13 20:51:30,375 p=5867 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529489.71-55092834468012/source", "state": "file", "uid": 0} >2018-07-13 20:51:30,860 p=5867 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {}}, "md5sum": 
"99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529490.25-189781830224207/source", "state": "file", "uid": 0} >2018-07-13 20:51:31,016 p=5867 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529490.37-188671645145296/source", "state": "file", "uid": 0} >2018-07-13 20:51:31,031 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'cinder_volume_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-07-13.3', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1531528515'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_volume_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-07-13.3', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_statsd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-07-13.3', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_backup', u'if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo "openstack-cinder-backup restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_metricd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_discover_hosts': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1531528515'], 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', 
u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], 'net': u'host', 'detach': False}, 'ceilometer_gnocchi_upgrade': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-07-13.3', 'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'detach': False, 'privileged': False}, 'gnocchi_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-07-13.3', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1531528515'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}}, 'key': u'step_5'}) => {"changed": true, "checksum": "338384703e3e3467edae4ae5c1f8485800b6e5b0", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {"ceilometer_gnocchi_upgrade": {"command": 
["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-07-13.3", "net": "host", "privileged": false, "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1531528515"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-07-13.3", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-07-13.3", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1531528515"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-07-13.3", "net": "host", "start_order": 1, 
"user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-07-13.3", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, 
"image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", 
"/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1531528515"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}}}, "md5sum": "eac81399db722f382104c8011d3e6954", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 10552, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529490.36-71071017997160/source", "state": "file", "uid": 0} >2018-07-13 20:51:31,472 p=5867 u=mistral | changed: [ceph-0] => (item={'value': {'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": 
"ad177b3b3e81d2c4bc8ab067a10cc6dcbe9e0aeb", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}}, "md5sum": "c8bb5971cdb60e0cc95cbc0139b95d77", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 973, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529490.87-137516034157588/source", "state": "file", "uid": 0} >2018-07-13 20:51:31,681 p=5867 u=mistral | changed: [compute-0] => (item={'value': {'ceilometer_agent_compute': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/run/libvirt:/var/run/libvirt:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_libvirt_init_secret': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-07-13.3', 'command': [u'/bin/bash', u'-c', u"/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '60442a52-86fa-11e8-982e-525400401c2c' --base64 'AQCOP0lbAAAAABAAlwoiCh/bQzLpXLvjP1FIBQ=='"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt'], 'detach': False, 'privileged': False}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-07-13.3', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_migration_target': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ssh/:/host-ssh/:ro', u'/run:/run', 
u'/var/lib/nova:/var/lib/nova:shared'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_compute': {'ipc': u'host', 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/dev:/dev', u'/lib/modules:/lib/modules:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/libvirt:/var/lib/libvirt', u'/sys/class/net:/sys/class/net', u'/sys/bus/pci:/sys/bus/pci'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "7bb8c4660d590bbac0548b927a7fc27d171f8037", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", "net": "none", "pid": "host", "privileged": true, 
"restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-07-13.3", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", 
"/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-07-13.3", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '60442a52-86fa-11e8-982e-525400401c2c' --base64 'AQCOP0lbAAAAABAAlwoiCh/bQzLpXLvjP1FIBQ=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-07-13.3", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-07-13.3", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}}, "md5sum": "55f6bf659903030f19176689824d3d93", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6779, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529491.03-175761088430485/source", "state": "file", "uid": 0} >2018-07-13 20:51:31,756 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'swift_container_updater': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_evaluator': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 
'nova_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'/run:/run'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', 
u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'cinder_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_proxy': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'restart': u'always'}, 'neutron_dhcp': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-07-13.3', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_metadata_agent': {'start_order': 10, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-07-13.3', 'pid': 
u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'ceilometer_agent_central': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': 
False, 'restart': u'always'}, 'keystone_refresh': {'action': u'exec', 'start_order': 1, 'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], 'user': u'root'}, 'swift_account_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_notifier': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_consoleauth': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'gnocchi_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_account_reaper': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'ceilometer_agent_notification': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_vnc_proxy': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_rsync': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'aodh_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_metadata': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_server': {'healthcheck': 
{'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 
'net': u'host', 'restart': u'always'}, 'neutron_l3_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-07-13.3', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_conductor': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'heat_api_cfn': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_api': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-07-13.3', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', 
u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_container_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_expirer': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'heat_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'panko_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_listener': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_api': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'glance_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-07-13.3', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "eb61f8cd650edd359ce1c3a5e1e202ddeae54c2b", "dest": 
"/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-07-13.3", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, 
"image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", 
"/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-07-13.3", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-07-13.3", "net": "host", "pid": "host", "privileged": true, 
"restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-07-13.3", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-07-13.3", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3", "net": "host", "privileged": true, "restart": "always", 
"start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-07-13.3", "net": 
"host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-07-13.3", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": 
{"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-07-13.3", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-07-13.3", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-07-13.3", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-07-13.3", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, 
"swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-07-13.3", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-07-13.3", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, 
"swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-07-13.3", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-07-13.3", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", 
"/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-07-13.3", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-07-13.3", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", 
"/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-07-13.3", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-07-13.3", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", 
"/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-07-13.3", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-07-13.3", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-07-13.3", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}}, "md5sum": "3248524530e5be760cfc974e88da16c2", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 48375, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529491.04-115984049003820/source", "state": "file", "uid": 0} >2018-07-13 20:51:32,084 p=5867 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529491.48-190347395853874/source", "state": "file", "uid": 0} >2018-07-13 20:51:32,309 p=5867 u=mistral | changed: [compute-0] => 
(item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529491.69-83136398685003/source", "state": "file", "uid": 0} >2018-07-13 20:51:32,382 p=5867 u=mistral | changed: [controller-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529491.73-153308936204308/source", "state": "file", "uid": 0} >2018-07-13 20:51:32,563 p=5867 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-07-13 20:51:32,563 p=5867 u=mistral | Friday 13 July 2018 20:51:32 -0400 (0:00:04.301) 0:04:55.751 *********** >2018-07-13 20:51:32,944 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:51:32,947 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:51:32,987 p=5867 u=mistral | changed: [compute-0] => {"changed": 
true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-07-13 20:51:33,013 p=5867 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-07-13 20:51:33,014 p=5867 u=mistral | Friday 13 July 2018 20:51:33 -0400 (0:00:00.450) 0:04:56.202 *********** >2018-07-13 20:51:33,701 p=5867 u=mistral | changed: [ceph-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529493.1-151836012128457/source", "state": "file", "uid": 0} >2018-07-13 20:51:33,805 p=5867 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s 
-n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529493.15-46042315857870/source", "state": "file", "uid": 0} >2018-07-13 20:51:33,913 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529493.25-151834206744078/source", "state": "file", "uid": 0} >2018-07-13 20:51:34,459 p=5867 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": true, "checksum": "40f9ceb4dd2fc8e9c51bf5152a0fa8e1d16d9137", "dest": "/var/lib/kolla/config_files/iscsid.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, 
"preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "md5sum": "9cd3c2dc0153b127d70141dadfabd12c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 175, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529493.81-250193811323804/source", "state": "file", "uid": 0} >2018-07-13 20:51:34,554 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/keystone.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/keystone.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/keystone.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529493.92-87717177885567/source", "state": "file", "uid": 0} >2018-07-13 20:51:35,097 p=5867 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/libvirtd', 'permissions': [{'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_libvirt.json'}) => {"changed": true, 
"checksum": "b50cbe1f8b020aa49249248b57310f45005813b3", "dest": "/var/lib/kolla/config_files/nova_libvirt.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_libvirt.json", "value": {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "8356787bbcfcb5674a0bf2570719654a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 512, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529494.47-159862297000063/source", "state": "file", "uid": 0} >2018-07-13 20:51:35,194 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/lib/cinder', 'recurse': True}, {'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_backup.json'}) => {"changed": true, "checksum": "0e697e31bdc439b99552bac9ffe0bab07f2af4a4", "dest": "/var/lib/kolla/config_files/cinder_backup.json", "gid": 0, "group": "root", "item": {"key": 
"/var/lib/kolla/config_files/cinder_backup.json", "value": {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "8e107eb8f6989be8375a0ff2dd5b4d57", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529494.56-185423627405753/source", "state": "file", "uid": 0} >2018-07-13 20:51:35,712 p=5867 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ssh/', 'owner': u'root', 'perm': u'0600', 'source': u'/host-ssh/ssh_host_*_key'}], 'command': u'/usr/sbin/sshd -D -p 2022'}, 'key': '/var/lib/kolla/config_files/nova-migration-target.json'}) => {"changed": true, "checksum": "6a0a936a324363cd605e22c2327c17deb6dfbec2", "dest": "/var/lib/kolla/config_files/nova-migration-target.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova-migration-target.json", "value": {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}}, "md5sum": "161558d57b182ca70c6f9bbd7fcbda8a", "mode": "0600", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 258, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529495.1-113171092109616/source", "state": "file", "uid": 0} >2018-07-13 20:51:35,845 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529495.2-61396519883515/source", "state": "file", "uid": 0} >2018-07-13 20:51:36,340 p=5867 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf'}, 'key': '/var/lib/kolla/config_files/nova_virtlogd.json'}) => {"changed": true, "checksum": "8bbfe195e54ddfe481aaad9744174f7344d49681", "dest": "/var/lib/kolla/config_files/nova_virtlogd.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_virtlogd.json", "value": {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "786b962e2df778e3ce02b185ef93deac", "mode": 
"0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 193, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529495.72-124966199638517/source", "state": "file", "uid": 0} >2018-07-13 20:51:36,489 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_auditor.json'}) => {"changed": true, "checksum": "413730fbf3f7935085cfda60cbc1535d8bce0caf", "dest": "/var/lib/kolla/config_files/swift_account_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_auditor.json", "value": {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "dfccd947a56ceb6fa2b71c400281a365", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 200, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529495.85-211058948308256/source", "state": "file", "uid": 0} >2018-07-13 20:51:36,999 p=5867 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": true, "checksum": "bd1c4f0459f65e7f67a969a89c74a8b8cdcfd9f8", "dest": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": 
[{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "3599cf6b814b7c628c2887996ca46138", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 261, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529496.35-130230542181301/source", "state": "file", "uid": 0} >2018-07-13 20:51:37,141 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_replicator.json'}) => {"changed": true, "checksum": "2bf5ca66cb377c9fa3e6880f8b078d1312470cde", "dest": "/var/lib/kolla/config_files/swift_account_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_replicator.json", "value": {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d4a857b7e18f40f1cc1e6fd265c89770", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 203, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529496.5-15034644418700/source", "state": "file", "uid": 0} >2018-07-13 20:51:37,666 p=5867 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': 
True}], 'command': u'/usr/bin/nova-compute ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/var/lib/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_compute.json'}) => {"changed": true, "checksum": "bb1c3bcd199b74791ea32746c08f4925a3b585a2", "dest": "/var/lib/kolla/config_files/nova_compute.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_compute.json", "value": {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/var/lib/nova", "recurse": true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "70b809037933259f45bb1585e9e6a4cc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 643, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529497.01-73085697779391/source", "state": "file", "uid": 0} >2018-07-13 20:51:37,802 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-notifier', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_notifier.json'}) => {"changed": true, "checksum": "e01d19d7f7cff24dfcc0d132b7d8ceabba199142", "dest": 
"/var/lib/kolla/config_files/aodh_notifier.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_notifier.json", "value": {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "5d4a748030a9a7476ccbd8902fb654fc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 244, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529497.15-264016812358170/source", "state": "file", "uid": 0} >2018-07-13 20:51:38,322 p=5867 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_compute.json'}) => {"changed": true, "checksum": "4b3e97fcd87fd70b35934d1ef908747f302a4d11", "dest": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d91832a36a0ad3616a4e78c1af7d0db5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 237, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529497.67-216166677254243/source", "state": "file", "uid": 0} >2018-07-13 20:51:38,450 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}], 'command': u'/usr/bin/nova-scheduler ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_scheduler.json'}) => {"changed": true, "checksum": "23416bae23a2c08d2c534f76d19f8c4bad40ee92", "dest": "/var/lib/kolla/config_files/nova_scheduler.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_scheduler.json", "value": {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "d00e4198d95dede3f0b6ac351d57a982", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529497.81-133651071952543/source", "state": "file", "uid": 0} >2018-07-13 20:51:39,041 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cron.json'}) => {"changed": true, "checksum": "a13a92b47f931e2e89d7e4bf5057a4307ab9cd45", "dest": "/var/lib/kolla/config_files/heat_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "e671c4783cc86fb2ad300fcd11b2f99b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 240, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529498.46-61973003855815/source", "state": "file", "uid": 0} >2018-07-13 20:51:39,652 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/certs/neutron.crt'}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': '/var/lib/kolla/config_files/neutron_dhcp.json'}) => {"changed": true, "checksum": "da289f102f641cdd0a02df41c443d7d8387741a5", "dest": "/var/lib/kolla/config_files/neutron_dhcp.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_dhcp.json", "value": {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": 
"/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}}, "md5sum": "c5975567082648a9da814c433c49f2d6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 875, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529499.05-50082013154958/source", "state": "file", "uid": 0} >2018-07-13 20:51:40,253 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', 'permissions': [{'owner': u'haproxy:haproxy', 'path': u'/var/lib/haproxy', 'recurse': True}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/certs/haproxy/*', 'optional': True, 'perm': u'0600'}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/private/haproxy/*', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/haproxy.json'}) => {"changed": true, "checksum": "0801385cb9292b3b6eb8440166435242bd90e288", "dest": "/var/lib/kolla/config_files/haproxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/haproxy.json", "value": {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": 
true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}}, "md5sum": "a2742f7abd50bb0af0a4ba55b2f1f4ff", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 648, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529499.66-111243627796127/source", "state": "file", "uid": 0} >2018-07-13 20:51:40,857 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api_cron.json'}) => {"changed": true, "checksum": "c1a1552a71f4daefebff5234f9d8ba71f4c64d76", "dest": "/var/lib/kolla/config_files/nova_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "6b8ef057a2e5539eacd9f29fc4b94036", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 240, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529500.26-75258607562103/source", "state": "file", "uid": 0} >2018-07-13 20:51:41,464 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/bootstrap_host_exec gnocchi_api 
/usr/bin/gnocchi-upgrade --sacks-number=128', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_db_sync.json'}) => {"changed": true, "checksum": "a6d2eb62af2f11437c704d13adf72d498324ce2a", "dest": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "value": {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "d586f0c2ff043bece10efff986d635a3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 531, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529500.87-259984093218556/source", "state": "file", "uid": 0} >2018-07-13 20:51:42,074 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_reaper.json'}) => {"changed": true, "checksum": "b061cf7478060add5d079aafaeae81b445251a8f", "dest": "/var/lib/kolla/config_files/swift_account_reaper.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_reaper.json", "value": {"command": "/usr/bin/swift-account-reaper 
/etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "0f3bbe74ca95c8cca321ee32e2aff7d1", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529501.47-232593701796462/source", "state": "file", "uid": 0} >2018-07-13 20:51:42,682 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-engine.json'}) => {"changed": true, "checksum": "b7397fff831b47db0b6111663d816a64a389cb25", "dest": "/var/lib/kolla/config_files/sahara-engine.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/sahara-engine.json", "value": {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "md5sum": "ac2c7a84fc46a1f1d128201ce5b67c2d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 360, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529502.08-214999132071333/source", "state": "file", "uid": 0} >2018-07-13 20:51:43,290 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': 
u'/dev/null'}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'redis:redis', 'path': u'/var/run/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/lib/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/log/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/certs/redis.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/private/redis.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/redis.json'}) => {"changed": true, "checksum": "66d6d6bd51aaa0c100cdfc7688267a4342c7859f", "dest": "/var/lib/kolla/config_files/redis.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/redis.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}}, "md5sum": "ceafff1d742633f8759bdb1af0e3ebd4", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", 
"size": 843, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529502.69-163560098540163/source", "state": "file", "uid": 0} >2018-07-13 20:51:43,895 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_vnc_proxy.json'}) => {"changed": true, "checksum": "b64555136537c36af22340fb15f21f0e01ac3495", "dest": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "value": {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "557a4e9522f54cfbd6456516e67f4971", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 271, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529503.3-9238052520655/source", "state": "file", "uid": 0} >2018-07-13 20:51:44,496 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', 'permissions': [{'owner': u'glance:glance', 'path': u'/var/lib/glance', 'recurse': True}, {'owner': u'glance:glance', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': 
'/var/lib/kolla/config_files/glance_api.json'}) => {"changed": true, "checksum": "2a93405ac579e31c6e5732983f3d7dd8bed55b33", "dest": "/var/lib/kolla/config_files/glance_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/glance_api.json", "value": {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "30c5fe40dffc304e7edeab4019e96e92", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 556, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529503.9-106007977135921/source", "state": "file", "uid": 0} >2018-07-13 20:51:45,110 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_auditor.json'}) => {"changed": true, "checksum": "739f6562d3ea24561c6d8bcf37041a9eac928257", "dest": "/var/lib/kolla/config_files/swift_container_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_auditor.json", "value": {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "b63816c7c08aef58249d13b65b387da6", "mode": "0600", "owner": 
"root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 204, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529504.51-54552285255013/source", "state": "file", "uid": 0} >2018-07-13 20:51:45,707 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-panko/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', 'permissions': [{'owner': u'root:ceilometer', 'path': u'/etc/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) => {"changed": true, "checksum": "98adef088b2ae2648ac88b812890957ec54eff13", "dest": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "value": {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}}, "md5sum": "4a38c9578181c292891f5f7bdb9f791b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 428, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529505.12-216901895288835/source", "state": "file", "uid": 0} >2018-07-13 20:51:46,306 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-expirer 
/etc/swift/object-expirer.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_expirer.json'}) => {"changed": true, "checksum": "ebbb7ee6895cea2b9278f33e888881d3d3f1a68a", "dest": "/var/lib/kolla/config_files/swift_object_expirer.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_expirer.json", "value": {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "e4bf891d8ffc9a015be201a6ef0d5abc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529505.72-83888291656360/source", "state": "file", "uid": 0} >2018-07-13 20:51:46,920 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_central.json'}) => {"changed": true, "checksum": "53d52f7d52f0fb3da33de2c20414eb3248593fdd", "dest": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "2863f917d7ada51e9570fb53bb363eed", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 237, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529506.31-278863049675667/source", "state": "file", "uid": 0} >2018-07-13 20:51:47,534 p=5867 
u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api.json'}) => {"changed": true, "checksum": "454582321236a137f78205f328bae190c02f06b0", "dest": "/var/lib/kolla/config_files/heat_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "c04ac0476ee6639fadf252b0e9d9649b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529506.93-3464251291273/source", "state": "file", "uid": 0} >2018-07-13 20:51:48,151 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': '/var/lib/kolla/config_files/swift_rsync.json'}) => {"changed": true, "checksum": "44a8f1a58092190d553d3f589cab9ae566f8dc81", "dest": "/var/lib/kolla/config_files/swift_rsync.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_rsync.json", "value": {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "886febadf691905adf0c129f3aa0197a", "mode": "0600", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 200, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529507.54-138154724715969/source", "state": "file", "uid": 0} >2018-07-13 20:51:48,788 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_server.json'}) => {"changed": true, "checksum": "279b64a7d6914d2a03c86c703f53e3d71b1daef1", "dest": "/var/lib/kolla/config_files/swift_account_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_server.json", "value": {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "b41d67c146c800142c5405fe5a0b332e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529508.16-248021208884359/source", "state": "file", "uid": 0} >2018-07-13 20:51:49,405 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api_cron.json'}) => {"changed": true, "checksum": "06055a69fec2bc513b4c86ceb654a5fc29bd0866", "dest": "/var/lib/kolla/config_files/cinder_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "801aba1299d99bfd7e63f66ca7a4ba40", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529508.8-234921542704348/source", "state": "file", "uid": 0} >2018-07-13 20:51:50,006 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_proxy.json'}) => {"changed": true, "checksum": "a0874b803c5238a4eeb12b1265d5d1db93c0d3d4", "dest": "/var/lib/kolla/config_files/swift_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_proxy.json", "value": {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "a38e4e3ae519b3b0824e19184e521b36", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 195, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529509.41-59424230055109/source", "state": "file", "uid": 0} >2018-07-13 20:51:50,605 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_updater.json'}) => {"changed": true, "checksum": "8dbfc3669a6d79fb30702be502ced7501500480a", "dest": "/var/lib/kolla/config_files/swift_container_updater.json", "gid": 0, "group": "root", "item": {"key": 
"/var/lib/kolla/config_files/swift_container_updater.json", "value": {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "a697319d04392dc572dff6236144571f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 204, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529510.01-25914780807280/source", "state": "file", "uid": 0} >2018-07-13 20:51:51,231 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/xinetd -dontfork'}, 'key': '/var/lib/kolla/config_files/clustercheck.json'}) => {"changed": true, "checksum": "3c87335a28b992f90769aea9ea62fb610f8236f1", "dest": "/var/lib/kolla/config_files/clustercheck.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/clustercheck.json", "value": {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d74434e7b8bcaca0b227152346c13db8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 165, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529510.61-231041949103430/source", "state": "file", "uid": 0} >2018-07-13 20:51:51,833 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': 
u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'mysql:mysql', 'path': u'/var/log/mysql', 'recurse': True}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/certs/mysql.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/private/mysql.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/mysql.json'}) => {"changed": true, "checksum": "b52f0d28ed1ac134c64994c08b3f2378e8dff494", "dest": "/var/lib/kolla/config_files/mysql.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/mysql.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}}, "md5sum": "4d15ed291dbe96e88b9a128b0e5c99e9", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 687, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529511.24-12833695518231/source", "state": "file", "uid": 0} >2018-07-13 20:51:52,436 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_placement.json'}) => {"changed": true, "checksum": 
"d061b71e9106733354c297cbb7b327a22e476de5", "dest": "/var/lib/kolla/config_files/nova_placement.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_placement.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "941db485b7079f2f0e008e1bdff8e45f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529511.84-50199675678991/source", "state": "file", "uid": 0} >2018-07-13 20:51:53,038 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-api.json'}) => {"changed": true, "checksum": "fd070eb1bdc97442fddc24f503fe5e3251b89e28", "dest": "/var/lib/kolla/config_files/sahara-api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/sahara-api.json", "value": {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "md5sum": "bd52668d37c227cc00c418bbe889ab90", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 357, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529512.44-29030995248903/source", "state": "file", "uid": 0} >2018-07-13 20:51:53,634 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_api.json'}) => {"changed": true, "checksum": "f4177197cb07127689ae10a60020efa3a5e0d457", "dest": "/var/lib/kolla/config_files/aodh_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "582326e52a94260e71a4a19dc4d75191", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529513.04-122209266726638/source", "state": "file", "uid": 0} >2018-07-13 20:51:54,249 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'keystone:keystone', 'path': u'/var/log/keystone', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/keystone_cron.json'}) => {"changed": true, "checksum": "815ba71e0584cb12e7d40f794603c6bfb1800626", "dest": "/var/lib/kolla/config_files/keystone_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/keystone_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, 
"source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}}, "md5sum": "b3b3bbd6499e09c424665311a5e66136", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 252, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529513.64-82036712740427/source", "state": "file", "uid": 0} >2018-07-13 20:51:54,864 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529514.26-263199689630700/source", "state": "file", "uid": 0} >2018-07-13 20:51:55,492 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_replicator.json'}) => {"changed": true, "checksum": "659d25615392d81b2f6bc001067232495de4d6ac", "dest": "/var/lib/kolla/config_files/swift_object_replicator.json", "gid": 0, "group": "root", "item": {"key": 
"/var/lib/kolla/config_files/swift_object_replicator.json", "value": {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "cdea8a372a87263d5fc44b482867a705", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 201, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529514.87-259865023697192/source", "state": "file", "uid": 0} >2018-07-13 20:51:56,103 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-conductor ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_conductor.json'}) => {"changed": true, "checksum": "01a54792c74d0ebd057e8d0f44e6e8e619283e62", "dest": "/var/lib/kolla/config_files/nova_conductor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_conductor.json", "value": {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "ccbba0ad7a926ceca2bf858b8a9cc376", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529515.5-241020534955771/source", "state": "file", "uid": 0} >2018-07-13 20:51:56,697 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 
'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cfn.json'}) => {"changed": true, "checksum": "454582321236a137f78205f328bae190c02f06b0", "dest": "/var/lib/kolla/config_files/heat_api_cfn.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api_cfn.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "c04ac0476ee6639fadf252b0e9d9649b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529516.11-242955994758516/source", "state": "file", "uid": 0} >2018-07-13 20:51:57,280 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-api-metadata ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_metadata.json'}) => {"changed": true, "checksum": "edb529183cc509ea82818edf4d88e3650b5ffc57", "dest": "/var/lib/kolla/config_files/nova_metadata.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_metadata.json", "value": {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "45129bd8b5b9aef067edb558a9fb2c68", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 249, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529516.7-197669494072770/source", "state": "file", "uid": 0} >2018-07-13 20:51:57,880 
p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": true, "checksum": "bd1c4f0459f65e7f67a969a89c74a8b8cdcfd9f8", "dest": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "3599cf6b814b7c628c2887996ca46138", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 261, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529517.29-168678539452508/source", "state": "file", "uid": 0} >2018-07-13 20:51:58,476 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/lib/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/var/log/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/certs/rabbitmq.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'rabbitmq:rabbitmq', 'path': 
u'/etc/pki/tls/private/rabbitmq.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/rabbitmq.json'}) => {"changed": true, "checksum": "205ddacf194881a04c54779e3049b3c59ef6c4af", "dest": "/var/lib/kolla/config_files/rabbitmq.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/rabbitmq.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}}, "md5sum": "1097dade2a2355fd51207668004d093d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 792, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529517.88-80583191879720/source", "state": "file", "uid": 0} >2018-07-13 20:51:59,068 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-consoleauth ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_consoleauth.json'}) => {"changed": true, "checksum": "a960878859377dfae6334d9b7eaa9f554ab31798", "dest": "/var/lib/kolla/config_files/nova_consoleauth.json", "gid": 0, "group": "root", "item": {"key": 
"/var/lib/kolla/config_files/nova_consoleauth.json", "value": {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "2a66fc646aae3e5913e0598ccef3881f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 248, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529518.49-280849435058151/source", "state": "file", "uid": 0} >2018-07-13 20:51:59,653 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_updater.json'}) => {"changed": true, "checksum": "4f7a34f38afe301f885e25eb10225c461ab1d0b1", "dest": "/var/lib/kolla/config_files/swift_object_updater.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_updater.json", "value": {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "71a7e788486d505cfec645da0ac337cd", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 198, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529519.07-89723538718360/source", "state": "file", "uid": 0} >2018-07-13 20:52:00,231 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_api.json'}) => {"changed": true, "checksum": "5a73d3b7ef652341120c9298683d3a26f3fb668b", "dest": "/var/lib/kolla/config_files/neutron_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_api.json", "value": {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "c48346aa3f8c096826ebab378db9dfb9", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 549, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529519.66-112033071140291/source", "state": "file", "uid": 0} >2018-07-13 20:52:00,807 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_scheduler.json'}) => {"changed": true, "checksum": "9ec49193a63036ecf32a1479eabdac05dcab06e0", "dest": 
"/var/lib/kolla/config_files/cinder_scheduler.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_scheduler.json", "value": {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "93e9da0d08550be0ed30576cefdfbfbb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 340, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529520.24-217736087432390/source", "state": "file", "uid": 0} >2018-07-13 20:52:01,405 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-metricd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_metricd.json'}) => {"changed": true, "checksum": "c8763a8c16702042afe553b54212340d800e1509", "dest": "/var/lib/kolla/config_files/gnocchi_metricd.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_metricd.json", "value": {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": 
true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "db9bd25aa2fcd2845d442869e986e7d8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 471, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529520.82-76455922045802/source", "state": "file", "uid": 0} >2018-07-13 20:52:01,984 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_metadata_agent.json'}) => {"changed": true, "checksum": "fe01b9d48d08f239bbf9acf7e2a1492397180c8e", "dest": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "value": {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", 
"recurse": true}]}}, "md5sum": "a26f6acfc823d6e2e5b34367b859c8fa", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 617, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529521.41-233929888351836/source", "state": "file", "uid": 0} >2018-07-13 20:52:02,571 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_replicator.json'}) => {"changed": true, "checksum": "a418eddca731078cfd8fe2fda7ee64d9ffaf7dda", "dest": "/var/lib/kolla/config_files/swift_container_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_replicator.json", "value": {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "930bbe0f8c13b55f664fb3a89dfa1613", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 207, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529521.99-64345522321653/source", "state": "file", "uid": 0} >2018-07-13 20:52:03,148 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_engine.json'}) => {"changed": true, "checksum": "fe3989178a2ea434bae6dfd64b04423e3ea005bc", "dest": "/var/lib/kolla/config_files/heat_engine.json", "gid": 0, 
"group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_engine.json", "value": {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "aee05ebc54399dde3dfc3577c3431a92", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 322, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529522.58-234252823883430/source", "state": "file", "uid": 0} >2018-07-13 20:52:03,746 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api.json'}) => {"changed": true, "checksum": "d061b71e9106733354c297cbb7b327a22e476de5", "dest": "/var/lib/kolla/config_files/nova_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "941db485b7079f2f0e008e1bdff8e45f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529523.16-235812282964093/source", "state": "file", "uid": 0} >2018-07-13 20:52:04,325 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}], 'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', 'permissions': [{'owner': u'swift:swift', 'path': u'/var/cache/swift', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/swift_object_server.json'}) => {"changed": true, "checksum": "460cdcfbcfac45a30b03df89ac84d2f34db64d72", "dest": "/var/lib/kolla/config_files/swift_object_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_server.json", "value": {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}}, "md5sum": "b00c233fd2cd32c68e429e42918b8245", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 285, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529523.75-12049862318988/source", "state": "file", "uid": 0} >2018-07-13 20:52:04,918 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'stunnel /etc/stunnel/stunnel.conf'}, 'key': '/var/lib/kolla/config_files/redis_tls_proxy.json'}) => {"changed": true, "checksum": "80800f9f267aaf3497499af70b7945e3b6ae771b", "dest": "/var/lib/kolla/config_files/redis_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/redis_tls_proxy.json", "value": {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "c45d2764863cc585b994d432412ff9e8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 172, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529524.33-42497067810747/source", "state": "file", "uid": 0} >2018-07-13 20:52:05,509 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_api.json'}) => {"changed": true, "checksum": "39f33531116fbcba7a5d9c1cbbc32f4af5e6b981", "dest": "/var/lib/kolla/config_files/gnocchi_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "5e924ffe736d942bf904a791bf5b5af2", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 475, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529524.93-271051888086302/source", "state": "file", "uid": 0} >2018-07-13 20:52:06,081 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': 
u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api.json'}) => {"changed": true, "checksum": "7f36445e4c6eb403ce919ca3adee771d4cb3bcce", "dest": "/var/lib/kolla/config_files/cinder_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "bb3e2e5741eb3e5b6c53da835e66d00d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 256, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529525.52-114913496986387/source", "state": "file", "uid": 0} >2018-07-13 20:52:06,680 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_volume.json'}) => {"changed": true, "checksum": "e800a0e1c86f8fa7a41efbf24ce38f48a458ba51", "dest": "/var/lib/kolla/config_files/cinder_volume.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_volume.json", "value": {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", 
"merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "a85ec43ba623807ac022c04663fa68f5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 579, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529526.09-100250007708237/source", "state": "file", "uid": 0} >2018-07-13 20:52:07,255 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'panko:panko', 'path': u'/var/log/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/panko_api.json'}) => {"changed": true, "checksum": "2db8f01174b9c2aa3a180add472b54891aed5cd6", "dest": "/var/lib/kolla/config_files/panko_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/panko_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}}, "md5sum": "7d9530934c938a4c96f71797957f7ca8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 253, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529526.69-267585101924762/source", "state": "file", "uid": 0} >2018-07-13 20:52:07,845 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_auditor.json'}) => {"changed": true, "checksum": "fbcdad9219733b81ad969426553906c1a8648897", "dest": "/var/lib/kolla/config_files/swift_object_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_auditor.json", "value": {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "45f7348541b64a76aec07477ea1d7358", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 198, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529527.26-244739813484188/source", "state": "file", "uid": 0} >2018-07-13 20:52:08,426 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_l3_agent.json'}) => {"changed": true, "checksum": "cd233477dc9defd8028ac1a8fe736b8c9fcde9f8", "dest": "/var/lib/kolla/config_files/neutron_l3_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_l3_agent.json", "value": {"command": "/usr/bin/neutron-l3-agent 
--config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "md5sum": "b47a8dc2601f0e1c404b9009d1c99c32", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 634, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529527.85-234109726021994/source", "state": "file", "uid": 0} >2018-07-13 20:52:09,010 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-listener', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_listener.json'}) => {"changed": true, "checksum": "a7135286aba5eb111dc77c913fc1f7dc0977e783", "dest": "/var/lib/kolla/config_files/aodh_listener.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_listener.json", "value": {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "ff2b7ae2bb8061a36a8223f5c34a970b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 244, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529528.43-41167886496934/source", "state": "file", "uid": 0} 
>2018-07-13 20:52:09,597 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-server /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_server.json'}) => {"changed": true, "checksum": "1f5cc060becbca7be3515f39537993b91e109a6d", "dest": "/var/lib/kolla/config_files/swift_container_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_server.json", "value": {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "59a9944c2c3c07fec0293d2efd7d8082", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 203, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529529.02-192593336425792/source", "state": "file", "uid": 0} >2018-07-13 20:52:10,200 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-evaluator', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_evaluator.json'}) => {"changed": true, "checksum": "596ee1b7f45471d04a0bc3d985f82ad722631b98", "dest": "/var/lib/kolla/config_files/aodh_evaluator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_evaluator.json", "value": {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": 
"94c5432632bf2acca69de0063414183b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 245, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529529.61-137646864611789/source", "state": "file", "uid": 0} >2018-07-13 20:52:10,799 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529530.21-247405212626147/source", "state": "file", "uid": 0} >2018-07-13 20:52:11,400 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": true, "checksum": "40f9ceb4dd2fc8e9c51bf5152a0fa8e1d16d9137", "dest": "/var/lib/kolla/config_files/iscsid.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "md5sum": 
"9cd3c2dc0153b127d70141dadfabd12c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 175, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529530.81-234775554755940/source", "state": "file", "uid": 0} >2018-07-13 20:52:11,992 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-statsd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_statsd.json'}) => {"changed": true, "checksum": "1a38774f0fed561a8f1ad8c7f0a976a71a7f7008", "dest": "/var/lib/kolla/config_files/gnocchi_statsd.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_statsd.json", "value": {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "b98425b2f26d4e30448a72685b1f89ad", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 470, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529531.41-205512293395851/source", "state": "file", "uid": 0} >2018-07-13 20:52:12,593 p=5867 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'apache:apache', 'path': u'/var/log/horizon/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/etc/openstack-dashboard/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', 'recurse': False}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', 'recurse': False}]}, 'key': '/var/lib/kolla/config_files/horizon.json'}) => {"changed": true, "checksum": "fc55910103403d0bb92e62e940dbd536aff43f84", "dest": "/var/lib/kolla/config_files/horizon.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/horizon.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}}, "md5sum": "77504b6ea1f544f3c70dbc4115bfc354", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 587, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529532.0-32631045237134/source", "state": "file", "uid": 0} >2018-07-13 20:52:12,668 p=5867 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-07-13 20:52:12,668 p=5867 u=mistral | Friday 13 July 2018 20:52:12 -0400 (0:00:39.654) 0:05:35.856 *********** >2018-07-13 20:52:12,682 p=5867 u=mistral | [WARNING]: Unable to find 
'/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-07-13 20:52:12,705 p=5867 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-07-13 20:52:12,729 p=5867 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-07-13 20:52:12,753 p=5867 u=mistral | TASK [Write docker-puppet-tasks json files] ************************************ >2018-07-13 20:52:12,753 p=5867 u=mistral | Friday 13 July 2018 20:52:12 -0400 (0:00:00.085) 0:05:35.941 *********** >2018-07-13 20:52:13,382 p=5867 u=mistral | changed: [controller-0] => (item={'value': [{'puppet_tags': u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', 'config_volume': u'keystone_init_tasks', 'step_config': u'include ::tripleo::profile::base::keystone', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-07-13.3'}], 'key': u'step_3'}) => {"changed": true, "checksum": "c38b4cf3d0833ea42d55d34421d7c7eb3893e69c", "dest": "/var/lib/docker-puppet/docker-puppet-tasks3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-07-13.3", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]}, "md5sum": "b24700505643d4796d45b0c57b92474a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 397, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529532.81-6434917221564/source", "state": "file", "uid": 0} >2018-07-13 20:52:13,406 p=5867 
u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-07-13 20:52:13,406 p=5867 u=mistral | Friday 13 July 2018 20:52:13 -0400 (0:00:00.652) 0:05:36.594 *********** >2018-07-13 20:52:13,437 p=5867 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:52:13,462 p=5867 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:52:13,477 p=5867 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:52:13,499 p=5867 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-07-13 20:52:13,500 p=5867 u=mistral | Friday 13 July 2018 20:52:13 -0400 (0:00:00.093) 0:05:36.688 *********** >2018-07-13 20:52:14,190 p=5867 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529533.54-19678689901875/source", "state": "file", "uid": 0} >2018-07-13 20:52:14,236 p=5867 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529533.57-30154151489166/source", "state": "file", "uid": 0} >2018-07-13 20:52:14,237 p=5867 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": 
"/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1531529533.6-152620066244198/source", "state": "file", "uid": 0} >2018-07-13 20:52:14,262 p=5867 u=mistral | TASK [Run puppet host configuration for step 1] ******************************** >2018-07-13 20:52:14,262 p=5867 u=mistral | Friday 13 July 2018 20:52:14 -0400 (0:00:00.762) 0:05:37.450 *********** >2018-07-13 20:52:29,412 p=5867 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-07-13 20:52:32,390 p=5867 u=mistral | changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-07-13 20:53:39,712 p=5867 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-07-13 20:53:39,738 p=5867 u=mistral | TASK [Debug output for task which failed: Run puppet host configuration for step 1] *** >2018-07-13 20:53:39,738 p=5867 u=mistral | Friday 13 July 2018 20:53:39 -0400 (0:01:25.476) 0:07:02.926 *********** >2018-07-13 20:53:39,857 p=5867 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 1.78 seconds", > "Notice: 
/Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}c1d92fa159fef3afd721be5f86af886d'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Timezone/File[/etc/localtime]/content: content changed '{md5}e4ca381035a34b7a852184cc0dd89baa' to '{md5}c79354b8dbee09e62bbc3fb544853283'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}7cf491166a96e77d1966b29ed2d7cc4d'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '65536' to 
'500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '65536' to '500000'", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 
dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_osd]/Tripleo::Firewall::Rule[111 ceph_osd]/Firewall[111 ceph_osd ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_osd]/Tripleo::Firewall::Rule[111 ceph_osd]/Firewall[111 ceph_osd ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 6.40 seconds", > "Changes:", > " Total: 92", > "Events:", > " Success: 92", > "Resources:", > " Total: 135", > " Restarted: 3", > " Out of sync: 92", > " Changed: 92", > "Time:", > " Concat file: 0.00", > " Cron: 0.00", > " Anchor: 0.00", > " Schedule: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " Sysctl: 0.05", > " Sysctl runtime: 0.16", > " Package: 0.22", > " File: 0.22", > " Service: 1.24", > " Firewall: 1.44", > " Exec: 1.85", > " Last run: 1531529549", > " Config retrieval: 
2.00", > " Total: 7.22", > " Filebucket: 0.00", > " Concat fragment: 0.00", > "Version:", > " Config: 1531529541", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-07-13 20:53:39,867 p=5867 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.05 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}c1d92fa159fef3afd721be5f86af886d'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Timezone/File[/etc/localtime]/content: content changed '{md5}e4ca381035a34b7a852184cc0dd89baa' to '{md5}c79354b8dbee09e62bbc3fb544853283'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Tuned/Exec[tuned-adm]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}7cf491166a96e77d1966b29ed2d7cc4d'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo 
interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 nova_migration_target]/Firewall[113 nova_migration_target ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 nova_migration_target]/Firewall[113 nova_migration_target ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 9.22 seconds", > "Changes:", > " Total: 99", > "Events:", > " Success: 99", > "Resources:", > " Total: 141", > " Restarted: 3", > " Out of sync: 99", > " Changed: 99", > "Time:", > " Concat fragment: 0.00", > " Filebucket: 0.00", > " Concat file: 0.00", > " Cron: 0.00", > " Anchor: 0.00", > " Schedule: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " Sysctl: 0.07", > " File: 0.22", > " Package: 0.24", > " Sysctl runtime: 0.25", > " Service: 1.19", > " Total: 10.38", > " Last run: 1531529552", > " Firewall: 2.22", > " Config retrieval: 2.35", > " Exec: 3.82", > "Version:", > " Config: 
1531529540", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-07-13 20:53:39,886 p=5867 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.73 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}c1d92fa159fef3afd721be5f86af886d'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/ensure: created", > "Notice: /Stage[main]/Timezone/File[/etc/localtime]/content: content changed '{md5}e4ca381035a34b7a852184cc0dd89baa' to '{md5}c79354b8dbee09e62bbc3fb544853283'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: 
/Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}7cf491166a96e77d1966b29ed2d7cc4d'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/ensure: created", > "Notice: /Stage[main]/Pacemaker::Service/Service[pcsd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/password: changed password", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/groups: groups changed '' to ['haclient']", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/owner: owner changed 'root' to 'hacluster'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/group: group changed 'root' to 'haclient'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/mode: mode changed '0755' to '0750'", > "Notice: 
/Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/content: content changed '{md5}46b683effc3af148ec6a230228b30661' to '{md5}a63125c1fadf87c8776073b186072e7f'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/mode: mode changed '0400' to '0640'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Service/Service[corosync]/enable: enable changed 'false' to 'true'", > "Notice: /Stage[main]/Pacemaker::Service/Service[pacemaker]/enable: enable changed 'false' to 'true'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: executed successfully", > "Notice: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Pacemaker::Stonith/Pacemaker::Property[Disable STONITH]/Pcmk_property[property--stonith-enabled]/ensure: created", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 
accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 
ceph_mon]/Firewall[110 ceph_mon ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd 
ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron 
vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 75.51 seconds", > "Changes:", > " Total: 169", > "Events:", > " Success: 169", > "Resources:", > " Changed: 165", > " Out of sync: 165", > " Total: 216", > " Restarted: 5", > "Time:", > " Concat file: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " File line: 0.00", > " Package manifest: 0.00", > " Augeas: 0.03", > " User: 0.04", > " Sysctl: 0.06", > " File: 0.12", > " Package: 0.35", > " Sysctl runtime: 0.36", > " Pcmk property: 1.15", > " Firewall: 14.24", > " Last run: 1531529619", > " Service: 2.83", > " Config retrieval: 3.18", > " Exec: 53.23", > " Filebucket: 0.00", > " Total: 75.60", > " Concat fragment: 0.00", > 
"Version:", > " Config: 1531529540", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-07-13 20:53:39,916 p=5867 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 1] ***************** >2018-07-13 20:53:39,917 p=5867 u=mistral | Friday 13 July 2018 20:53:39 -0400 (0:00:00.178) 0:07:03.105 *********** >2018-07-13 20:54:01,781 p=5867 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:54:35,079 p=5867 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:56:21,265 p=5867 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:56:21,287 p=5867 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 1] *** >2018-07-13 20:56:21,288 p=5867 u=mistral | Friday 13 July 2018 20:56:21 -0400 (0:02:41.370) 0:09:44.476 *********** >2018-07-13 20:56:21,397 p=5867 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-07-14 00:53:40,962 INFO: 20519 -- Running docker-puppet", > "2018-07-14 00:53:40,962 DEBUG: 20519 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-07-14 00:53:40,962 DEBUG: 20519 -- config_volume crond", > "2018-07-14 00:53:40,963 DEBUG: 20519 -- puppet_tags ", > "2018-07-14 00:53:40,963 DEBUG: 20519 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-07-14 00:53:40,963 DEBUG: 20519 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", > "2018-07-14 00:53:40,963 DEBUG: 20519 -- volumes []", > "2018-07-14 00:53:40,963 DEBUG: 20519 -- Adding new service", > "2018-07-14 00:53:40,963 INFO: 20519 -- 
Service compilation completed.", > "2018-07-14 00:53:40,964 DEBUG: 20519 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3', []]", > "2018-07-14 00:53:40,964 INFO: 20519 -- Starting multiprocess configuration steps. Using 3 processes.", > "2018-07-14 00:53:40,981 INFO: 20521 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", > "2018-07-14 00:53:40,981 DEBUG: 20521 -- config_volume crond", > "2018-07-14 00:53:40,982 DEBUG: 20521 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-07-14 00:53:40,982 DEBUG: 20521 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-07-14 00:53:40,982 DEBUG: 20521 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", > "2018-07-14 00:53:40,982 DEBUG: 20521 -- volumes []", > "2018-07-14 00:53:40,983 INFO: 20521 -- Removing container: docker-puppet-crond", > "2018-07-14 00:53:41,074 INFO: 20521 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", > "2018-07-14 00:53:54,761 DEBUG: 20521 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... 
", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "d02c3bd49e78: Pulling fs layer", > "475b0168c252: Pulling fs layer", > "98a4cb0b02ef: Pulling fs layer", > "67ba27f668e7: Pulling fs layer", > "67ba27f668e7: Waiting", > "475b0168c252: Verifying Checksum", > "475b0168c252: Download complete", > "67ba27f668e7: Verifying Checksum", > "67ba27f668e7: Download complete", > "98a4cb0b02ef: Verifying Checksum", > "98a4cb0b02ef: Download complete", > "d02c3bd49e78: Download complete", > "d02c3bd49e78: Pull complete", > "475b0168c252: Pull complete", > "98a4cb0b02ef: Pull complete", > "67ba27f668e7: Pull complete", > "Digest: sha256:2fd3b666f7247ced06a7fe1bfd5cc9b639c221a94e5e00f16aac56fa8e534d4e", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", > "", > "2018-07-14 00:53:54,764 DEBUG: 20521 -- NET_HOST enabled", > "2018-07-14 00:53:54,764 DEBUG: 20521 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=ceph-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpqMnVXJ:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", > "2018-07-14 00:54:02,189 DEBUG: 20521 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 0.53 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}5281f207697925ddab4d83d74a751eb4'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.04 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Changed: 2", > " Out of sync: 2", > " Skipped: 7", > " Total: 9", > "Time:", > " File: 0.00", > " Cron: 0.01", > " Config retrieval: 0.60", > " Total: 0.61", > " Last run: 1531529641", > "Version:", > " Config: 1531529640", > " Puppet: 4.8.2", > "Gathering files modified after 2018-07-14 00:53:54.978291946 +0000", > "2018-07-14 00:54:02,190 DEBUG: 20521 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=ceph-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > 
"Failed to get D-Bus connection: Operation not permitted", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:53:54.978291946 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/crond --mtime=1970-01-01", > "+ md5sum", > "+ awk '{print $1}'", > "tar: Removing leading `/' from member names", > "+ tar -c -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-07-14 00:54:02,190 INFO: 20521 -- Removing container: docker-puppet-crond", > "2018-07-14 00:54:02,230 DEBUG: 20521 -- docker-puppet-crond", > "2018-07-14 00:54:02,231 INFO: 20521 -- Finished processing puppet configs 
for crond", > "2018-07-14 00:54:02,232 DEBUG: 20519 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-07-14 00:54:02,232 DEBUG: 20519 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-07-14 00:54:02,235 DEBUG: 20519 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-07-14 00:54:02,235 DEBUG: 20519 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-07-14 00:54:02,235 DEBUG: 20519 -- Updating config hash for logrotate_crond, config_volume=crond hash=cb412f198e239484d8de1f437d80aa02" > ] >} >2018-07-13 20:56:21,484 p=5867 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-07-14 00:53:40,950 INFO: 24567 -- Running docker-puppet", > "2018-07-14 00:53:40,950 DEBUG: 24567 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-07-14 00:53:40,951 DEBUG: 24567 -- config_volume ceilometer", > "2018-07-14 00:53:40,951 DEBUG: 24567 -- puppet_tags ceilometer_config", > "2018-07-14 00:53:40,951 DEBUG: 24567 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "", > "2018-07-14 00:53:40,951 DEBUG: 24567 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-07-13.3", > "2018-07-14 00:53:40,951 DEBUG: 24567 -- volumes []", > "2018-07-14 00:53:40,951 DEBUG: 24567 -- Adding new service", > "2018-07-14 00:53:40,952 DEBUG: 24567 -- config_volume neutron", > "2018-07-14 00:53:40,952 DEBUG: 24567 -- puppet_tags neutron_plugin_ml2", > "2018-07-14 00:53:40,952 DEBUG: 24567 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "2018-07-14 00:53:40,952 DEBUG: 24567 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3", > "2018-07-14 00:53:40,952 
DEBUG: 24567 -- volumes []", > "2018-07-14 00:53:40,952 DEBUG: 24567 -- Adding new service", > "2018-07-14 00:53:40,952 DEBUG: 24567 -- puppet_tags neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-07-14 00:53:40,952 DEBUG: 24567 -- manifest include ::tripleo::profile::base::neutron::ovs", > "2018-07-14 00:53:40,953 DEBUG: 24567 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-07-14 00:53:40,953 DEBUG: 24567 -- Existing service, appending puppet tags and manifest", > "2018-07-14 00:53:40,953 DEBUG: 24567 -- config_volume iscsid", > "2018-07-14 00:53:40,953 DEBUG: 24567 -- puppet_tags iscsid_config", > "2018-07-14 00:53:40,953 DEBUG: 24567 -- manifest include ::tripleo::profile::base::iscsid", > "2018-07-14 00:53:40,953 DEBUG: 24567 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-07-13.3", > "2018-07-14 00:53:40,953 DEBUG: 24567 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-07-14 00:53:40,953 DEBUG: 24567 -- Adding new service", > "2018-07-14 00:53:40,953 DEBUG: 24567 -- config_volume nova_libvirt", > "2018-07-14 00:53:40,953 DEBUG: 24567 -- puppet_tags nova_config,nova_paste_api_ini", > "2018-07-14 00:53:40,954 DEBUG: 24567 -- manifest # TODO(emilien): figure how to deal with libvirt profile.", > "# We'll probably treat it like we do with Neutron plugins.", > "# Until then, just include it in the default nova-compute role.", > "include tripleo::profile::base::nova::compute::libvirt", > "include ::tripleo::profile::base::database::mysql::client", > "2018-07-14 00:53:40,954 DEBUG: 24567 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-07-13.3", > "2018-07-14 00:53:40,954 DEBUG: 24567 -- volumes []", > "2018-07-14 00:53:40,954 DEBUG: 24567 -- Adding new service", > "2018-07-14 00:53:40,954 DEBUG: 24567 -- config_volume nova_libvirt", > "2018-07-14 00:53:40,954 DEBUG: 24567 -- puppet_tags libvirtd_config,nova_config,file,libvirt_tls_password", > "2018-07-14 00:53:40,954 
DEBUG: 24567 -- manifest include tripleo::profile::base::nova::libvirt", > "2018-07-14 00:53:40,954 DEBUG: 24567 -- Existing service, appending puppet tags and manifest", > "2018-07-14 00:53:40,954 DEBUG: 24567 -- puppet_tags ", > "2018-07-14 00:53:40,954 DEBUG: 24567 -- manifest include ::tripleo::profile::base::sshd", > "include tripleo::profile::base::nova::migration::target", > "2018-07-14 00:53:40,954 DEBUG: 24567 -- config_volume crond", > "2018-07-14 00:53:40,955 DEBUG: 24567 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-07-14 00:53:40,955 DEBUG: 24567 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", > "2018-07-14 00:53:40,955 DEBUG: 24567 -- volumes []", > "2018-07-14 00:53:40,955 DEBUG: 24567 -- Adding new service", > "2018-07-14 00:53:40,955 INFO: 24567 -- Service compilation completed.", > "2018-07-14 00:53:40,956 DEBUG: 24567 -- - [u'ceilometer', u'file,file_line,concat,augeas,cron,ceilometer_config', u'include ::tripleo::profile::base::ceilometer::agent::polling\\n', u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-07-13.3', []]", > "2018-07-14 00:53:40,956 DEBUG: 24567 -- - [u'nova_libvirt', u'file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password', u\"# TODO(emilien): figure how to deal with libvirt profile.\\n# We'll probably treat it like we do with Neutron plugins.\\n# Until then, just include it in the default nova-compute role.\\ninclude tripleo::profile::base::nova::compute::libvirt\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::libvirt\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::sshd\\ninclude tripleo::profile::base::nova::migration::target\", u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-07-13.3', []]", > "2018-07-14 00:53:40,956 DEBUG: 24567 -- - [u'crond', 'file,file_line,concat,augeas,cron', 
u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3', []]", > "2018-07-14 00:53:40,956 DEBUG: 24567 -- - [u'neutron', u'file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2', u'include ::tripleo::profile::base::neutron::plugins::ml2\\n\\ninclude ::tripleo::profile::base::neutron::ovs\\n', u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3', [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']]", > "2018-07-14 00:53:40,956 DEBUG: 24567 -- - [u'iscsid', u'file,file_line,concat,augeas,cron,iscsid_config', u'include ::tripleo::profile::base::iscsid', u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-07-13.3', [u'/etc/iscsi:/etc/iscsi']]", > "2018-07-14 00:53:40,956 INFO: 24567 -- Starting multiprocess configuration steps. Using 3 processes.", > "2018-07-14 00:53:40,969 INFO: 24568 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-07-13.3", > "2018-07-14 00:53:40,969 INFO: 24569 -- Starting configuration of nova_libvirt using image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-07-13.3", > "2018-07-14 00:53:40,970 DEBUG: 24568 -- config_volume ceilometer", > "2018-07-14 00:53:40,970 DEBUG: 24568 -- puppet_tags file,file_line,concat,augeas,cron,ceilometer_config", > "2018-07-14 00:53:40,970 DEBUG: 24569 -- config_volume nova_libvirt", > "2018-07-14 00:53:40,970 DEBUG: 24568 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "2018-07-14 00:53:40,970 DEBUG: 24569 -- puppet_tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password", > "2018-07-14 00:53:40,970 DEBUG: 24568 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-07-13.3", > "2018-07-14 00:53:40,970 DEBUG: 24569 -- manifest # TODO(emilien): figure how to deal with libvirt profile.", > 
"include tripleo::profile::base::nova::libvirt", > "include ::tripleo::profile::base::sshd", > "2018-07-14 00:53:40,970 DEBUG: 24568 -- volumes []", > "2018-07-14 00:53:40,970 DEBUG: 24569 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-07-13.3", > "2018-07-14 00:53:40,970 DEBUG: 24569 -- volumes []", > "2018-07-14 00:53:40,971 INFO: 24570 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", > "2018-07-14 00:53:40,971 DEBUG: 24570 -- config_volume crond", > "2018-07-14 00:53:40,971 INFO: 24569 -- Removing container: docker-puppet-nova_libvirt", > "2018-07-14 00:53:40,971 DEBUG: 24570 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-07-14 00:53:40,971 DEBUG: 24570 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-07-14 00:53:40,972 DEBUG: 24570 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", > "2018-07-14 00:53:40,972 DEBUG: 24570 -- volumes []", > "2018-07-14 00:53:40,972 INFO: 24570 -- Removing container: docker-puppet-crond", > "2018-07-14 00:53:40,973 INFO: 24568 -- Removing container: docker-puppet-ceilometer", > "2018-07-14 00:53:41,071 INFO: 24569 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-07-13.3", > "2018-07-14 00:53:41,071 INFO: 24570 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", > "2018-07-14 00:53:41,074 INFO: 24568 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-07-13.3", > "2018-07-14 00:53:55,333 DEBUG: 24570 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... 
", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "d02c3bd49e78: Pulling fs layer", > "475b0168c252: Pulling fs layer", > "98a4cb0b02ef: Pulling fs layer", > "67ba27f668e7: Pulling fs layer", > "67ba27f668e7: Waiting", > "475b0168c252: Verifying Checksum", > "475b0168c252: Download complete", > "98a4cb0b02ef: Verifying Checksum", > "98a4cb0b02ef: Download complete", > "d02c3bd49e78: Verifying Checksum", > "d02c3bd49e78: Download complete", > "67ba27f668e7: Verifying Checksum", > "67ba27f668e7: Download complete", > "d02c3bd49e78: Pull complete", > "475b0168c252: Pull complete", > "98a4cb0b02ef: Pull complete", > "67ba27f668e7: Pull complete", > "Digest: sha256:2fd3b666f7247ced06a7fe1bfd5cc9b639c221a94e5e00f16aac56fa8e534d4e", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", > "2018-07-14 00:53:55,337 DEBUG: 24570 -- NET_HOST enabled", > "2018-07-14 00:53:55,337 DEBUG: 24570 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpDkDwo0:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", > "2018-07-14 00:54:00,897 DEBUG: 24568 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-central ... ", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-central", > "1b52dc9b90b4: Pulling fs layer", > "5cdb8407851d: Pulling fs layer", > "2d6d2b1829e0: Pulling fs layer", > "1b52dc9b90b4: Waiting", > "5cdb8407851d: Waiting", > "2d6d2b1829e0: Waiting", > "1b52dc9b90b4: Verifying Checksum", > "1b52dc9b90b4: Download complete", > "5cdb8407851d: Verifying Checksum", > "5cdb8407851d: Download complete", > "2d6d2b1829e0: Verifying Checksum", > "2d6d2b1829e0: Download complete", > "1b52dc9b90b4: Pull complete", > "5cdb8407851d: Pull complete", > "2d6d2b1829e0: Pull complete", > "Digest: sha256:9806c986ccd96861ec0dfb6a2d768a8df3a0d7a03b629c5ca436bea04c217565", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-07-13.3", > "2018-07-14 00:54:00,900 DEBUG: 24568 -- NET_HOST enabled", > "2018-07-14 00:54:00,901 DEBUG: 24568 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-ceilometer --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config --env NAME=ceilometer --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpaiIIip:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-07-13.3", > "2018-07-14 00:54:02,826 DEBUG: 24570 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 0.52 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}5281f207697925ddab4d83d74a751eb4'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.04 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Changed: 2", > " Out of sync: 2", > " Skipped: 7", > " Total: 9", > "Time:", > " File: 0.00", > " Cron: 0.01", > " Config retrieval: 0.64", > " Total: 0.65", > " Last run: 1531529641", > "Version:", > " Config: 1531529641", > " Puppet: 4.8.2", > "Gathering files modified after 2018-07-14 00:53:55.660489002 +0000", > "2018-07-14 00:54:02,827 DEBUG: 24570 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=compute-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize 
--detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "Failed to get D-Bus connection: Operation not permitted", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:53:55.660489002 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/crond --mtime=1970-01-01", > "+ awk '{print $1}'", > "+ md5sum", > "tar: Removing leading `/' from member names", > "+ tar -c -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-07-14 00:54:02,827 
INFO: 24570 -- Removing container: docker-puppet-crond", > "2018-07-14 00:54:02,910 DEBUG: 24570 -- docker-puppet-crond", > "2018-07-14 00:54:02,911 INFO: 24570 -- Finished processing puppet configs for crond", > "2018-07-14 00:54:02,911 INFO: 24570 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3", > "2018-07-14 00:54:02,911 DEBUG: 24570 -- config_volume neutron", > "2018-07-14 00:54:02,911 DEBUG: 24570 -- puppet_tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-07-14 00:54:02,911 DEBUG: 24570 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "include ::tripleo::profile::base::neutron::ovs", > "2018-07-14 00:54:02,911 DEBUG: 24570 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3", > "2018-07-14 00:54:02,911 DEBUG: 24570 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-07-14 00:54:02,912 INFO: 24570 -- Removing container: docker-puppet-neutron", > "2018-07-14 00:54:03,007 INFO: 24570 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3", > "2018-07-14 00:54:09,630 DEBUG: 24568 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.20 seconds", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/filter_project]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/archive_policy]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/resources_definition_file]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/interface]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created", > "Notice: Applied catalog in 0.69 seconds", > " Total: 27", > " Success: 27", > " Total: 139", > " Skipped: 22", > " Out of sync: 27", > " Changed: 27", > " Ceilometer config: 0.57", > " Config retrieval: 1.41", > " Total: 1.98", > " Last run: 1531529648", > " Resources: 0.00", > " Config: 1531529646", > "Gathering files modified after 2018-07-14 00:54:01.205489002 +0000", > "2018-07-14 00:54:09,630 DEBUG: 24568 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,ceilometer_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,ceilometer_config'", > "+ origin_of_time=/var/lib/config-data/ceilometer.origin_of_time", > "+ touch /var/lib/config-data/ceilometer.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,ceilometer_config /etc/config.pp", > "Warning: ModuleLoader: module 'ceilometer' has unresolved dependencies - it will only see those that are 
resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/config.pp\", 35]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer.pp\", 111]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > "Warning: Scope(Class[Ceilometer::Dispatcher::Gnocchi]): The class ceilometer::dispatcher::gnocchi is deprecated. All its", > " options must be set as url parameters in", > " ceilometer::agent::notification::pipeline_publishers. Depending of the used", > " Gnocchi version their might be ignored.", > "Warning: ModuleLoader: module 'oslo' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/ceilometer", > "++ stat -c %y /var/lib/config-data/ceilometer.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:54:01.205489002 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/ceilometer", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/ceilometer", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/ceilometer.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/ceilometer --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/ceilometer --mtime=1970-01-01", > "2018-07-14 00:54:09,630 INFO: 24568 -- Removing container: docker-puppet-ceilometer", > "2018-07-14 00:54:09,685 DEBUG: 24568 -- docker-puppet-ceilometer", > "2018-07-14 00:54:09,685 INFO: 24568 -- Finished processing puppet configs for ceilometer", > "2018-07-14 
00:54:09,686 INFO: 24568 -- Starting configuration of iscsid using image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-07-13.3", > "2018-07-14 00:54:09,686 DEBUG: 24568 -- config_volume iscsid", > "2018-07-14 00:54:09,686 DEBUG: 24568 -- puppet_tags file,file_line,concat,augeas,cron,iscsid_config", > "2018-07-14 00:54:09,686 DEBUG: 24568 -- manifest include ::tripleo::profile::base::iscsid", > "2018-07-14 00:54:09,686 DEBUG: 24568 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-07-13.3", > "2018-07-14 00:54:09,686 DEBUG: 24568 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-07-14 00:54:09,686 INFO: 24568 -- Removing container: docker-puppet-iscsid", > "2018-07-14 00:54:09,785 INFO: 24568 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-07-13.3", > "2018-07-14 00:54:10,446 DEBUG: 24570 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-server ... ", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-server", > "d02c3bd49e78: Already exists", > "475b0168c252: Already exists", > "98a4cb0b02ef: Already exists", > "1b52dc9b90b4: Already exists", > "28e21e52f8ed: Pulling fs layer", > "f5518f3fd279: Pulling fs layer", > "f5518f3fd279: Verifying Checksum", > "f5518f3fd279: Download complete", > "28e21e52f8ed: Verifying Checksum", > "28e21e52f8ed: Download complete", > "28e21e52f8ed: Pull complete", > "f5518f3fd279: Pull complete", > "Digest: sha256:55b94d798a314329ba8115df66256b4d8917ec23b2c18dfe2c5135022a98c7de", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3", > "2018-07-14 00:54:10,449 DEBUG: 24570 -- NET_HOST enabled", > "2018-07-14 00:54:10,449 DEBUG: 24570 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-neutron --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 --env NAME=neutron --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env 
STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpEZZ07x:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3", > "2018-07-14 00:54:10,630 DEBUG: 24568 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-iscsid ... 
", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-iscsid", > "4af463e6498b: Pulling fs layer", > "4af463e6498b: Verifying Checksum", > "4af463e6498b: Download complete", > "4af463e6498b: Pull complete", > "Digest: sha256:dde55bcf49dac3034a5370d8b718c4ce390c0e383e4a790f6b503a9a6c58ea2b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-07-13.3", > "2018-07-14 00:54:10,637 DEBUG: 24568 -- NET_HOST enabled", > "2018-07-14 00:54:10,637 DEBUG: 24568 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-iscsid --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp_DyFgv:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/iscsi:/etc/iscsi --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-07-13.3", > "2018-07-14 00:54:17,209 DEBUG: 24568 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 0.50 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed 
successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created", > " Total: 10", > " Skipped: 8", > " Exec: 0.02", > " Total: 0.66", > " Last run: 1531529656", > " Config: 1531529655", > "Gathering files modified after 2018-07-14 00:54:11.065489002 +0000", > "2018-07-14 00:54:17,209 DEBUG: 24568 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,iscsid_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,iscsid_config'", > "+ origin_of_time=/var/lib/config-data/iscsid.origin_of_time", > "+ touch /var/lib/config-data/iscsid.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,iscsid_config /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/iscsid", > "++ stat -c %y /var/lib/config-data/iscsid.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:54:11.065489002 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/iscsid", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/iscsid", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/iscsid.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/iscsid --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/iscsid --mtime=1970-01-01", > "2018-07-14 00:54:17,209 INFO: 24568 -- Removing container: docker-puppet-iscsid", > "2018-07-14 00:54:17,261 DEBUG: 24568 -- docker-puppet-iscsid", > "2018-07-14 00:54:17,261 INFO: 24568 -- Finished processing puppet configs for iscsid", > "2018-07-14 00:54:17,910 DEBUG: 24569 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-compute ... 
", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-compute", > "896eb5edb180: Pulling fs layer", > "bec2cbd2b911: Pulling fs layer", > "896eb5edb180: Waiting", > "bec2cbd2b911: Waiting", > "896eb5edb180: Verifying Checksum", > "896eb5edb180: Download complete", > "bec2cbd2b911: Verifying Checksum", > "bec2cbd2b911: Download complete", > "896eb5edb180: Pull complete", > "bec2cbd2b911: Pull complete", > "Digest: sha256:a70a5d518561ae929a6e5592669e59ec562a3463db0af3eb5e83767aef2407e6", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-07-13.3", > "2018-07-14 00:54:17,912 DEBUG: 24569 -- NET_HOST enabled", > "2018-07-14 00:54:17,913 DEBUG: 24569 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova_libvirt --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp0DTl9r:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-07-13.3", > "2018-07-14 00:54:20,340 DEBUG: 24570 -- Notice: hiera(): Cannot load backend 
module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.59 seconds", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]/ensure: 
created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/tunnel_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/vxlan_udp_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: 
/Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created", > "Notice: Applied catalog in 0.64 seconds", > " Total: 45", > " Success: 45", > " Total: 174", > " Skipped: 27", > " Out of sync: 45", > " Changed: 45", > " Neutron agent ovs: 0.01", > " Neutron plugin ml2: 0.03", > " Neutron config: 0.48", > " Last run: 1531529659", > " Config retrieval: 2.80", > " Total: 3.33", > "Gathering files modified after 2018-07-14 00:54:10.697489002 +0000", > "2018-07-14 00:54:20,340 DEBUG: 24570 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 ']'", > "+ TAGS='--tags 
file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2'", > "+ origin_of_time=/var/lib/config-data/neutron.origin_of_time", > "+ touch /var/lib/config-data/neutron.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 /etc/config.pp", > "Warning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/neutron/manifests/init.pp\", 486]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/plugins/ml2.pp\", 45]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/config.pp\", 136]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp\", 141]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 207]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/ovs.pp\", 59]", > "+ rsync_srcs+=' /var/www'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/neutron", > "++ stat -c %y /var/lib/config-data/neutron.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:54:10.697489002 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/neutron", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/neutron", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/neutron.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/neutron --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/neutron --mtime=1970-01-01", > "2018-07-14 00:54:20,340 INFO: 24570 -- Removing container: docker-puppet-neutron", > "2018-07-14 00:54:20,379 DEBUG: 24570 -- docker-puppet-neutron", > "2018-07-14 00:54:20,379 INFO: 24570 -- Finished processing puppet configs for neutron", > "2018-07-14 00:54:35,424 DEBUG: 24569 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.64 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{md5}056b96e7e8124e1bc55f77cba4e68ce7' to '{md5}7635d77a9fa0002bdf31714938383ee8'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content 
changed '{md5}09c4fa846e8e27bfa3ab3325900d63ea' to '{md5}2f138c0278e1b666ec77a6d8ba3054a1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{md5}dff145cb4e519333c0096aae8de2e77c' to '{md5}26a7290a313d939b2480b08206946c3c'", > "Notice: /Stage[main]/Nova::Db/Nova_config[api_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Nova_config[placement_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[api/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/image_service]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[cinder/catalog_info]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[os_vif_linux_bridge/use_ipv6]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_api_faults]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created", > "Notice: 
/Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/os_interface]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/heal_instance_info_cache_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/vncserver_proxyclient_address]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/keymap]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created", > "Notice: 
/Stage[main]/Nova::Compute/Nova_config[glance/verify_glance_signatures]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created", > "Notice: 
/Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Libvirtd_config[listen_tls]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Libvirtd_config[listen_tcp]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{md5}8b14de952988f11a27d151c5981b2f1a'", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/vncserver_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created", > "Notice: 
/Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_group]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[auth_unix_ro]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[auth_unix_rw]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_ro_perms]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_rw_perms]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully", > "Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created", > 
"Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}40d961cd3154f0439fcac1a50bd77b96' to '{md5}3cd0eede37c506c8fc9deb3d490657e1'", > "Notice: Applied catalog in 8.56 seconds", > " Total: 104", > " Success: 104", > " Changed: 104", > " Out of sync: 104", > " Total: 317", > " Skipped: 48", > " Concat file: 0.00", > " Concat fragment: 0.00", > " File line: 0.00", > " Exec: 0.01", > " Libvirtd config: 0.02", > " File: 0.04", > " Package: 0.09", > " Augeas: 1.08", > " Total: 11.18", > " Last run: 1531529673", > " Config retrieval: 3.02", > " Nova config: 6.91", > " Config: 1531529662", > "Gathering files modified after 2018-07-14 00:54:18.101489002 +0000", > "2018-07-14 00:54:35,424 DEBUG: 24569 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password'", > "+ origin_of_time=/var/lib/config-data/nova_libvirt.origin_of_time", > "+ touch /var/lib/config-data/nova_libvirt.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password /etc/config.pp", > "ovs-vsctl: unix:/var/run/openvswitch/db.sock: database 
connection failed (No such file or directory)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute.pp\", 59]", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/config.pp\", 37]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 114]", > "Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release", > "Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/db.pp\", 126]:[\"/etc/puppet/modules/nova/manifests/init.pp\", 530]", > "Warning: Scope(Class[Nova]): nova::use_syslog, nova::use_stderr, nova::log_facility, nova::log_dir \\", > "and nova::debug is deprecated and has been moved to nova::logging class, please set them there.", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 540]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute.pp\", 59]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/nova/manifests/placement.pp\", 101]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 138]", > "Warning: Scope(Class[Nova::Placement]): The os_region_name parameter is deprecated and will be removed \\", > "in a future release. Please use region_name instead.", > "Warning: Unknown variable: '::nova::vncproxy::host'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:31:5", > "Warning: Unknown variable: '::nova::vncproxy::vncproxy_protocol'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:36:5", > "Warning: Unknown variable: '::nova::vncproxy::port'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:41:5", > "Warning: Unknown variable: '::nova::vncproxy::vncproxy_path'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:46:5", > "Warning: Unknown variable: '::nova::compute::pci_passthrough'. at /etc/puppet/modules/nova/manifests/compute/pci.pp:19:38", > "Warning: Unknown variable: '::nova::api::default_floating_pool'. at /etc/puppet/modules/nova/manifests/network/neutron.pp:112:38", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/compute/libvirt.pp\", 278]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute/libvirt.pp\", 33]", > " with Stdlib::Compat::Ip_Address. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/migration/target.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/migration/target.pp\", 56]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "Warning: Exec[set libvirt sasl credentials](provider=posix): Cannot understand environment setting \"TLS_PASSWORD=\"", > "+ rsync_srcs+=' /var/lib/nova/.ssh'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/nova/.ssh /var/lib/config-data/nova_libvirt", > "++ stat -c %y /var/lib/config-data/nova_libvirt.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:54:18.101489002 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova_libvirt", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova_libvirt", > "++ find /etc /root /opt /var/spool/cron /var/lib/nova/.ssh -newer /var/lib/config-data/nova_libvirt.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/nova_libvirt --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/nova_libvirt --mtime=1970-01-01", > "2018-07-14 00:54:35,424 INFO: 24569 -- Removing container: docker-puppet-nova_libvirt", > "2018-07-14 00:54:35,463 DEBUG: 24569 -- docker-puppet-nova_libvirt", > "2018-07-14 00:54:35,463 INFO: 24569 -- Finished processing puppet configs for nova_libvirt", > "2018-07-14 00:54:35,463 DEBUG: 24567 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-07-14 00:54:35,464 DEBUG: 24567 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-07-14 00:54:35,465 DEBUG: 24567 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-07-14 00:54:35,466 DEBUG: 24567 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-07-14 00:54:35,466 DEBUG: 24567 -- Updating config hash for neutron_ovs_bridge, config_volume=iscsid 
hash=2b3c0c1bfe1edbdecdd6cc9d7f9d5c01", > "2018-07-14 00:54:35,466 DEBUG: 24567 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-07-14 00:54:35,466 DEBUG: 24567 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-07-14 00:54:35,466 DEBUG: 24567 -- Updating config hash for nova_libvirt, config_volume=iscsid hash=0e5fcc3f05e6f6bf17361465c7f7e6e9", > "2018-07-14 00:54:35,466 DEBUG: 24567 -- Updating config hash for nova_virtlogd, config_volume=iscsid hash=0e5fcc3f05e6f6bf17361465c7f7e6e9", > "2018-07-14 00:54:35,468 DEBUG: 24567 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-07-14 00:54:35,468 DEBUG: 24567 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-07-14 00:54:35,468 DEBUG: 24567 -- Updating config hash for ceilometer_agent_compute, config_volume=iscsid hash=e1af01649bc1d07795ece0d7bb61c8f4", > "2018-07-14 00:54:35,468 DEBUG: 24567 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt/etc", > "2018-07-14 00:54:35,468 DEBUG: 24567 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-07-14 00:54:35,469 DEBUG: 24567 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-07-14 00:54:35,469 DEBUG: 24567 -- Updating config hash for neutron_ovs_agent, config_volume=iscsid hash=2b3c0c1bfe1edbdecdd6cc9d7f9d5c01", > "2018-07-14 00:54:35,469 DEBUG: 24567 -- Looking for 
hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-07-14 00:54:35,469 DEBUG: 24567 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-07-14 00:54:35,469 DEBUG: 24567 -- Updating config hash for nova_migration_target, config_volume=iscsid hash=0e5fcc3f05e6f6bf17361465c7f7e6e9", > "2018-07-14 00:54:35,469 DEBUG: 24567 -- Updating config hash for nova_compute, config_volume=iscsid hash=0e5fcc3f05e6f6bf17361465c7f7e6e9", > "2018-07-14 00:54:35,469 DEBUG: 24567 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-07-14 00:54:35,469 DEBUG: 24567 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-07-14 00:54:35,469 DEBUG: 24567 -- Updating config hash for logrotate_crond, config_volume=iscsid hash=cb412f198e239484d8de1f437d80aa02" > ] >} >2018-07-13 20:56:22,399 p=5867 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-07-14 00:53:40,883 INFO: 9377 -- Running docker-puppet", > "2018-07-14 00:53:40,883 DEBUG: 9377 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-07-14 00:53:40,884 DEBUG: 9377 -- config_volume aodh", > "2018-07-14 00:53:40,884 DEBUG: 9377 -- puppet_tags aodh_api_paste_ini,aodh_config", > "2018-07-14 00:53:40,884 DEBUG: 9377 -- manifest include tripleo::profile::base::aodh::api", > "", > "include ::tripleo::profile::base::database::mysql::client", > "2018-07-14 00:53:40,884 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-07-13.3", > "2018-07-14 00:53:40,884 DEBUG: 9377 -- volumes []", > "2018-07-14 00:53:40,884 DEBUG: 9377 -- Adding 
new service", > "2018-07-14 00:53:40,884 DEBUG: 9377 -- puppet_tags aodh_config", > "2018-07-14 00:53:40,884 DEBUG: 9377 -- manifest include tripleo::profile::base::aodh::evaluator", > "2018-07-14 00:53:40,885 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-07-13.3", > "2018-07-14 00:53:40,885 DEBUG: 9377 -- volumes []", > "2018-07-14 00:53:40,885 DEBUG: 9377 -- Existing service, appending puppet tags and manifest", > "2018-07-14 00:53:40,885 DEBUG: 9377 -- config_volume aodh", > "2018-07-14 00:53:40,885 DEBUG: 9377 -- puppet_tags aodh_config", > "2018-07-14 00:53:40,885 DEBUG: 9377 -- manifest include tripleo::profile::base::aodh::listener", > "2018-07-14 00:53:40,885 DEBUG: 9377 -- manifest include tripleo::profile::base::aodh::notifier", > "2018-07-14 00:53:40,885 DEBUG: 9377 -- config_volume ceilometer", > "2018-07-14 00:53:40,885 DEBUG: 9377 -- puppet_tags ceilometer_config", > "2018-07-14 00:53:40,885 DEBUG: 9377 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "2018-07-14 00:53:40,885 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-07-13.3", > "2018-07-14 00:53:40,886 DEBUG: 9377 -- Adding new service", > "2018-07-14 00:53:40,886 DEBUG: 9377 -- config_volume ceilometer", > "2018-07-14 00:53:40,886 DEBUG: 9377 -- puppet_tags ceilometer_config", > "2018-07-14 00:53:40,886 DEBUG: 9377 -- manifest include ::tripleo::profile::base::ceilometer::agent::notification", > "2018-07-14 00:53:40,886 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-07-13.3", > "2018-07-14 00:53:40,886 DEBUG: 9377 -- volumes []", > "2018-07-14 00:53:40,886 DEBUG: 9377 -- Existing service, appending puppet tags and manifest", > "2018-07-14 00:53:40,886 DEBUG: 9377 -- config_volume cinder", > "2018-07-14 00:53:40,886 DEBUG: 9377 -- puppet_tags cinder_config,file,concat,file_line", > "2018-07-14 00:53:40,886 DEBUG: 9377 -- manifest include 
::tripleo::profile::base::cinder::api", > "2018-07-14 00:53:40,886 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-07-13.3", > "2018-07-14 00:53:40,886 DEBUG: 9377 -- manifest include ::tripleo::profile::base::cinder::backup::ceph", > "2018-07-14 00:53:40,887 DEBUG: 9377 -- manifest include ::tripleo::profile::base::cinder::scheduler", > "2018-07-14 00:53:40,887 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-07-13.3", > "2018-07-14 00:53:40,887 DEBUG: 9377 -- volumes []", > "2018-07-14 00:53:40,887 DEBUG: 9377 -- Existing service, appending puppet tags and manifest", > "2018-07-14 00:53:40,887 DEBUG: 9377 -- config_volume cinder", > "2018-07-14 00:53:40,887 DEBUG: 9377 -- puppet_tags cinder_config,file,concat,file_line", > "2018-07-14 00:53:40,887 DEBUG: 9377 -- manifest include ::tripleo::profile::base::lvm", > "include ::tripleo::profile::base::cinder::volume", > "2018-07-14 00:53:40,887 DEBUG: 9377 -- config_volume clustercheck", > "2018-07-14 00:53:40,887 DEBUG: 9377 -- puppet_tags file", > "2018-07-14 00:53:40,887 DEBUG: 9377 -- manifest include ::tripleo::profile::pacemaker::clustercheck", > "2018-07-14 00:53:40,887 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3", > "2018-07-14 00:53:40,887 DEBUG: 9377 -- Adding new service", > "2018-07-14 00:53:40,887 DEBUG: 9377 -- config_volume glance_api", > "2018-07-14 00:53:40,887 DEBUG: 9377 -- puppet_tags glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config", > "2018-07-14 00:53:40,887 DEBUG: 9377 -- manifest include ::tripleo::profile::base::glance::api", > "2018-07-14 00:53:40,887 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-07-13.3", > "2018-07-14 00:53:40,888 DEBUG: 9377 -- Adding new service", > "2018-07-14 00:53:40,888 DEBUG: 9377 -- config_volume gnocchi", > "2018-07-14 00:53:40,888 DEBUG: 9377 -- puppet_tags gnocchi_api_paste_ini,gnocchi_config", > 
"2018-07-14 00:53:40,888 DEBUG: 9377 -- manifest include ::tripleo::profile::base::gnocchi::api", > "2018-07-14 00:53:40,888 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-07-13.3", > "2018-07-14 00:53:40,888 DEBUG: 9377 -- volumes []", > "2018-07-14 00:53:40,888 DEBUG: 9377 -- puppet_tags gnocchi_config", > "2018-07-14 00:53:40,888 DEBUG: 9377 -- manifest include ::tripleo::profile::base::gnocchi::metricd", > "2018-07-14 00:53:40,888 DEBUG: 9377 -- Existing service, appending puppet tags and manifest", > "2018-07-14 00:53:40,888 DEBUG: 9377 -- manifest include ::tripleo::profile::base::gnocchi::statsd", > "2018-07-14 00:53:40,888 DEBUG: 9377 -- config_volume haproxy", > "2018-07-14 00:53:40,888 DEBUG: 9377 -- puppet_tags haproxy_config", > "2018-07-14 00:53:40,889 DEBUG: 9377 -- manifest exec {'wait-for-settle': command => '/bin/true' }", > "class tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}", > "['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::pacemaker::haproxy_bundle", > "2018-07-14 00:53:40,889 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-07-13.3", > "2018-07-14 00:53:40,889 DEBUG: 9377 -- volumes [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']", > "2018-07-14 00:53:40,889 DEBUG: 9377 -- Adding new service", > "2018-07-14 00:53:40,889 DEBUG: 9377 -- config_volume heat_api", > "2018-07-14 00:53:40,889 DEBUG: 9377 -- puppet_tags heat_config,file,concat,file_line", > 
"2018-07-14 00:53:40,889 DEBUG: 9377 -- manifest include ::tripleo::profile::base::heat::api", > "2018-07-14 00:53:40,889 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-07-13.3", > "2018-07-14 00:53:40,889 DEBUG: 9377 -- volumes []", > "2018-07-14 00:53:40,889 DEBUG: 9377 -- config_volume heat_api_cfn", > "2018-07-14 00:53:40,889 DEBUG: 9377 -- manifest include ::tripleo::profile::base::heat::api_cfn", > "2018-07-14 00:53:40,889 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-07-13.3", > "2018-07-14 00:53:40,889 DEBUG: 9377 -- config_volume heat", > "2018-07-14 00:53:40,889 DEBUG: 9377 -- manifest include ::tripleo::profile::base::heat::engine", > "2018-07-14 00:53:40,890 DEBUG: 9377 -- volumes []", > "2018-07-14 00:53:40,890 DEBUG: 9377 -- Adding new service", > "2018-07-14 00:53:40,890 DEBUG: 9377 -- config_volume horizon", > "2018-07-14 00:53:40,890 DEBUG: 9377 -- puppet_tags horizon_config", > "2018-07-14 00:53:40,890 DEBUG: 9377 -- manifest include ::tripleo::profile::base::horizon", > "2018-07-14 00:53:40,890 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-07-13.3", > "2018-07-14 00:53:40,890 DEBUG: 9377 -- config_volume iscsid", > "2018-07-14 00:53:40,890 DEBUG: 9377 -- puppet_tags iscsid_config", > "2018-07-14 00:53:40,890 DEBUG: 9377 -- manifest include ::tripleo::profile::base::iscsid", > "2018-07-14 00:53:40,890 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-07-13.3", > "2018-07-14 00:53:40,890 DEBUG: 9377 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-07-14 00:53:40,890 DEBUG: 9377 -- config_volume keystone", > "2018-07-14 00:53:40,890 DEBUG: 9377 -- puppet_tags keystone_config,keystone_domain_config", > "2018-07-14 00:53:40,890 DEBUG: 9377 -- manifest ['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }", > 
"include ::tripleo::profile::base::keystone", > "2018-07-14 00:53:40,890 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-07-13.3", > "2018-07-14 00:53:40,891 DEBUG: 9377 -- config_volume memcached", > "2018-07-14 00:53:40,891 DEBUG: 9377 -- puppet_tags file", > "2018-07-14 00:53:40,891 DEBUG: 9377 -- manifest include ::tripleo::profile::base::memcached", > "2018-07-14 00:53:40,891 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-07-13.3", > "2018-07-14 00:53:40,891 DEBUG: 9377 -- volumes []", > "2018-07-14 00:53:40,891 DEBUG: 9377 -- Adding new service", > "2018-07-14 00:53:40,891 DEBUG: 9377 -- config_volume mysql", > "2018-07-14 00:53:40,891 DEBUG: 9377 -- manifest ['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }", > "exec {'wait-for-settle': command => '/bin/true' }", > "include ::tripleo::profile::pacemaker::database::mysql_bundle", > "2018-07-14 00:53:40,891 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3", > "2018-07-14 00:53:40,891 DEBUG: 9377 -- config_volume neutron", > "2018-07-14 00:53:40,891 DEBUG: 9377 -- puppet_tags neutron_config,neutron_api_config", > "2018-07-14 00:53:40,891 DEBUG: 9377 -- manifest include tripleo::profile::base::neutron::server", > "2018-07-14 00:53:40,891 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3", > "2018-07-14 00:53:40,891 DEBUG: 9377 -- puppet_tags neutron_plugin_ml2", > "2018-07-14 00:53:40,891 DEBUG: 9377 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "2018-07-14 00:53:40,892 DEBUG: 9377 -- volumes []", > "2018-07-14 00:53:40,892 DEBUG: 9377 -- Existing service, appending puppet tags and manifest", > "2018-07-14 00:53:40,892 DEBUG: 9377 -- config_volume neutron", > "2018-07-14 00:53:40,892 DEBUG: 9377 -- puppet_tags neutron_config,neutron_dhcp_agent_config", > "2018-07-14 
00:53:40,892 DEBUG: 9377 -- manifest include tripleo::profile::base::neutron::dhcp", > "2018-07-14 00:53:40,892 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3", > "2018-07-14 00:53:40,892 DEBUG: 9377 -- puppet_tags neutron_config,neutron_l3_agent_config", > "2018-07-14 00:53:40,892 DEBUG: 9377 -- manifest include tripleo::profile::base::neutron::l3", > "2018-07-14 00:53:40,892 DEBUG: 9377 -- puppet_tags neutron_config,neutron_metadata_agent_config", > "2018-07-14 00:53:40,892 DEBUG: 9377 -- manifest include tripleo::profile::base::neutron::metadata", > "2018-07-14 00:53:40,893 DEBUG: 9377 -- config_volume neutron", > "2018-07-14 00:53:40,893 DEBUG: 9377 -- puppet_tags neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-07-14 00:53:40,893 DEBUG: 9377 -- manifest include ::tripleo::profile::base::neutron::ovs", > "2018-07-14 00:53:40,893 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3", > "2018-07-14 00:53:40,893 DEBUG: 9377 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-07-14 00:53:40,893 DEBUG: 9377 -- Existing service, appending puppet tags and manifest", > "2018-07-14 00:53:40,893 DEBUG: 9377 -- config_volume nova", > "2018-07-14 00:53:40,893 DEBUG: 9377 -- puppet_tags nova_config", > "2018-07-14 00:53:40,893 DEBUG: 9377 -- manifest ['Nova_cell_v2'].each |String $val| { noop_resource($val) }", > "include tripleo::profile::base::nova::api", > "2018-07-14 00:53:40,893 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3", > "2018-07-14 00:53:40,893 DEBUG: 9377 -- volumes []", > "2018-07-14 00:53:40,893 DEBUG: 9377 -- Adding new service", > "2018-07-14 00:53:40,893 DEBUG: 9377 -- manifest include tripleo::profile::base::nova::conductor", > "2018-07-14 00:53:40,895 DEBUG: 9377 -- config_volume nova", > "2018-07-14 00:53:40,895 DEBUG: 9377 -- puppet_tags nova_config", > "2018-07-14 
00:53:40,895 DEBUG: 9377 -- manifest include tripleo::profile::base::nova::consoleauth", > "2018-07-14 00:53:40,895 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3", > "2018-07-14 00:53:40,895 DEBUG: 9377 -- volumes []", > "2018-07-14 00:53:40,895 DEBUG: 9377 -- Existing service, appending puppet tags and manifest", > "2018-07-14 00:53:40,895 DEBUG: 9377 -- config_volume nova_placement", > "2018-07-14 00:53:40,895 DEBUG: 9377 -- manifest include tripleo::profile::base::nova::placement", > "2018-07-14 00:53:40,895 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-07-13.3", > "2018-07-14 00:53:40,895 DEBUG: 9377 -- Adding new service", > "2018-07-14 00:53:40,895 DEBUG: 9377 -- manifest include tripleo::profile::base::nova::scheduler", > "2018-07-14 00:53:40,896 DEBUG: 9377 -- manifest include tripleo::profile::base::nova::vncproxy", > "2018-07-14 00:53:40,896 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3", > "2018-07-14 00:53:40,896 DEBUG: 9377 -- volumes []", > "2018-07-14 00:53:40,896 DEBUG: 9377 -- Existing service, appending puppet tags and manifest", > "2018-07-14 00:53:40,896 DEBUG: 9377 -- config_volume crond", > "2018-07-14 00:53:40,896 DEBUG: 9377 -- puppet_tags ", > "2018-07-14 00:53:40,896 DEBUG: 9377 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-07-14 00:53:40,896 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", > "2018-07-14 00:53:40,896 DEBUG: 9377 -- Adding new service", > "2018-07-14 00:53:40,896 DEBUG: 9377 -- config_volume panko", > "2018-07-14 00:53:40,896 DEBUG: 9377 -- puppet_tags panko_api_paste_ini,panko_config", > "2018-07-14 00:53:40,896 DEBUG: 9377 -- manifest include tripleo::profile::base::panko::api", > "2018-07-14 00:53:40,896 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-07-13.3", > "2018-07-14 00:53:40,896 DEBUG: 9377 -- 
config_volume rabbitmq", > "2018-07-14 00:53:40,896 DEBUG: 9377 -- puppet_tags file", > "2018-07-14 00:53:40,896 DEBUG: 9377 -- manifest ['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::base::rabbitmq", > "2018-07-14 00:53:40,896 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-07-13.3", > "2018-07-14 00:53:40,897 DEBUG: 9377 -- Adding new service", > "2018-07-14 00:53:40,897 DEBUG: 9377 -- config_volume redis", > "2018-07-14 00:53:40,897 DEBUG: 9377 -- puppet_tags exec", > "2018-07-14 00:53:40,897 DEBUG: 9377 -- manifest include ::tripleo::profile::pacemaker::database::redis_bundle", > "2018-07-14 00:53:40,897 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-redis:2018-07-13.3", > "2018-07-14 00:53:40,897 DEBUG: 9377 -- volumes []", > "2018-07-14 00:53:40,897 DEBUG: 9377 -- config_volume sahara", > "2018-07-14 00:53:40,897 DEBUG: 9377 -- puppet_tags sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-07-14 00:53:40,897 DEBUG: 9377 -- manifest include ::tripleo::profile::base::sahara::api", > "2018-07-14 00:53:40,897 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-07-13.3", > "2018-07-14 00:53:40,897 DEBUG: 9377 -- puppet_tags sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-07-14 00:53:40,897 DEBUG: 9377 -- manifest include ::tripleo::profile::base::sahara::engine", > "2018-07-14 00:53:40,897 DEBUG: 9377 -- Existing service, appending puppet tags and manifest", > "2018-07-14 00:53:40,897 DEBUG: 9377 -- config_volume swift", > "2018-07-14 00:53:40,897 DEBUG: 9377 -- puppet_tags swift_config,swift_proxy_config,swift_keymaster_config", > "2018-07-14 00:53:40,898 DEBUG: 9377 -- manifest include ::tripleo::profile::base::swift::proxy", > "2018-07-14 00:53:40,898 DEBUG: 9377 -- config_image 
192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-07-13.3", > "2018-07-14 00:53:40,898 DEBUG: 9377 -- volumes []", > "2018-07-14 00:53:40,898 DEBUG: 9377 -- Adding new service", > "2018-07-14 00:53:40,898 DEBUG: 9377 -- config_volume swift_ringbuilder", > "2018-07-14 00:53:40,898 DEBUG: 9377 -- puppet_tags exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball", > "2018-07-14 00:53:40,898 DEBUG: 9377 -- manifest include ::tripleo::profile::base::swift::ringbuilder", > "2018-07-14 00:53:40,899 DEBUG: 9377 -- config_volume swift", > "2018-07-14 00:53:40,899 DEBUG: 9377 -- puppet_tags swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server", > "2018-07-14 00:53:40,899 DEBUG: 9377 -- manifest include ::tripleo::profile::base::swift::storage", > "class xinetd() {}", > "2018-07-14 00:53:40,899 DEBUG: 9377 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-07-13.3", > "2018-07-14 00:53:40,899 DEBUG: 9377 -- volumes []", > "2018-07-14 00:53:40,899 DEBUG: 9377 -- Existing service, appending puppet tags and manifest", > "2018-07-14 00:53:40,899 INFO: 9377 -- Service compilation completed.", > "2018-07-14 00:53:40,900 DEBUG: 9377 -- - [u'nova_placement', u'file,file_line,concat,augeas,cron,nova_config', u'include tripleo::profile::base::nova::placement\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-07-13.3', []]", > "2018-07-14 00:53:40,900 DEBUG: 9377 -- - [u'aodh', u'file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config', u'include tripleo::profile::base::aodh::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude 
tripleo::profile::base::aodh::evaluator\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::listener\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::notifier\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-07-13.3', []]", > "2018-07-14 00:53:40,900 DEBUG: 9377 -- - [u'heat_api', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::api\\n', u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-07-13.3', []]", > "2018-07-14 00:53:40,900 DEBUG: 9377 -- - [u'swift_ringbuilder', u'file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball', u'include ::tripleo::profile::base::swift::ringbuilder', u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-07-13.3', []]", > "2018-07-14 00:53:40,900 DEBUG: 9377 -- - [u'sahara', u'file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template', u'include ::tripleo::profile::base::sahara::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::sahara::engine\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-07-13.3', []]", > "2018-07-14 00:53:40,900 DEBUG: 9377 -- - [u'mysql', u'file,file_line,concat,augeas,cron,file', u\"['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }\\nexec {'wait-for-settle': command => '/bin/true' }\\ninclude 
::tripleo::profile::pacemaker::database::mysql_bundle\", u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3', []]", > "2018-07-14 00:53:40,900 DEBUG: 9377 -- - [u'gnocchi', u'file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config', u'include ::tripleo::profile::base::gnocchi::api\\n\\ninclude ::tripleo::profile::base::gnocchi::metricd\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::gnocchi::statsd\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-07-13.3', []]", > "2018-07-14 00:53:40,900 DEBUG: 9377 -- - [u'clustercheck', u'file,file_line,concat,augeas,cron,file', u'include ::tripleo::profile::pacemaker::clustercheck', u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3', []]", > "2018-07-14 00:53:40,900 DEBUG: 9377 -- - [u'redis', u'file,file_line,concat,augeas,cron,exec', u'include ::tripleo::profile::pacemaker::database::redis_bundle', u'192.168.24.1:8787/rhosp14/openstack-redis:2018-07-13.3', []]", > "2018-07-14 00:53:40,900 DEBUG: 9377 -- - [u'nova', u'file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config', u\"['Nova_cell_v2'].each |String $val| { noop_resource($val) }\\ninclude tripleo::profile::base::nova::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::conductor\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::consoleauth\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::scheduler\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::vncproxy\\n\\ninclude ::tripleo::profile::base::database::mysql::client\", u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3', []]", > "2018-07-14 00:53:40,900 DEBUG: 9377 -- - 
[u'iscsid', u'file,file_line,concat,augeas,cron,iscsid_config', u'include ::tripleo::profile::base::iscsid', u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-07-13.3', [u'/etc/iscsi:/etc/iscsi']]", > "2018-07-14 00:53:40,900 DEBUG: 9377 -- - [u'glance_api', u'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config', u'include ::tripleo::profile::base::glance::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-07-13.3', []]", > "2018-07-14 00:53:40,900 DEBUG: 9377 -- - [u'keystone', u'file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config', u\"['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::base::keystone\\n\\ninclude ::tripleo::profile::base::database::mysql::client\", u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-07-13.3', []]", > "2018-07-14 00:53:40,900 DEBUG: 9377 -- - [u'memcached', u'file,file_line,concat,augeas,cron,file', u'include ::tripleo::profile::base::memcached\\n', u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-07-13.3', []]", > "2018-07-14 00:53:40,900 DEBUG: 9377 -- - [u'panko', u'file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config', u'include tripleo::profile::base::panko::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-07-13.3', []]", > "2018-07-14 00:53:40,900 DEBUG: 9377 -- - [u'heat', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::engine\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-07-13.3', []]", > "2018-07-14 00:53:40,900 DEBUG: 9377 -- - [u'cinder', 
u'file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line', u'include ::tripleo::profile::base::cinder::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::cinder::backup::ceph\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::cinder::scheduler\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::lvm\\ninclude ::tripleo::profile::base::cinder::volume\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-07-13.3', []]", > "2018-07-14 00:53:40,901 DEBUG: 9377 -- - [u'swift', u'file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server', u'include ::tripleo::profile::base::swift::proxy\\n\\ninclude ::tripleo::profile::base::swift::storage\\n\\nclass xinetd() {}', u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-07-13.3', []]", > "2018-07-14 00:53:40,901 DEBUG: 9377 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3', []]", > "2018-07-14 00:53:40,901 DEBUG: 9377 -- - [u'haproxy', u'file,file_line,concat,augeas,cron,haproxy_config', u\"exec {'wait-for-settle': command => '/bin/true' }\\nclass tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}\\n['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String 
$val| { noop_resource($val) }\\ninclude ::tripleo::profile::pacemaker::haproxy_bundle\", u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-07-13.3', [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']]", > "2018-07-14 00:53:40,901 DEBUG: 9377 -- - [u'ceilometer', u'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', u'include ::tripleo::profile::base::ceilometer::agent::polling\\n\\ninclude ::tripleo::profile::base::ceilometer::agent::notification\\n', u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-07-13.3', []]", > "2018-07-14 00:53:40,901 DEBUG: 9377 -- - [u'rabbitmq', u'file,file_line,concat,augeas,cron,file', u\"['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::base::rabbitmq\\n\", u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-07-13.3', []]", > "2018-07-14 00:53:40,901 DEBUG: 9377 -- - [u'neutron', u'file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2', u'include tripleo::profile::base::neutron::server\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::neutron::plugins::ml2\\n\\ninclude tripleo::profile::base::neutron::dhcp\\n\\ninclude tripleo::profile::base::neutron::l3\\n\\ninclude tripleo::profile::base::neutron::metadata\\n\\ninclude ::tripleo::profile::base::neutron::ovs\\n', u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3', [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']]", > "2018-07-14 00:53:40,901 DEBUG: 9377 -- - [u'horizon', 
u'file,file_line,concat,augeas,cron,horizon_config', u'include ::tripleo::profile::base::horizon\\n', u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-07-13.3', []]", > "2018-07-14 00:53:40,901 DEBUG: 9377 -- - [u'heat_api_cfn', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::api_cfn\\n', u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-07-13.3', []]", > "2018-07-14 00:53:40,901 INFO: 9377 -- Starting multiprocess configuration steps. Using 3 processes.", > "2018-07-14 00:53:40,914 INFO: 9378 -- Starting configuration of nova_placement using image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-07-13.3", > "2018-07-14 00:53:40,914 DEBUG: 9378 -- config_volume nova_placement", > "2018-07-14 00:53:40,915 DEBUG: 9378 -- puppet_tags file,file_line,concat,augeas,cron,nova_config", > "2018-07-14 00:53:40,915 DEBUG: 9378 -- manifest include tripleo::profile::base::nova::placement", > "2018-07-14 00:53:40,915 DEBUG: 9378 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-07-13.3", > "2018-07-14 00:53:40,915 DEBUG: 9378 -- volumes []", > "2018-07-14 00:53:40,915 INFO: 9379 -- Starting configuration of swift_ringbuilder using image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-07-13.3", > "2018-07-14 00:53:40,915 INFO: 9380 -- Starting configuration of gnocchi using image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-07-13.3", > "2018-07-14 00:53:40,916 DEBUG: 9380 -- config_volume gnocchi", > "2018-07-14 00:53:40,916 DEBUG: 9379 -- config_volume swift_ringbuilder", > "2018-07-14 00:53:40,916 INFO: 9378 -- Removing container: docker-puppet-nova_placement", > "2018-07-14 00:53:40,916 DEBUG: 9380 -- puppet_tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config", > "2018-07-14 00:53:40,916 DEBUG: 9380 -- manifest include ::tripleo::profile::base::gnocchi::api", > "include 
::tripleo::profile::base::gnocchi::metricd", > "include ::tripleo::profile::base::gnocchi::statsd", > "2018-07-14 00:53:40,916 DEBUG: 9379 -- puppet_tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball", > "2018-07-14 00:53:40,916 DEBUG: 9380 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-07-13.3", > "2018-07-14 00:53:40,916 DEBUG: 9380 -- volumes []", > "2018-07-14 00:53:40,916 DEBUG: 9379 -- manifest include ::tripleo::profile::base::swift::ringbuilder", > "2018-07-14 00:53:40,916 DEBUG: 9379 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-07-13.3", > "2018-07-14 00:53:40,916 DEBUG: 9379 -- volumes []", > "2018-07-14 00:53:40,918 INFO: 9380 -- Removing container: docker-puppet-gnocchi", > "2018-07-14 00:53:40,918 INFO: 9379 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-07-14 00:53:41,009 INFO: 9380 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-07-13.3", > "2018-07-14 00:53:41,009 INFO: 9379 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-07-13.3", > "2018-07-14 00:53:41,014 INFO: 9378 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-07-13.3", > "2018-07-14 00:54:00,471 DEBUG: 9379 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server ... 
", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server", > "d02c3bd49e78: Pulling fs layer", > "475b0168c252: Pulling fs layer", > "98a4cb0b02ef: Pulling fs layer", > "1b52dc9b90b4: Pulling fs layer", > "c921f28045d7: Pulling fs layer", > "19bf50c45801: Pulling fs layer", > "c921f28045d7: Waiting", > "19bf50c45801: Waiting", > "1b52dc9b90b4: Waiting", > "475b0168c252: Verifying Checksum", > "475b0168c252: Download complete", > "1b52dc9b90b4: Verifying Checksum", > "1b52dc9b90b4: Download complete", > "c921f28045d7: Verifying Checksum", > "c921f28045d7: Download complete", > "19bf50c45801: Verifying Checksum", > "19bf50c45801: Download complete", > "98a4cb0b02ef: Verifying Checksum", > "98a4cb0b02ef: Download complete", > "d02c3bd49e78: Verifying Checksum", > "d02c3bd49e78: Download complete", > "d02c3bd49e78: Pull complete", > "475b0168c252: Pull complete", > "98a4cb0b02ef: Pull complete", > "1b52dc9b90b4: Pull complete", > "c921f28045d7: Pull complete", > "19bf50c45801: Pull complete", > "Digest: sha256:0374e8b9d21c56e32a8c1062b22e22811371401eaea0fa5e39ef2f64bbd16e34", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-07-13.3", > "2018-07-14 00:54:00,475 DEBUG: 9379 -- NET_HOST enabled", > "2018-07-14 00:54:00,475 DEBUG: 9379 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-swift_ringbuilder --env PUPPET_TAGS=file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball --env NAME=swift_ringbuilder --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpDcyz_d:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-07-13.3", > "2018-07-14 00:54:05,765 DEBUG: 9378 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-placement-api ... ", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-placement-api", > "896eb5edb180: Pulling fs layer", > "629702997d30: Pulling fs layer", > "629702997d30: Waiting", > "629702997d30: Verifying Checksum", > "629702997d30: Download complete", > "896eb5edb180: Verifying Checksum", > "896eb5edb180: Download complete", > "896eb5edb180: Pull complete", > "629702997d30: Pull complete", > "Digest: sha256:3638d861b88ff5235a0e73e316b34a951a660604bf99fd8bc75f253cc3d115f5", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-07-13.3", > "2018-07-14 00:54:05,770 DEBUG: 9378 -- NET_HOST enabled", > "2018-07-14 00:54:05,770 DEBUG: 9378 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova_placement --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config --env NAME=nova_placement --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpx4ahIJ:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-07-13.3", > "2018-07-14 00:54:09,035 DEBUG: 9380 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-gnocchi-api ... ", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-gnocchi-api", > "df9846762800: Pulling fs layer", > "7aeb7e4c1dce: Pulling fs layer", > "df9846762800: Waiting", > "7aeb7e4c1dce: Verifying Checksum", > "7aeb7e4c1dce: Download complete", > "df9846762800: Verifying Checksum", > "df9846762800: Download complete", > "df9846762800: Pull complete", > "7aeb7e4c1dce: Pull complete", > "Digest: sha256:36e48e859a0a40d821a1820e9804f818d93caaad7851ebf6e198c1c57a902da2", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-07-13.3", > "2018-07-14 00:54:09,039 DEBUG: 9380 -- NET_HOST enabled", > "2018-07-14 00:54:09,039 DEBUG: 9380 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-gnocchi --env PUPPET_TAGS=file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config --env NAME=gnocchi --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpsFB3js:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-07-13.3", > "2018-07-14 00:54:15,361 DEBUG: 9379 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.24 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[fetch_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[extract_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[extract_swift_ring_tarball]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Swift/File[/var/lib/swift]/group: group changed 'root' to 'swift'", > "Notice: /Stage[main]/Swift/File[/etc/swift/swift.conf]/owner: owner changed 'root' to 'swift'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[object]/Exec[create_object]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[account]/Exec[create_account]/returns: 
executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[container]/Exec[create_container]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.18:%PORT%/d1]/Ring_object_device[172.17.4.18:6000/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.18:%PORT%/d1]/Ring_container_device[172.17.4.18:6001/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.18:%PORT%/d1]/Ring_account_device[172.17.4.18:6002/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[object]/Exec[rebalance_object]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[account]/Exec[rebalance_account]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[container]/Exec[rebalance_container]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[create_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[create_swift_ring_tarball]: Triggered 'refresh' from 3 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[upload_swift_ring_tarball]: Triggered 'refresh' from 2 events", > "Notice: Applied catalog in 4.82 seconds", > "Changes:", > " Total: 11", > "Events:", > " Success: 11", > "Resources:", > " Changed: 11", > " Out of sync: 11", > " Skipped: 19", > " Total: 36", > " Restarted: 6", > "Time:", > " File: 0.00", > " Ring account device: 0.56", > " Ring object device: 0.59", > " Ring 
container device: 0.59", > " Config retrieval: 1.38", > " Exec: 1.54", > " Last run: 1531529654", > " Total: 4.66", > "Version:", > " Config: 1531529648", > " Puppet: 4.8.2", > "Gathering files modified after 2018-07-14 00:54:00.771840088 +0000", > "2018-07-14 00:54:15,362 DEBUG: 9379 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball'", > "+ origin_of_time=/var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ touch /var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=controller-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball /etc/config.pp", > "Failed to get D-Bus connection: Operation not permitted", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute 
'/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "Warning: ModuleLoader: module 'swift' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/ringbuilder.pp\", 113]:[\"/etc/config.pp\", 2]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/ringbuilder/create.pp\", 44]:", > "Warning: Unexpected line: Ring file /etc/swift/object.ring.gz not found, probably it hasn't been written yet", > "Warning: Unexpected line: Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta", > "Warning: Unexpected line: There are no devices in this ring, or all devices have been deleted", > "Warning: Unexpected line: Ring file /etc/swift/container.ring.gz not found, probably it hasn't been written yet", > "Warning: Unexpected line: Ring file /etc/swift/account.ring.gz not found, probably it hasn't been written yet", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d 
/var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ rsync_srcs+=' /var/www'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/swift_ringbuilder", > "++ stat -c %y /var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:54:00.771840088 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/swift_ringbuilder", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/swift_ringbuilder", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/swift_ringbuilder.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/swift_ringbuilder --mtime=1970-01-01", > "+ md5sum", > "+ awk '{print $1}'", > "tar: Removing leading `/' from member names", > "+ tar -c -f - /var/lib/config-data/puppet-generated/swift_ringbuilder --mtime=1970-01-01", > "2018-07-14 00:54:15,362 INFO: 9379 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-07-14 00:54:15,424 DEBUG: 9379 -- docker-puppet-swift_ringbuilder", > "2018-07-14 00:54:15,424 INFO: 9379 -- Finished processing puppet configs for swift_ringbuilder", > "2018-07-14 00:54:15,425 INFO: 9379 -- Starting configuration of sahara using image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-07-13.3", > "2018-07-14 00:54:15,426 DEBUG: 9379 -- config_volume sahara", > "2018-07-14 00:54:15,426 DEBUG: 9379 -- puppet_tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-07-14 00:54:15,426 DEBUG: 9379 -- manifest include ::tripleo::profile::base::sahara::api", > "include ::tripleo::profile::base::sahara::engine", > 
"2018-07-14 00:54:15,426 DEBUG: 9379 -- config_image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-07-13.3", > "2018-07-14 00:54:15,426 DEBUG: 9379 -- volumes []", > "2018-07-14 00:54:15,426 INFO: 9379 -- Removing container: docker-puppet-sahara", > "2018-07-14 00:54:15,501 INFO: 9379 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-07-13.3", > "2018-07-14 00:54:18,021 DEBUG: 9379 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-sahara-api ... ", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-sahara-api", > "d02c3bd49e78: Already exists", > "475b0168c252: Already exists", > "98a4cb0b02ef: Already exists", > "1b52dc9b90b4: Already exists", > "752660c85bf7: Pulling fs layer", > "36450e410d02: Pulling fs layer", > "36450e410d02: Verifying Checksum", > "36450e410d02: Download complete", > "752660c85bf7: Download complete", > "752660c85bf7: Pull complete", > "36450e410d02: Pull complete", > "Digest: sha256:e1e8337a0c7c2ccd0d9e85fd8cc93dd1116225f32f44099dccdc28ca2957145a", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-07-13.3", > "2018-07-14 00:54:18,024 DEBUG: 9379 -- NET_HOST enabled", > "2018-07-14 00:54:18,024 DEBUG: 9379 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-sahara --env PUPPET_TAGS=file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template --env NAME=sahara --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpYkikWJ:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-07-13.3", > "2018-07-14 00:54:21,697 DEBUG: 9380 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.32 seconds", > "Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{md5}9da85e58f3bd6c780ce76db603b7f028'", > "Notice: /Stage[main]/Apache::Mod::Mime_magic/File[mime_magic.conf]/ensure: defined content as '{md5}b258529b332429e2ff8344f726a95457'", > "Notice: /Stage[main]/Apache::Mod::Alias/File[alias.conf]/ensure: defined content as '{md5}983e865be85f5e0daaed7433db82995e'", > "Notice: /Stage[main]/Apache::Mod::Autoindex/File[autoindex.conf]/ensure: defined content as '{md5}2421a3c6df32c7e38c2a7a22afdf5728'", > "Notice: /Stage[main]/Apache::Mod::Deflate/File[deflate.conf]/ensure: defined content as '{md5}a045d750d819b1e9dae3fbfb3f20edd5'", > "Notice: /Stage[main]/Apache::Mod::Dir/File[dir.conf]/ensure: defined content as '{md5}c741d8ea840e6eb999d739eed47c69d7'", > "Notice: /Stage[main]/Apache::Mod::Negotiation/File[negotiation.conf]/ensure: defined content as '{md5}47284b5580b986a6ba32580b6ffb9fd7'", > "Notice: /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{md5}c7ede4173da1915b7ec088201f030c28'", > "Notice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as 
'{md5}f58b0483b70b4e73b5f67ff37b8f24a0'", > "Notice: /Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{md5}fa95c477a2085c1f7f17ee5f8eccfb90'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Gnocchi::Db/Gnocchi_config[indexer/url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Gnocchi_config[api/max_limit]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Gnocchi_config[api/auth_mode]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage/Gnocchi_config[storage/coordination_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Incoming::Redis/Gnocchi_config[incoming/driver]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Incoming::Redis/Gnocchi_config[incoming/redis_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/driver]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_username]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_keyring]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_pool]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_conffile]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Metricd/Gnocchi_config[metricd/workers]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Metricd/Gnocchi_config[metricd/metric_processing_delay]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/resource_id]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/archive_policy_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/flush_delay]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Logging/Oslo::Log[gnocchi_config]/Gnocchi_config[DEFAULT/debug]/ensure: created", > "Notice: 
/Stage[main]/Gnocchi::Logging/Oslo::Log[gnocchi_config]/Gnocchi_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Policy/Oslo::Policy[gnocchi_config]/Gnocchi_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Oslo::Middleware[gnocchi_config]/Gnocchi_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as 
'{md5}1eb599159f763831cf9410bec2f26508'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{md5}c6d1bc1fdbcb93bbd2596e4703f4108c' to '{md5}ac42062d69afa9d2671492ce0be87b7b'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{md5}785d35cb285e190d589163b45263ca89'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{md5}26e5d44aae258b3e9d821cbbbd3e2826'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{md5}0e8468ecc1265f8947b8725f4d1be9c0'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{md5}d1045f54d2798499ca0f030ca0eef920'", > "Notice: /Stage[main]/Apache::Mod::Actions/Apache::Mod[actions]/File[actions.load]/ensure: defined content as '{md5}599866dfaf734f60f7e2d41ee8235515'", > "Notice: /Stage[main]/Apache::Mod::Authn_core/Apache::Mod[authn_core]/File[authn_core.load]/ensure: defined content as '{md5}704d6e8b02b0eca0eba4083960d16c52'", > "Notice: /Stage[main]/Apache::Mod::Cache/Apache::Mod[cache]/File[cache.load]/ensure: defined content as '{md5}01e4d392225b518a65b0f7d6c4e21d29'", > "Notice: /Stage[main]/Apache::Mod::Ext_filter/Apache::Mod[ext_filter]/File[ext_filter.load]/ensure: defined content as '{md5}76d5e0ac3411a4be57ac33ebe2e52ac8'", > "Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{md5}e36257b9efab01459141d423cae57c7c'", > "Notice: /Stage[main]/Apache::Mod::Mime_magic/Apache::Mod[mime_magic]/File[mime_magic.load]/ensure: defined content as '{md5}cb8670bb2fb352aac7ebf3a85d52094c'", > "Notice: /Stage[main]/Apache::Mod::Rewrite/Apache::Mod[rewrite]/File[rewrite.load]/ensure: defined content as '{md5}26e2683352fc1599f29573ff0d934e79'", > "Notice: 
/Stage[main]/Apache::Mod::Speling/Apache::Mod[speling]/File[speling.load]/ensure: defined content as '{md5}f82e9e6b871a276c324c9eeffcec8a61'", > "Notice: /Stage[main]/Apache::Mod::Suexec/Apache::Mod[suexec]/File[suexec.load]/ensure: defined content as '{md5}c7d5c61c534ba423a79b0ae78ff9be35'", > "Notice: /Stage[main]/Apache::Mod::Version/Apache::Mod[version]/File[version.load]/ensure: defined content as '{md5}1c9243de22ace4dc8266442c48ae0c92'", > "Notice: /Stage[main]/Apache::Mod::Vhost_alias/Apache::Mod[vhost_alias]/File[vhost_alias.load]/ensure: defined content as '{md5}eca907865997d50d5130497665c3f82e'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_digest]/File[auth_digest.load]/ensure: defined content as '{md5}df9e85f8da0b239fe8e698ae7ead4f60'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_anon]/File[authn_anon.load]/ensure: defined content as '{md5}bf57b94b5aec35476fc2a2dc3861f132'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_dbm]/File[authn_dbm.load]/ensure: defined content as '{md5}90ee8f8ef1a017cacadfda4225e10651'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_dbm]/File[authz_dbm.load]/ensure: defined content as '{md5}c1363277984d22f99b70f7dce8753b60'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_owner]/File[authz_owner.load]/ensure: defined content as '{md5}f30a9be1016df87f195449d9e02d1857'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[expires]/File[expires.load]/ensure: defined content as '{md5}f0825bad1e470de86ffabeb86dcc5d95'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[include]/File[include.load]/ensure: defined content as '{md5}88095a914eedc3c2c184dd5d74c3954c'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[logio]/File[logio.load]/ensure: defined content as '{md5}084533c7a44e9129d0e6df952e2472b6'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[substitute]/File[substitute.load]/ensure: defined content as 
'{md5}8077c34a71afcf41c8fc644830935915'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[usertrack]/File[usertrack.load]/ensure: defined content as '{md5}e95fbbf030fabec98b948f8dc217775c'", > "Notice: /Stage[main]/Apache::Mod::Alias/Apache::Mod[alias]/File[alias.load]/ensure: defined content as '{md5}3cf2fa309ccae4c29a4b875d0894cd79'", > "Notice: /Stage[main]/Apache::Mod::Authn_file/Apache::Mod[authn_file]/File[authn_file.load]/ensure: defined content as '{md5}d41656680003d7b890267bb73621c60b'", > "Notice: /Stage[main]/Apache::Mod::Autoindex/Apache::Mod[autoindex]/File[autoindex.load]/ensure: defined content as '{md5}515cdf5b573e961a60d2931d39248648'", > "Notice: /Stage[main]/Apache::Mod::Dav/Apache::Mod[dav]/File[dav.load]/ensure: defined content as '{md5}588e496251838c4840c14b28b5aa7881'", > "Notice: /Stage[main]/Apache::Mod::Dav_fs/File[dav_fs.conf]/ensure: defined content as '{md5}899a57534f3d84efa81887ec93c90c9b'", > "Notice: /Stage[main]/Apache::Mod::Dav_fs/Apache::Mod[dav_fs]/File[dav_fs.load]/ensure: defined content as '{md5}2996277c73b1cd684a9a3111c355e0d3'", > "Notice: /Stage[main]/Apache::Mod::Deflate/Apache::Mod[deflate]/File[deflate.load]/ensure: defined content as '{md5}2d1a1afcae0c70557251829a8586eeaf'", > "Notice: /Stage[main]/Apache::Mod::Dir/Apache::Mod[dir]/File[dir.load]/ensure: defined content as '{md5}1bfb1c2a46d7351fc9eb47c659dee068'", > "Notice: /Stage[main]/Apache::Mod::Negotiation/Apache::Mod[negotiation]/File[negotiation.load]/ensure: defined content as '{md5}d262ee6a5f20d9dd7f87770638dc2ccd'", > "Notice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as '{md5}ec6c99f7cc8e35bdbcf8028f652c9f6d'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_basic]/File[auth_basic.load]/ensure: defined content as '{md5}494bcf4b843f7908675d663d8dc1bdc8'", > "Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as 
'{md5}66a1e2064a140c3e7dca7ac33877700e'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{md5}39942569bff2abdb259f9a347c7246bc'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[access_compat]/File[access_compat.load]/ensure: defined content as '{md5}d5feb88bec4570e2dbc41cce7e0de003'", > "Notice: /Stage[main]/Apache::Mod::Authz_user/Apache::Mod[authz_user]/File[authz_user.load]/ensure: defined content as '{md5}63594303ee808423679b1ea13dd5a784'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_groupfile]/File[authz_groupfile.load]/ensure: defined content as '{md5}ae005a36b3ac8c20af36c434561c8a75'", > "Notice: /Stage[main]/Apache::Mod::Env/Apache::Mod[env]/File[env.load]/ensure: defined content as '{md5}d74184d40d0ee24ba02626a188ee7e1a'", > "Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{md5}157529aafcf03fa491bc924103e4608e'", > "Notice: /Stage[main]/Apache::Mod::Cgi/Apache::Mod[cgi]/File[cgi.load]/ensure: defined content as '{md5}ac20c5c5779b37ab06b480d6485a0881'", > "Notice: /Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{md5}c7726ef20347ef9a06ef68eeaad79765'", > "Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{md5}e282ac9f82fe5538692a4de3616fb695'", > "Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{md5}ab31a6ea611785f74851b578572e4157'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Apache/Systemd::Dropin_file[httpd.conf]/File[/etc/systemd/system/httpd.service.d]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Apache/Systemd::Dropin_file[httpd.conf]/File[/etc/systemd/system/httpd.service.d/httpd.conf]/ensure: defined content as '{md5}c44e90292b030f86c3b82096b68fe9cc'", > "Notice: 
/Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed", > "Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/content: content changed '{md5}9e163ce201541f8aa36fcc1a372ed34d' to '{md5}b6f6f2773db25c777f1db887e7a3f57d'", > "Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{md5}8b3feb3fc2563de439920bb2c52cbd11'", > "Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{md5}e1795e051e7aae1f865fde0d3b86a507'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-lua.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi.conf]/ensure: removed", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/File[/var/www/cgi-bin/gnocchi]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/File[gnocchi_wsgi]/ensure: defined content as '{md5}c03530dd30d25ec70b705e0c2f43df7a'", > "Notice: 
/Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/Apache::Vhost[gnocchi_wsgi]/Concat[10-gnocchi_wsgi.conf]/File[/etc/httpd/conf.d/10-gnocchi_wsgi.conf]/ensure: defined content as '{md5}5f4e9827f4ea458d32aae3eddb85a8c3'", > "Notice: Applied catalog in 1.15 seconds", > " Total: 110", > " Success: 110", > " Changed: 110", > " Out of sync: 110", > " Total: 260", > " Skipped: 43", > " Concat file: 0.00", > " Anchor: 0.00", > " Concat fragment: 0.00", > " Augeas: 0.02", > " Gnocchi config: 0.29", > " File: 0.37", > " Last run: 1531529660", > " Config retrieval: 4.92", > " Total: 5.61", > " Resources: 0.00", > " Config: 1531529654", > "Gathering files modified after 2018-07-14 00:54:09.241804361 +0000", > "2018-07-14 00:54:21,698 DEBUG: 9380 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config'", > "+ origin_of_time=/var/lib/config-data/gnocchi.origin_of_time", > "+ touch /var/lib/config-data/gnocchi.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config /etc/config.pp", > "Warning: ModuleLoader: module 'gnocchi' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/gnocchi/manifests/db.pp\", 26]:[\"/etc/puppet/modules/gnocchi/manifests/init.pp\", 54]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/gnocchi/manifests/config.pp\", 29]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/gnocchi.pp\", 31]", > "Warning: Scope(Class[Gnocchi::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: ModuleLoader: module 'oslo' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'keystone' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'openstacklib' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/gnocchi", > "++ stat -c %y /var/lib/config-data/gnocchi.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:54:09.241804361 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/gnocchi", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/gnocchi", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/gnocchi.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/gnocchi --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/gnocchi --mtime=1970-01-01", > "2018-07-14 00:54:21,698 INFO: 9380 -- Removing container: docker-puppet-gnocchi", > "2018-07-14 00:54:21,744 DEBUG: 9380 -- docker-puppet-gnocchi", > "2018-07-14 00:54:21,745 INFO: 9380 -- Finished processing puppet configs for gnocchi", > "2018-07-14 
00:54:21,745 INFO: 9380 -- Starting configuration of clustercheck using image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3", > "2018-07-14 00:54:21,745 DEBUG: 9380 -- config_volume clustercheck", > "2018-07-14 00:54:21,745 DEBUG: 9380 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-07-14 00:54:21,745 DEBUG: 9380 -- manifest include ::tripleo::profile::pacemaker::clustercheck", > "2018-07-14 00:54:21,745 DEBUG: 9380 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3", > "2018-07-14 00:54:21,745 DEBUG: 9380 -- volumes []", > "2018-07-14 00:54:21,746 INFO: 9380 -- Removing container: docker-puppet-clustercheck", > "2018-07-14 00:54:21,816 INFO: 9380 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3", > "2018-07-14 00:54:25,293 DEBUG: 9378 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.41 seconds", > "Notice: /Stage[main]/Nova::Db/Nova_config[api_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Nova_config[placement_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[api/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/image_service]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[cinder/catalog_info]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[os_vif_linux_bridge/use_ipv6]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_api_faults]/ensure: created", > "Notice: 
/Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/os_interface]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: 
created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/memcached_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/username]/ensure: created", > "Notice: 
/Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}85a5998ac6cd1fd4cab06d9347a16020'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/File[/etc/httpd/conf.d/00-nova-placement-api.conf]/content: content changed '{md5}611e31d39e1635bfabc0aafc51b43d0b' to '{md5}612d455490cfecc4b51db6656ea39240'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/File[placement_wsgi]/ensure: defined content as '{md5}2c992c50344eb1765282cb9fb70126db'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/Apache::Vhost[placement_wsgi]/Concat[10-placement_wsgi.conf]/File[/etc/httpd/conf.d/10-placement_wsgi.conf]/ensure: defined content as '{md5}7046d2fb7a13b6a8338054129a3fc24b'", > "Notice: Applied catalog in 7.48 seconds", > " Total: 132", > " Success: 132", > " Changed: 132", > " Out of sync: 132", > " Total: 371", > " Skipped: 39", > " Package: 0.10", > " Total: 11.85", > " Last run: 1531529663", > " Config retrieval: 5.01", > " Nova config: 6.34", > " Config: 1531529650", > "Gathering files modified after 2018-07-14 00:54:05.987817945 +0000", > "2018-07-14 
00:54:25,293 DEBUG: 9378 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config'", > "+ origin_of_time=/var/lib/config-data/nova_placement.origin_of_time", > "+ touch /var/lib/config-data/nova_placement.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config /etc/config.pp", > "ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/placement.pp\", 62]", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/config.pp\", 37]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 114]", > "Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release", > "Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release", > " with Pattern[]. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/nova/manifests/db.pp\", 126]:[\"/etc/puppet/modules/nova/manifests/init.pp\", 530]", > "Warning: Scope(Class[Nova]): nova::use_syslog, nova::use_stderr, nova::log_facility, nova::log_dir \\", > "and nova::debug is deprecated and has been moved to nova::logging class, please set them there.", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 540]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/placement.pp\", 62]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/placement.pp\", 101]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 138]", > "Warning: Scope(Class[Nova::Placement]): The os_region_name parameter is deprecated and will be removed \\", > "in a future release. Please use region_name instead.", > "Warning: Scope(Class[Nova::Keystone::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/nova_placement", > "++ stat -c %y /var/lib/config-data/nova_placement.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:54:05.987817945 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova_placement", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova_placement", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/nova_placement.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/nova_placement --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/nova_placement --mtime=1970-01-01", > "2018-07-14 00:54:25,293 INFO: 9378 -- Removing container: docker-puppet-nova_placement", > "2018-07-14 00:54:25,348 DEBUG: 9378 -- docker-puppet-nova_placement", > "2018-07-14 00:54:25,348 INFO: 9378 -- Finished processing puppet configs for nova_placement", > "2018-07-14 00:54:25,348 INFO: 9378 -- Starting configuration of aodh using image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-07-13.3", > "2018-07-14 00:54:25,349 DEBUG: 9378 -- config_volume aodh", > "2018-07-14 00:54:25,349 DEBUG: 9378 -- puppet_tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config", > "2018-07-14 00:54:25,349 DEBUG: 9378 -- manifest include tripleo::profile::base::aodh::api", > "include tripleo::profile::base::aodh::evaluator", > "include tripleo::profile::base::aodh::listener", > "include tripleo::profile::base::aodh::notifier", > "2018-07-14 00:54:25,349 DEBUG: 9378 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-07-13.3", > "2018-07-14 00:54:25,349 DEBUG: 9378 -- volumes []", > "2018-07-14 00:54:25,349 INFO: 9378 -- Removing container: docker-puppet-aodh", > "2018-07-14 00:54:25,419 INFO: 
9378 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-07-13.3", > "2018-07-14 00:54:27,498 DEBUG: 9378 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-api ... ", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-api", > "8176fbf1d1e9: Pulling fs layer", > "6751a31c6b67: Pulling fs layer", > "6751a31c6b67: Download complete", > "8176fbf1d1e9: Verifying Checksum", > "8176fbf1d1e9: Download complete", > "8176fbf1d1e9: Pull complete", > "6751a31c6b67: Pull complete", > "Digest: sha256:af74e7a64a42b786ac108c15fa1eb72cd7633556d86d760ac2e002b5371c4ca4", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-07-13.3", > "2018-07-14 00:54:27,502 DEBUG: 9378 -- NET_HOST enabled", > "2018-07-14 00:54:27,502 DEBUG: 9378 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-aodh --env PUPPET_TAGS=file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config --env NAME=aodh --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpGQm2PK:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-07-13.3", > "2018-07-14 00:54:28,489 
DEBUG: 9380 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-mariadb ... ", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-mariadb", > "cb5754443281: Pulling fs layer", > "cb5754443281: Verifying Checksum", > "cb5754443281: Download complete", > "cb5754443281: Pull complete", > "Digest: sha256:4ead6efc298d581fc66bea4dce0dd83f2d1959858e09fc99a4eb2540ec3e6112", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3", > "2018-07-14 00:54:28,492 DEBUG: 9380 -- NET_HOST enabled", > "2018-07-14 00:54:28,492 DEBUG: 9380 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-clustercheck --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=clustercheck --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpBZd1dE:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3", > "2018-07-14 00:54:29,269 DEBUG: 9379 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.14 seconds", > "Notice: 
/Stage[main]/Sahara/Sahara_config[DEFAULT/plugins]/ensure: created", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/port]/ensure: created", > "Notice: /Stage[main]/Sahara::Service::Api/Sahara_config[DEFAULT/api_workers]/ensure: created", > "Notice: /Stage[main]/Sahara::Logging/Oslo::Log[sahara_config]/Sahara_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Sahara::Logging/Oslo::Log[sahara_config]/Sahara_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Sahara::Policy/Oslo::Policy[sahara_config]/Sahara_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/password]/ensure: created", > "Notice: 
/Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Default[sahara_config]/Sahara_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Rabbit[sahara_config]/Sahara_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Zmq[sahara_config]/Sahara_config[DEFAULT/rpc_zmq_host]/ensure: created", > "Notice: /Stage[main]/Sahara::Notify/Oslo::Messaging::Notifications[sahara_config]/Sahara_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Sahara::Notify/Oslo::Messaging::Notifications[sahara_config]/Sahara_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: Applied catalog in 1.54 seconds", > " Total: 25", > " Success: 25", > " Total: 196", > " Skipped: 23", > " Out of sync: 25", > " Changed: 25", > " Package: 0.05", > " Sahara config: 0.95", > " Last run: 1531529667", > " Config retrieval: 2.41", > " Total: 3.44", > " Config: 1531529664", > "Gathering files modified after 2018-07-14 00:54:18.226767730 +0000", > "2018-07-14 00:54:29,269 DEBUG: 9379 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template ']'", > "+ TAGS='--tags 
file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template'", > "+ origin_of_time=/var/lib/config-data/sahara.origin_of_time", > "+ touch /var/lib/config-data/sahara.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template /etc/config.pp", > "Warning: ModuleLoader: module 'sahara' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/sahara/manifests/db.pp\", 69]:[\"/etc/puppet/modules/sahara/manifests/init.pp\", 380]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/sahara/manifests/policy.pp\", 34]:[\"/etc/puppet/modules/sahara/manifests/init.pp\", 381]", > "Warning: Scope(Class[Sahara]): The use_neutron parameter has been deprecated and will be removed in the future release.", > "Warning: Scope(Class[Sahara]): sahara::admin_user, sahara::admin_password, sahara::auth_uri, sahara::identity_uri, sahara::admin_tenant_name and sahara::memcached_servers are deprecated. Please use sahara::keystone::authtoken::* parameters instead.", > "Warning: Scope(Class[Sahara::Keystone::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/sahara", > "++ stat -c %y /var/lib/config-data/sahara.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:54:18.226767730 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/sahara", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/sahara", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/sahara.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/sahara --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/sahara --mtime=1970-01-01", > "2018-07-14 00:54:29,269 INFO: 9379 -- Removing container: docker-puppet-sahara", > "2018-07-14 00:54:29,310 DEBUG: 9379 -- docker-puppet-sahara", > "2018-07-14 00:54:29,310 INFO: 9379 -- Finished processing puppet configs for sahara", > "2018-07-14 00:54:29,310 INFO: 9379 -- Starting configuration of mysql using image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3", > "2018-07-14 00:54:29,311 DEBUG: 9379 -- config_volume mysql", > "2018-07-14 00:54:29,311 DEBUG: 9379 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-07-14 00:54:29,311 DEBUG: 9379 -- manifest ['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }", > "2018-07-14 00:54:29,311 DEBUG: 9379 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3", > "2018-07-14 00:54:29,311 DEBUG: 9379 -- volumes []", > "2018-07-14 00:54:29,311 INFO: 9379 -- Removing container: docker-puppet-mysql", > "2018-07-14 00:54:29,362 INFO: 9379 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3", > "2018-07-14 00:54:29,365 DEBUG: 9379 -- NET_HOST enabled", > "2018-07-14 00:54:29,365 DEBUG: 9379 -- Running docker command: /usr/bin/docker run 
--user root --name docker-puppet-mysql --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=mysql --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpNLyVTj:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-07-13.3", > "2018-07-14 00:54:34,747 DEBUG: 9380 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.47 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}af66ac91ee8468347f767986064977ca'", > "Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/content: content changed '{md5}9ff8cc688dd9f0dfc45e5afd25c427a7' to '{md5}7d37008224e71625019cb48768f267e7'", > "Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/mode: mode changed '0600' to '0644'", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/Xinetd::Service[galera-monitor]/File[/etc/xinetd.d/galera-monitor]/ensure: defined content as '{md5}91dfb2fe68ee9c4085726861f8f6c14f'", > "Notice: Applied catalog in 0.04 seconds", > " Total: 4", > " 
Success: 4", > " Total: 13", > " Out of sync: 3", > " Changed: 3", > " Skipped: 9", > " File: 0.02", > " Config retrieval: 0.63", > " Total: 0.66", > " Last run: 1531529674", > " Config: 1531529673", > "Gathering files modified after 2018-07-14 00:54:28.707726592 +0000", > "2018-07-14 00:54:34,747 DEBUG: 9380 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,file ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,file'", > "+ origin_of_time=/var/lib/config-data/clustercheck.origin_of_time", > "+ touch /var/lib/config-data/clustercheck.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,file /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/clustercheck", > "++ stat -c %y /var/lib/config-data/clustercheck.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:54:28.707726592 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/clustercheck", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/clustercheck", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/clustercheck.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/clustercheck --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/clustercheck --mtime=1970-01-01", > "2018-07-14 00:54:34,747 INFO: 9380 -- Removing container: docker-puppet-clustercheck", > "2018-07-14 00:54:34,789 DEBUG: 9380 -- docker-puppet-clustercheck", > "2018-07-14 00:54:34,789 INFO: 9380 -- Finished processing puppet configs for clustercheck", > "2018-07-14 00:54:34,789 INFO: 9380 -- Starting configuration of redis using image 192.168.24.1:8787/rhosp14/openstack-redis:2018-07-13.3", > "2018-07-14 
00:54:34,789 DEBUG: 9380 -- config_volume redis", > "2018-07-14 00:54:34,789 DEBUG: 9380 -- puppet_tags file,file_line,concat,augeas,cron,exec", > "2018-07-14 00:54:34,789 DEBUG: 9380 -- manifest include ::tripleo::profile::pacemaker::database::redis_bundle", > "2018-07-14 00:54:34,789 DEBUG: 9380 -- config_image 192.168.24.1:8787/rhosp14/openstack-redis:2018-07-13.3", > "2018-07-14 00:54:34,790 DEBUG: 9380 -- volumes []", > "2018-07-14 00:54:34,790 INFO: 9380 -- Removing container: docker-puppet-redis", > "2018-07-14 00:54:34,859 INFO: 9380 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-redis:2018-07-13.3", > "2018-07-14 00:54:38,434 DEBUG: 9380 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-redis ... ", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-redis", > "710f1e0bf3af: Pulling fs layer", > "a4694055dcdb: Pulling fs layer", > "710f1e0bf3af: Verifying Checksum", > "710f1e0bf3af: Download complete", > "710f1e0bf3af: Pull complete", > "a4694055dcdb: Verifying Checksum", > "a4694055dcdb: Download complete", > "a4694055dcdb: Pull complete", > "Digest: sha256:2c7acdb41b5f0b390ff4c82c7fb2ebbc49c85f229492c669b75030a4b221e77b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-redis:2018-07-13.3", > "2018-07-14 00:54:38,437 DEBUG: 9380 -- NET_HOST enabled", > "2018-07-14 00:54:38,438 DEBUG: 9380 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-redis --env PUPPET_TAGS=file,file_line,concat,augeas,cron,exec --env NAME=redis --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp02mBAo:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-redis:2018-07-13.3", > "2018-07-14 00:54:40,740 DEBUG: 9379 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.79 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/ensure: defined content as '{md5}7f4c505cc1321ee86f302ced958251ed'", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}b8460ad9db3e16641b57956f4246a153'", > "Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: content changed '{md5}af90358207ccfecae7af249d5ef7dd3e' to '{md5}022df3dfa64a3abc56c52c22d58de027'", > "Notice: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]/ensure: created", > "Notice: Applied catalog in 0.37 seconds", > " Skipped: 225", > " Total: 230", > " Out of sync: 4", > " Changed: 4", > " File: 0.03", > " Last run: 1531529679", > " Config retrieval: 5.18", > " Total: 5.20", > " Config: 1531529674", > "Gathering files modified after 2018-07-14 00:54:29.551723352 +0000", > "2018-07-14 00:54:40,740 DEBUG: 9379 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/mysql.origin_of_time", > "+ touch /var/lib/config-data/mysql.origin_of_time", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array 
instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/mysql_bundle.pp\", 133]:[\"/etc/config.pp\", 4]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 103]:[\"/etc/config.pp\", 4]", > "Warning: ModuleLoader: module 'aodh' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/aodh/manifests/db/mysql.pp\", 57]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 175]", > "Warning: ModuleLoader: module 'cinder' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'glance' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'heat' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'panko' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/openstacklib/manifests/db/mysql/host_access.pp\", 43]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/mysql", > "++ stat -c %y /var/lib/config-data/mysql.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:54:29.551723352 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/mysql", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/mysql", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/mysql.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/mysql --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/mysql --mtime=1970-01-01", > "2018-07-14 00:54:40,740 INFO: 9379 -- Removing container: docker-puppet-mysql", > "2018-07-14 00:54:40,781 DEBUG: 9379 -- docker-puppet-mysql", > "2018-07-14 00:54:40,781 INFO: 9379 -- Finished processing puppet configs for mysql", > "2018-07-14 00:54:40,782 INFO: 9379 -- Starting configuration of nova using image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3", > "2018-07-14 00:54:40,782 DEBUG: 9379 -- config_volume nova", > "2018-07-14 00:54:40,782 DEBUG: 9379 -- puppet_tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config", > "2018-07-14 00:54:40,782 DEBUG: 9379 -- manifest ['Nova_cell_v2'].each |String $val| { noop_resource($val) }", > "include tripleo::profile::base::nova::conductor", > "include tripleo::profile::base::nova::consoleauth", > "include tripleo::profile::base::nova::scheduler", > "include tripleo::profile::base::nova::vncproxy", > "2018-07-14 00:54:40,782 DEBUG: 9379 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3", > "2018-07-14 00:54:40,782 DEBUG: 9379 -- volumes []", > "2018-07-14 00:54:40,782 INFO: 9379 -- Removing container: docker-puppet-nova", > "2018-07-14 00:54:40,849 INFO: 9379 
-- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3", > "2018-07-14 00:54:41,517 DEBUG: 9378 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.39 seconds", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Aodh_config[api/gnocchi_external_project_owner]/ensure: created", > "Notice: /Stage[main]/Aodh::Evaluator/Aodh_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Db/Oslo::Db[aodh_config]/Aodh_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Aodh::Logging/Oslo::Log[aodh_config]/Aodh_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Aodh::Logging/Oslo::Log[aodh_config]/Aodh_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Rabbit[aodh_config]/Aodh_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Default[aodh_config]/Aodh_config[DEFAULT/transport_url]/ensure: created", > "Notice: 
/Stage[main]/Aodh/Oslo::Messaging::Notifications[aodh_config]/Aodh_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Notifications[aodh_config]/Aodh_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Policy/Oslo::Policy[aodh_config]/Aodh_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Oslo::Middleware[aodh_config]/Aodh_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: 
/Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}bb6c65313c5b44190f83263f91fbf058'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[/var/www/cgi-bin/aodh]/owner: owner changed 'root' to 'aodh'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[/var/www/cgi-bin/aodh]/group: group changed 'root' to 'aodh'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[aodh_wsgi]/ensure: defined content as '{md5}09d823939c45501c11f2096289fe70cf'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/Apache::Vhost[aodh_wsgi]/Concat[10-aodh_wsgi.conf]/File[/etc/httpd/conf.d/10-aodh_wsgi.conf]/ensure: defined content as '{md5}7fe6680572fdff9302846f2bc7b753e0'", > "Notice: Applied catalog in 1.84 seconds", > " Changed: 109", > " Out of sync: 109", > " Total: 328", > " Skipped: 40", > " File: 0.36", > " Aodh config: 0.77", > " Config retrieval: 5.02", > " Total: 6.22", > "Gathering files modified after 2018-07-14 00:54:27.708730441 +0000", > "2018-07-14 00:54:41,517 DEBUG: 9378 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config'", > "+ origin_of_time=/var/lib/config-data/aodh.origin_of_time", > "+ touch /var/lib/config-data/aodh.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config /etc/config.pp", > "Warning: Unknown variable: 'undef'. at /etc/puppet/modules/aodh/manifests/init.pp:290:41", > " with Stdlib::Compat::Hash. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/aodh/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/aodh.pp\", 123]", > "Warning: Scope(Class[Aodh::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Scope(Class[Aodh::Api]): host has no effect as of Newton and will be removed in a future \\", > "release. aodh::wsgi::apache supports setting a host via bind_host.", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/oslo/manifests/db.pp\", 132]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/aodh", > "++ stat -c %y /var/lib/config-data/aodh.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:54:27.708730441 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/aodh", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/aodh", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/aodh.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/aodh --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/aodh --mtime=1970-01-01", > "2018-07-14 00:54:41,517 INFO: 9378 -- Removing container: docker-puppet-aodh", > "2018-07-14 00:54:41,559 DEBUG: 9378 -- docker-puppet-aodh", > "2018-07-14 00:54:41,559 INFO: 9378 -- Finished processing puppet configs for aodh", > "2018-07-14 00:54:41,559 INFO: 9378 -- Starting configuration of heat_api using image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-07-13.3", > "2018-07-14 00:54:41,559 DEBUG: 9378 -- config_volume heat_api", > "2018-07-14 00:54:41,559 DEBUG: 9378 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-07-14 00:54:41,559 DEBUG: 9378 
-- manifest include ::tripleo::profile::base::heat::api", > "2018-07-14 00:54:41,560 DEBUG: 9378 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-07-13.3", > "2018-07-14 00:54:41,560 DEBUG: 9378 -- volumes []", > "2018-07-14 00:54:41,560 INFO: 9378 -- Removing container: docker-puppet-heat_api", > "2018-07-14 00:54:41,625 INFO: 9378 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-07-13.3", > "2018-07-14 00:54:42,185 DEBUG: 9379 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-api ... ", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-api", > "896eb5edb180: Already exists", > "37f5544620d2: Pulling fs layer", > "37f5544620d2: Download complete", > "37f5544620d2: Pull complete", > "Digest: sha256:09a9123092e507934eb8b7c80304856fedd694ac84147f8aceb3d36a2fc6d58f", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3", > "2018-07-14 00:54:42,188 DEBUG: 9379 -- NET_HOST enabled", > "2018-07-14 00:54:42,189 DEBUG: 9379 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config --env NAME=nova --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp6dXqlX:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-07-13.3", > "2018-07-14 00:54:43,856 DEBUG: 9378 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-api ... ", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-api", > "4a562ff27157: Pulling fs layer", > "9310caa1d73b: Pulling fs layer", > "9310caa1d73b: Verifying Checksum", > "9310caa1d73b: Download complete", > "4a562ff27157: Verifying Checksum", > "4a562ff27157: Download complete", > "4a562ff27157: Pull complete", > "9310caa1d73b: Pull complete", > "Digest: sha256:645a2aabde4dedef269c35411fc8b40b21712b4b835237ae64f326fcefca18d4", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-07-13.3", > "2018-07-14 00:54:43,859 DEBUG: 9378 -- NET_HOST enabled", > "2018-07-14 00:54:43,859 DEBUG: 9378 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat_api --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat_api --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpjszgw8:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint 
/var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-07-13.3", > "2018-07-14 00:54:45,928 DEBUG: 9380 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.89 seconds", > "Notice: /Stage[main]/Redis::Config/File[/etc/redis]/ensure: created", > "Notice: /Stage[main]/Redis::Config/File[/var/log/redis]/mode: mode changed '0750' to '0755'", > "Notice: /Stage[main]/Redis::Config/File[/var/lib/redis]/mode: mode changed '0750' to '0755'", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/security/limits.d/redis.conf]/ensure: defined content as '{md5}a2f723773964f5ea42b6c7c5d6b72208'", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]/mode: mode changed '0644' to '0444'", > "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]/ensure: defined content as '{md5}fd03a7216a25230c464ceaaa58ed5af1'", > "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Triggered 'refresh' from 1 events", > "Notice: Applied catalog in 0.06 seconds", > " Total: 6", > " Success: 6", > " Restarted: 1", > " Skipped: 11", > " Total: 21", > " Out of sync: 6", > " Changed: 6", > " Exec: 0.00", > " Augeas: 0.01", > " File: 0.01", > " Config retrieval: 1.05", > " Total: 1.07", > " Last run: 1531529685", > " Config: 1531529684", > "Gathering files modified after 2018-07-14 00:54:38.616689217 +0000", > "2018-07-14 00:54:45,928 DEBUG: 9380 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,exec ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,exec'", > "+ origin_of_time=/var/lib/config-data/redis.origin_of_time", > "+ touch /var/lib/config-data/redis.origin_of_time", > "+ /usr/bin/puppet apply --summarize 
--detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,exec /etc/config.pp", > "Warning: ModuleLoader: module 'redis' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/redis", > "++ stat -c %y /var/lib/config-data/redis.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:54:38.616689217 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/redis", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/redis", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/redis.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/redis --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/redis --mtime=1970-01-01", > "2018-07-14 00:54:45,929 INFO: 9380 -- Removing container: docker-puppet-redis", > "2018-07-14 00:54:45,971 DEBUG: 9380 -- docker-puppet-redis", > "2018-07-14 00:54:45,971 INFO: 9380 -- Finished processing puppet configs for redis", > "2018-07-14 00:54:45,971 INFO: 9380 -- Starting configuration of keystone using image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-07-13.3", > "2018-07-14 00:54:45,971 DEBUG: 9380 -- config_volume keystone", > "2018-07-14 00:54:45,971 DEBUG: 9380 -- puppet_tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config", > "2018-07-14 00:54:45,971 DEBUG: 9380 -- manifest ['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }", > "2018-07-14 00:54:45,972 DEBUG: 9380 -- config_image 
192.168.24.1:8787/rhosp14/openstack-keystone:2018-07-13.3", > "2018-07-14 00:54:45,972 DEBUG: 9380 -- volumes []", > "2018-07-14 00:54:45,972 INFO: 9380 -- Removing container: docker-puppet-keystone", > "2018-07-14 00:54:46,042 INFO: 9380 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-keystone:2018-07-13.3", > "2018-07-14 00:54:48,474 DEBUG: 9380 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-keystone ... ", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-keystone", > "ca4a11eee8b1: Pulling fs layer", > "bc9d6920d4a2: Pulling fs layer", > "bc9d6920d4a2: Download complete", > "ca4a11eee8b1: Verifying Checksum", > "ca4a11eee8b1: Download complete", > "ca4a11eee8b1: Pull complete", > "bc9d6920d4a2: Pull complete", > "Digest: sha256:7908e1d7401404e2ef2b004ce3afc18d0705bddb555a2e775851a9b05dce6c3a", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-keystone:2018-07-13.3", > "2018-07-14 00:54:48,477 DEBUG: 9380 -- NET_HOST enabled", > "2018-07-14 00:54:48,477 DEBUG: 9380 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-keystone --env PUPPET_TAGS=file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config --env NAME=keystone --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpULSsZ0:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-keystone:2018-07-13.3", > "2018-07-14 00:54:56,935 DEBUG: 9378 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.95 seconds", > "Notice: /Stage[main]/Heat::Cron::Purge_deleted/Cron[heat-manage purge_deleted]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin_password]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/auth_type]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/auth_url]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/username]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/password]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[clients_keystone/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[DEFAULT/max_json_body_size]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[ec2authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[yaql/limit_iterators]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[yaql/memory_quota]/ensure: created", > "Notice: /Stage[main]/Heat::Api/Heat_config[heat_api/bind_host]/ensure: created", > "Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/debug]/ensure: 
created", > "Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > 
"Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/rpc_response_timeout]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Middleware[heat_config]/Heat_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/expose_headers]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/max_age]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/allow_headers]/ensure: created", > "Notice: /Stage[main]/Heat::Policy/Oslo::Policy[heat_config]/Heat_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}042afb7d5f7ffd4928ca14f318784b13'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[/var/www/cgi-bin/heat]/ensure: created", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[heat_api_wsgi]/ensure: defined content as '{md5}640891728ce5d46ae40234228561597c'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/Apache::Vhost[heat_api_wsgi]/Concat[10-heat_api_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_wsgi.conf]/ensure: defined 
content as '{md5}eab9a6bdbf51a46536011181f6884ead'", > "Notice: Applied catalog in 2.35 seconds", > " Total: 121", > " Success: 121", > " Changed: 121", > " Out of sync: 121", > " Skipped: 32", > " Total: 335", > " Cron: 0.01", > " File: 0.29", > " Heat config: 1.42", > " Last run: 1531529695", > " Config retrieval: 4.46", > " Total: 6.23", > " Config: 1531529688", > "Gathering files modified after 2018-07-14 00:54:44.069669255 +0000", > "2018-07-14 00:54:56,935 DEBUG: 9378 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,heat_config,file,concat,file_line ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line'", > "+ origin_of_time=/var/lib/config-data/heat_api.origin_of_time", > "+ touch /var/lib/config-data/heat_api.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line /etc/config.pp", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/heat/manifests/db.pp\", 75]:[\"/etc/puppet/modules/heat/manifests/init.pp\", 363]", > "Warning: Scope(Class[Heat::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/heat/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/heat.pp\", 134]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat_api", > "++ stat -c %y /var/lib/config-data/heat_api.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:54:44.069669255 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat_api", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat_api", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat_api.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/heat_api --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/heat_api --mtime=1970-01-01", > "2018-07-14 00:54:56,936 INFO: 9378 -- Removing container: docker-puppet-heat_api", > "2018-07-14 00:54:56,982 DEBUG: 9378 -- docker-puppet-heat_api", > "2018-07-14 00:54:56,982 INFO: 9378 -- Finished processing puppet configs for heat_api", > "2018-07-14 00:54:56,982 INFO: 9378 -- Starting configuration of heat using image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-07-13.3", > "2018-07-14 00:54:56,982 DEBUG: 9378 -- config_volume heat", > "2018-07-14 00:54:56,982 DEBUG: 9378 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-07-14 00:54:56,982 DEBUG: 9378 -- manifest include ::tripleo::profile::base::heat::engine", > "2018-07-14 00:54:56,982 DEBUG: 9378 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-07-13.3", > "2018-07-14 00:54:56,982 DEBUG: 9378 -- volumes []", > "2018-07-14 00:54:56,983 INFO: 9378 -- Removing container: docker-puppet-heat", > "2018-07-14 00:54:57,033 INFO: 9378 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-07-13.3", > "2018-07-14 00:54:57,036 DEBUG: 9378 -- NET_HOST enabled", > 
"2018-07-14 00:54:57,036 DEBUG: 9378 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpm5TmDr:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-07-13.3", > "2018-07-14 00:55:01,218 DEBUG: 9380 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.17 seconds", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_token]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_bind_host]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_bind_host]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_port]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_port]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/driver]/ensure: created", > "Notice: 
/Stage[main]/Keystone/Keystone_config[token/expiration]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[ssl/enable]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[catalog/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[catalog/template_file]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/provider]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/notification_format]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[eventlet_server/admin_workers]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[eventlet_server/public_workers]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/0]/ensure: defined content as '{md5}b9a77f5687f567647b0b6fdd35b6d4bd'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/1]/ensure: defined content as '{md5}721fc19fc11fc9b5bbe797ce234eaba1'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/0]/ensure: defined content as '{md5}a61862e6cd6504d743bcdb7102ad4b8b'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/1]/ensure: defined content as '{md5}b1e15436b90708919aee0bdc457827be'", > "Notice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/key_repository]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/revoke_by_id]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/max_active_keys]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[credential/key_repository]/ensure: created", > "Notice: /Stage[main]/Keystone::Config/Keystone_config[ec2/driver]/ensure: created", > "Notice: /Stage[main]/Keystone::Cron::Token_flush/Cron[keystone-manage token_flush]/ensure: created", > 
"Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Keystone::Policy/Oslo::Policy[keystone_config]/Keystone_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Middleware[keystone_config]/Keystone_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Default[keystone_config]/Keystone_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}5975e0b18b0b0c1bef51d8e0f37c1933'", > "Notice: 
/Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_main]/File[keystone_wsgi_main]/ensure: defined content as '{md5}072422f0d75777ed1783e6910b3ddc58'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_admin]/File[keystone_wsgi_admin]/ensure: defined content as '{md5}d6dda52b0e14d80a652ecf42686d3962'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/auth_mellon.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/auth_openidc.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_gssapi.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_mellon.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_openidc.conf]/ensure: removed", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_main]/Apache::Vhost[keystone_wsgi_main]/Concat[10-keystone_wsgi_main.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_main.conf]/ensure: defined content as '{md5}c4f5614861c12ca357594b92ce6d6662'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_admin]/Apache::Vhost[keystone_wsgi_admin]/Concat[10-keystone_wsgi_admin.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_admin.conf]/ensure: defined content as '{md5}85d3bbd8525cf63474814ffe8175be85'", > "Notice: Applied catalog in 2.37 seconds", > " Total: 126", > " Success: 126", > " Changed: 126", > " Out of sync: 126", > " Total: 324", > " Skipped: 34", > " Package: 0.04", > " File: 0.47", > " Keystone config: 1.19", > " Last run: 1531529699", > " Config retrieval: 4.70", > " Total: 6.44", > " Config: 1531529692", > "Gathering files modified after 2018-07-14 00:54:48.663652767 +0000", > "2018-07-14 00:55:01,218 DEBUG: 9380 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config ']'", > "+ TAGS='--tags 
file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config'", > "+ origin_of_time=/var/lib/config-data/keystone.origin_of_time", > "+ touch /var/lib/config-data/keystone.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/policy.pp\", 34]:[\"/etc/puppet/modules/keystone/manifests/init.pp\", 757]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/init.pp\", 760]:[\"/etc/config.pp\", 3]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/init.pp\", 1108]:[\"/etc/config.pp\", 3]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/keystone", > "++ stat -c %y /var/lib/config-data/keystone.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:54:48.663652767 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/keystone", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/keystone", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/keystone.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/keystone --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/keystone --mtime=1970-01-01", > "2018-07-14 00:55:01,218 INFO: 9380 -- Removing container: docker-puppet-keystone", > "2018-07-14 00:55:01,266 DEBUG: 9380 -- docker-puppet-keystone", > "2018-07-14 
00:55:01,266 INFO: 9380 -- Finished processing puppet configs for keystone", > "2018-07-14 00:55:01,266 INFO: 9380 -- Starting configuration of memcached using image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-07-13.3", > "2018-07-14 00:55:01,266 DEBUG: 9380 -- config_volume memcached", > "2018-07-14 00:55:01,266 DEBUG: 9380 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-07-14 00:55:01,267 DEBUG: 9380 -- manifest include ::tripleo::profile::base::memcached", > "2018-07-14 00:55:01,267 DEBUG: 9380 -- config_image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-07-13.3", > "2018-07-14 00:55:01,267 DEBUG: 9380 -- volumes []", > "2018-07-14 00:55:01,267 INFO: 9380 -- Removing container: docker-puppet-memcached", > "2018-07-14 00:55:01,338 INFO: 9380 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-memcached:2018-07-13.3", > "2018-07-14 00:55:02,772 DEBUG: 9380 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-memcached ... ", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-memcached", > "9d4ebca5d8fc: Pulling fs layer", > "9d4ebca5d8fc: Verifying Checksum", > "9d4ebca5d8fc: Download complete", > "9d4ebca5d8fc: Pull complete", > "Digest: sha256:8a6117f8fff34dc7f305934f5ea7446118e82c94d355d56d430575196262ad0a", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-memcached:2018-07-13.3", > "2018-07-14 00:55:02,775 DEBUG: 9380 -- NET_HOST enabled", > "2018-07-14 00:55:02,775 DEBUG: 9380 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-memcached --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=memcached --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmptAMDZa:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume 
tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-memcached:2018-07-13.3", > "2018-07-14 00:55:06,470 DEBUG: 9379 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 5.11 seconds", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}83682ae8a3a1df91e74526c9d0dd8aae'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[nova_api_wsgi]/ensure: defined content as '{md5}8bcfb466d72544dd31a4f339243ed669'", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/instance_name_template]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[wsgi/api_paste_config]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/enabled_apis]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen_port]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen_port]/ensure: created", > "Notice: 
/Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_volume_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_workers]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_workers]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[api/use_forwarded_for]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[api/fping_path]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[neutron/service_metadata_proxy]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[neutron/metadata_proxy_shared_secret]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created", > "Notice: 
/Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Conductor/Nova_config[conductor/workers]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/driver]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/discover_hosts_in_cells_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[scheduler/max_attempts]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/host_subset_size]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_io_ops_per_host]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_instances_per_host]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/weight_classes]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_host]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_port]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/auth_schemes]/ensure: created", > "Notice: /Stage[main]/Nova::Policy/Oslo::Policy[nova_config]/Nova_config[oslo_policy/policy_file]/ensure: created", > "Notice: 
/Stage[main]/Nova::Api/Oslo::Middleware[nova_config]/Nova_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Nova::Cron::Archive_deleted_rows/Cron[nova-manage db archive_deleted_rows]/ensure: created", > "Notice: /Stage[main]/Nova::Cron::Purge_shadow_tables/Cron[nova-manage db purge]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/Apache::Vhost[nova_api_wsgi]/Concat[10-nova_api_wsgi.conf]/File[/etc/httpd/conf.d/10-nova_api_wsgi.conf]/ensure: defined content as '{md5}941c9134c2718080232e45ab19d4faa2'", > "Notice: Applied catalog in 11.14 seconds", > " Total: 180", > " Success: 180", > " Changed: 180", > " Out of sync: 180", > " Total: 501", > " Skipped: 75", > " Cron: 0.03", > " File: 0.44", > " Total: 15.98", > " Last run: 1531529704", > " Config retrieval: 5.81", > " Nova config: 9.58", > " Config: 1531529687", > "Gathering files modified after 2018-07-14 00:54:42.394675343 +0000", > "2018-07-14 00:55:06,470 DEBUG: 9379 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config'", > "+ origin_of_time=/var/lib/config-data/nova.origin_of_time", > "+ touch /var/lib/config-data/nova.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. 
at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/api.pp\", 92]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 540]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/api.pp\", 92]", > "Warning: Unknown variable: '::nova::api::default_floating_pool'. at /etc/puppet/modules/nova/manifests/network/neutron.pp:112:38", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/nova/manifests/scheduler/filter.pp\", 150]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/scheduler.pp\", 32]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/nova", > "++ stat -c %y /var/lib/config-data/nova.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:54:42.394675343 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/nova.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/nova --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/nova --mtime=1970-01-01", > "2018-07-14 00:55:06,470 INFO: 9379 -- Removing container: docker-puppet-nova", > "2018-07-14 00:55:06,519 DEBUG: 9379 -- docker-puppet-nova", > "2018-07-14 00:55:06,519 INFO: 9379 -- Finished processing puppet configs for nova", > "2018-07-14 00:55:06,519 INFO: 9379 -- Starting configuration of iscsid using image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-07-13.3", > "2018-07-14 
00:55:06,519 DEBUG: 9379 -- config_volume iscsid", > "2018-07-14 00:55:06,520 DEBUG: 9379 -- puppet_tags file,file_line,concat,augeas,cron,iscsid_config", > "2018-07-14 00:55:06,520 DEBUG: 9379 -- manifest include ::tripleo::profile::base::iscsid", > "2018-07-14 00:55:06,520 DEBUG: 9379 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-07-13.3", > "2018-07-14 00:55:06,520 DEBUG: 9379 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-07-14 00:55:06,520 INFO: 9379 -- Removing container: docker-puppet-iscsid", > "2018-07-14 00:55:06,590 INFO: 9379 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-07-13.3", > "2018-07-14 00:55:07,221 DEBUG: 9379 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-iscsid ... ", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-iscsid", > "4af463e6498b: Pulling fs layer", > "4af463e6498b: Verifying Checksum", > "4af463e6498b: Download complete", > "4af463e6498b: Pull complete", > "Digest: sha256:dde55bcf49dac3034a5370d8b718c4ce390c0e383e4a790f6b503a9a6c58ea2b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-07-13.3", > "2018-07-14 00:55:07,224 DEBUG: 9379 -- NET_HOST enabled", > "2018-07-14 00:55:07,224 DEBUG: 9379 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-iscsid --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp4jlaoA:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/iscsi:/etc/iscsi --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-07-13.3", > "2018-07-14 00:55:08,189 DEBUG: 9378 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.18 seconds", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/auth_encryption_key]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_metadata_server_url]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_waitcondition_server_url]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_resources_per_stack]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/num_engine_workers]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/convergence_engine]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/reauthentication_auth_method]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_nested_stack_depth]/ensure: created", > "Notice: Applied catalog in 2.05 seconds", > " Total: 48", > " Success: 48", > " Skipped: 21", > " Total: 223", > " Out of sync: 48", > " Changed: 48", > " Heat config: 1.71", > " Last run: 1531529707", > " Config retrieval: 2.55", > " Total: 4.34", > " Config: 1531529702", > "Gathering files modified after 2018-07-14 00:54:57.223622820 +0000", > "2018-07-14 00:55:08,189 DEBUG: 9378 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/heat.origin_of_time", > "+ touch /var/lib/config-data/heat.origin_of_time", 
> "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat", > "++ stat -c %y /var/lib/config-data/heat.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:54:57.223622820 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/heat --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/heat --mtime=1970-01-01", > "2018-07-14 00:55:08,189 INFO: 9378 -- Removing container: docker-puppet-heat", > "2018-07-14 00:55:08,228 DEBUG: 9378 -- docker-puppet-heat", > "2018-07-14 00:55:08,228 INFO: 9378 -- Finished processing puppet configs for heat", > "2018-07-14 00:55:08,228 INFO: 9378 -- Starting configuration of cinder using image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-07-13.3", > "2018-07-14 00:55:08,228 DEBUG: 9378 -- config_volume cinder", > "2018-07-14 00:55:08,229 DEBUG: 9378 -- puppet_tags file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line", > "2018-07-14 00:55:08,229 DEBUG: 9378 -- manifest include ::tripleo::profile::base::cinder::api", > "include ::tripleo::profile::base::cinder::backup::ceph", > "include ::tripleo::profile::base::cinder::scheduler", > "include ::tripleo::profile::base::lvm", > "2018-07-14 00:55:08,229 DEBUG: 9378 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-07-13.3", > "2018-07-14 00:55:08,229 DEBUG: 9378 -- volumes []", > "2018-07-14 00:55:08,229 INFO: 9378 -- Removing container: docker-puppet-cinder", > "2018-07-14 00:55:08,297 INFO: 9378 -- Pulling image: 
192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-07-13.3", > "2018-07-14 00:55:08,987 DEBUG: 9380 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.64 seconds", > "Notice: /Stage[main]/Memcached/File[/etc/sysconfig/memcached]/content: content changed '{md5}a50ed62e82d31fb4cb2de2226650c545' to '{md5}509b14e05e7a9e88e2c5c41c6fb4591f'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Memcached/Systemd::Dropin_file[memcached.conf]/File[/etc/systemd/system/memcached.service.d]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Memcached/Systemd::Dropin_file[memcached.conf]/File[/etc/systemd/system/memcached.service.d/memcached.conf]/ensure: defined content as '{md5}c44e90292b030f86c3b82096b68fe9cc'", > "Notice: Applied catalog in 0.08 seconds", > " Total: 3", > " Success: 3", > " Skipped: 10", > " Config retrieval: 0.76", > " Total: 0.78", > " Last run: 1531529708", > " Config: 1531529707", > "Gathering files modified after 2018-07-14 00:55:02.957603310 +0000", > "2018-07-14 00:55:08,987 DEBUG: 9380 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/memcached.origin_of_time", > "+ touch /var/lib/config-data/memcached.origin_of_time", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/memcached", > "++ stat -c %y /var/lib/config-data/memcached.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:55:02.957603310 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/memcached", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/memcached", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/memcached.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/memcached --mtime=1970-01-01", > "+ tar -c 
-f - /var/lib/config-data/puppet-generated/memcached --mtime=1970-01-01", > "2018-07-14 00:55:08,987 INFO: 9380 -- Removing container: docker-puppet-memcached", > "2018-07-14 00:55:09,028 DEBUG: 9380 -- docker-puppet-memcached", > "2018-07-14 00:55:09,029 INFO: 9380 -- Finished processing puppet configs for memcached", > "2018-07-14 00:55:09,029 INFO: 9380 -- Starting configuration of panko using image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-07-13.3", > "2018-07-14 00:55:09,029 DEBUG: 9380 -- config_volume panko", > "2018-07-14 00:55:09,029 DEBUG: 9380 -- puppet_tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config", > "2018-07-14 00:55:09,029 DEBUG: 9380 -- manifest include tripleo::profile::base::panko::api", > "2018-07-14 00:55:09,029 DEBUG: 9380 -- config_image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-07-13.3", > "2018-07-14 00:55:09,029 DEBUG: 9380 -- volumes []", > "2018-07-14 00:55:09,030 INFO: 9380 -- Removing container: docker-puppet-panko", > "2018-07-14 00:55:09,093 INFO: 9380 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-07-13.3", > "2018-07-14 00:55:11,394 DEBUG: 9380 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-panko-api ... 
", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-panko-api", > "ca3b5dfc8b92: Pulling fs layer", > "7e415afb30ef: Pulling fs layer", > "7e415afb30ef: Download complete", > "ca3b5dfc8b92: Verifying Checksum", > "ca3b5dfc8b92: Download complete", > "ca3b5dfc8b92: Pull complete", > "7e415afb30ef: Pull complete", > "Digest: sha256:7c271ef22d2febc78fa197f215f07ecb5c8dae74a2858093296a666488943e43", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-07-13.3", > "2018-07-14 00:55:11,398 DEBUG: 9380 -- NET_HOST enabled", > "2018-07-14 00:55:11,398 DEBUG: 9380 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-panko --env PUPPET_TAGS=file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config --env NAME=panko --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp0EhMl9:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-07-13.3", > "2018-07-14 00:55:13,264 DEBUG: 9379 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.38 seconds", > 
"Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created", > "Notice: Applied catalog in 0.03 seconds", > " Total: 2", > " Success: 2", > " Total: 10", > " Out of sync: 2", > " Changed: 2", > " Skipped: 8", > " Exec: 0.02", > " Config retrieval: 0.51", > " Total: 0.53", > " Last run: 1531529712", > " Config: 1531529712", > "Gathering files modified after 2018-07-14 00:55:07.413588450 +0000", > "2018-07-14 00:55:13,264 DEBUG: 9379 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,iscsid_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,iscsid_config'", > "+ origin_of_time=/var/lib/config-data/iscsid.origin_of_time", > "+ touch /var/lib/config-data/iscsid.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,iscsid_config /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/iscsid", > "++ stat -c %y /var/lib/config-data/iscsid.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:55:07.413588450 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/iscsid", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/iscsid", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/iscsid.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/iscsid --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/iscsid --mtime=1970-01-01", > "2018-07-14 00:55:13,264 INFO: 9379 -- Removing container: docker-puppet-iscsid", > "2018-07-14 00:55:13,303 DEBUG: 9379 -- docker-puppet-iscsid", 
> "2018-07-14 00:55:13,303 INFO: 9379 -- Finished processing puppet configs for iscsid", > "2018-07-14 00:55:13,303 INFO: 9379 -- Starting configuration of glance_api using image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-07-13.3", > "2018-07-14 00:55:13,303 DEBUG: 9379 -- config_volume glance_api", > "2018-07-14 00:55:13,303 DEBUG: 9379 -- puppet_tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config", > "2018-07-14 00:55:13,303 DEBUG: 9379 -- manifest include ::tripleo::profile::base::glance::api", > "2018-07-14 00:55:13,303 DEBUG: 9379 -- config_image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-07-13.3", > "2018-07-14 00:55:13,304 DEBUG: 9379 -- volumes []", > "2018-07-14 00:55:13,304 INFO: 9379 -- Removing container: docker-puppet-glance_api", > "2018-07-14 00:55:13,370 INFO: 9379 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-07-13.3", > "2018-07-14 00:55:17,068 DEBUG: 9378 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-api ... 
", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-api", > "d06e8eb825ba: Pulling fs layer", > "d38d96b8dcf4: Pulling fs layer", > "d38d96b8dcf4: Verifying Checksum", > "d38d96b8dcf4: Download complete", > "d06e8eb825ba: Verifying Checksum", > "d06e8eb825ba: Download complete", > "d06e8eb825ba: Pull complete", > "d38d96b8dcf4: Pull complete", > "Digest: sha256:322fbc3f402ae76db7c59e06220b7d6b5e7bf92fadfb16f6b90d7796646f58d4", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-07-13.3", > "2018-07-14 00:55:17,071 DEBUG: 9378 -- NET_HOST enabled", > "2018-07-14 00:55:17,071 DEBUG: 9378 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-cinder --env PUPPET_TAGS=file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line --env NAME=cinder --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpm9w63D:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-07-13.3", > "2018-07-14 00:55:20,011 DEBUG: 9379 -- Trying to pull repository 
192.168.24.1:8787/rhosp14/openstack-glance-api ... ", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-glance-api", > "0c22feffde85: Pulling fs layer", > "e65c0e109c6d: Pulling fs layer", > "e65c0e109c6d: Verifying Checksum", > "e65c0e109c6d: Download complete", > "0c22feffde85: Verifying Checksum", > "0c22feffde85: Download complete", > "0c22feffde85: Pull complete", > "e65c0e109c6d: Pull complete", > "Digest: sha256:ca2e14c1413a43a84feea4d3ee070e3c52fb5eaeb5bc78a3bfe57a075f18e2e6", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-07-13.3", > "2018-07-14 00:55:20,015 DEBUG: 9379 -- NET_HOST enabled", > "2018-07-14 00:55:20,015 DEBUG: 9379 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-glance_api --env PUPPET_TAGS=file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config --env NAME=glance_api --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpbdXX7Q:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-07-13.3", > "2018-07-14 00:55:23,775 DEBUG: 9380 -- Notice: hiera(): Cannot load backend module_data: cannot load such 
file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.79 seconds", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/host]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/port]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/workers]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/max_limit]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[database/event_time_to_live]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_api_paste_ini[pipeline:main/pipeline]/ensure: created", > "Notice: /Stage[main]/Panko::Expirer/Cron[panko-expirer]/ensure: created", > "Notice: /Stage[main]/Panko::Logging/Oslo::Log[panko_config]/Panko_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Panko::Db/Oslo::Db[panko_config]/Panko_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Panko::Policy/Oslo::Policy[panko_config]/Panko_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/password]/ensure: 
created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Oslo::Middleware[panko_config]/Panko_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}ec1df407a586ae4810b1cdd609e806c5'", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/File[/var/www/cgi-bin/panko]/ensure: created", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/File[panko_wsgi]/ensure: defined content as '{md5}e6f446b6267321fd2251a3e83021181a'", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/Apache::Vhost[panko_wsgi]/Concat[10-panko_wsgi.conf]/File[/etc/httpd/conf.d/10-panko_wsgi.conf]/ensure: defined content as '{md5}24caa591e95864c9ae3dc779454df7c9'", > "Notice: Applied catalog in 1.13 seconds", > " Total: 101", > " Success: 101", > " Changed: 101", > " Out of sync: 101", > " Total: 255", > " Panko api paste ini: 0.00", > " Panko config: 0.13", > " Last run: 1531529722", > " Config retrieval: 4.34", > " Total: 4.84", > " Config: 1531529717", > "Gathering files modified after 2018-07-14 00:55:11.576574796 +0000", > "2018-07-14 00:55:23,775 DEBUG: 9380 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config'", > "+ 
origin_of_time=/var/lib/config-data/panko.origin_of_time", > "+ touch /var/lib/config-data/panko.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/panko/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/panko.pp\", 32]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/panko/manifests/db.pp\", 59]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/panko/api.pp\", 83]", > "Warning: Scope(Class[Panko::Api]): This Class is deprecated and will be removed in future releases.", > "Warning: Scope(Class[Panko::Keystone::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/panko", > "++ stat -c %y /var/lib/config-data/panko.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:55:11.576574796 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/panko", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/panko", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/panko.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/panko --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/panko --mtime=1970-01-01", > "2018-07-14 00:55:23,775 INFO: 9380 -- Removing container: docker-puppet-panko", > "2018-07-14 00:55:23,830 DEBUG: 9380 -- docker-puppet-panko", > "2018-07-14 00:55:23,830 INFO: 9380 -- Finished processing puppet configs for panko", > "2018-07-14 00:55:23,831 INFO: 9380 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", > "2018-07-14 00:55:23,831 DEBUG: 9380 -- config_volume crond", > "2018-07-14 00:55:23,831 DEBUG: 9380 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-07-14 00:55:23,831 DEBUG: 9380 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-07-14 00:55:23,831 DEBUG: 9380 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", > "2018-07-14 00:55:23,831 DEBUG: 9380 -- volumes []", > "2018-07-14 00:55:23,831 INFO: 9380 -- Removing container: docker-puppet-crond", > "2018-07-14 00:55:23,900 INFO: 9380 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", > "2018-07-14 00:55:24,387 DEBUG: 9380 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... 
", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "67ba27f668e7: Pulling fs layer", > "67ba27f668e7: Verifying Checksum", > "67ba27f668e7: Download complete", > "67ba27f668e7: Pull complete", > "Digest: sha256:2fd3b666f7247ced06a7fe1bfd5cc9b639c221a94e5e00f16aac56fa8e534d4e", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", > "2018-07-14 00:55:24,390 DEBUG: 9380 -- NET_HOST enabled", > "2018-07-14 00:55:24,390 DEBUG: 9380 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpIEQBSE:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cron:2018-07-13.3", > "2018-07-14 00:55:30,116 DEBUG: 9380 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.40 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as 
'{md5}5281f207697925ddab4d83d74a751eb4'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > " Skipped: 7", > " Total: 9", > " Config retrieval: 0.49", > " Total: 0.50", > " Last run: 1531529729", > " Config: 1531529728", > "Gathering files modified after 2018-07-14 00:55:24.556533622 +0000", > "2018-07-14 00:55:30,116 DEBUG: 9380 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:55:24.556533622 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/crond --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-07-14 00:55:30,116 INFO: 9380 -- Removing container: docker-puppet-crond", > "2018-07-14 00:55:30,149 DEBUG: 9380 -- docker-puppet-crond", > "2018-07-14 00:55:30,150 INFO: 9380 -- Finished processing puppet configs for crond", > "2018-07-14 00:55:30,150 INFO: 9380 -- Starting configuration of haproxy using image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-07-13.3", > "2018-07-14 00:55:30,150 
DEBUG: 9380 -- config_volume haproxy", > "2018-07-14 00:55:30,150 DEBUG: 9380 -- puppet_tags file,file_line,concat,augeas,cron,haproxy_config", > "2018-07-14 00:55:30,150 DEBUG: 9380 -- manifest exec {'wait-for-settle': command => '/bin/true' }", > "2018-07-14 00:55:30,150 DEBUG: 9380 -- config_image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-07-13.3", > "2018-07-14 00:55:30,150 DEBUG: 9380 -- volumes [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']", > "2018-07-14 00:55:30,150 INFO: 9380 -- Removing container: docker-puppet-haproxy", > "2018-07-14 00:55:30,223 INFO: 9380 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-07-13.3", > "2018-07-14 00:55:31,229 DEBUG: 9379 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.55 seconds", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_port]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/workers]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_image_direct_url]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_multiple_locations]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_cache_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enabled_import_methods]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/node_staging_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_member_quota]/ensure: 
created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v1_api]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v2_api]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/os_region_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/stores]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_cache_config[glance_store/os_region_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/registry_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_cache_config[DEFAULT/registry_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[paste_deploy/flavor]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_user]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_pool]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/default_store]/ensure: created", > "Notice: /Stage[main]/Glance::Policy/Oslo::Policy[glance_api_config]/Glance_api_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Db/Oslo::Db[glance_api_config]/Glance_api_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_file]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/debug]/ensure: created", > 
"Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_file]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Oslo::Middleware[glance_api_config]/Glance_api_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: 
/Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Rabbit[glance_api_config]/Glance_api_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Default[glance_api_config]/Glance_api_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: Applied catalog in 2.72 seconds", > " Total: 44", > " Success: 44", > " Total: 254", > " Out of sync: 44", > " Changed: 44", > " Skipped: 60", > " Glance cache config: 0.14", > " Glance api config: 2.29", > " Config retrieval: 2.96", > " Total: 5.47", > " Config: 1531529724", > "Gathering files modified after 2018-07-14 00:55:20.207547185 +0000", > "2018-07-14 00:55:31,229 DEBUG: 9379 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config'", > "+ origin_of_time=/var/lib/config-data/glance_api.origin_of_time", > "+ touch /var/lib/config-data/glance_api.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/glance/manifests/config.pp\", 48]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/glance/api.pp\", 198]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/glance/manifests/api/db.pp\", 69]:[\"/etc/puppet/modules/glance/manifests/api.pp\", 371]", > "Warning: Unknown variable: 'default_store_real'. at /etc/puppet/modules/glance/manifests/api.pp:438:9", > "Warning: Scope(Class[Glance::Api]): default_store not provided, it will be automatically set to http", > "Warning: Scope(Class[Glance::Api::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/glance_api", > "++ stat -c %y /var/lib/config-data/glance_api.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:55:20.207547185 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/glance_api", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/glance_api", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/glance_api.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/glance_api --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/glance_api --mtime=1970-01-01", > "2018-07-14 00:55:31,229 INFO: 9379 -- Removing container: docker-puppet-glance_api", > "2018-07-14 00:55:31,275 DEBUG: 9379 -- docker-puppet-glance_api", > "2018-07-14 00:55:31,275 INFO: 9379 -- Finished processing puppet configs for glance_api", > "2018-07-14 00:55:31,275 INFO: 9379 -- Starting configuration of rabbitmq using image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-07-13.3", > "2018-07-14 00:55:31,275 DEBUG: 9379 -- config_volume rabbitmq", > "2018-07-14 00:55:31,275 DEBUG: 9379 -- puppet_tags 
file,file_line,concat,augeas,cron,file", > "2018-07-14 00:55:31,275 DEBUG: 9379 -- manifest ['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }", > "2018-07-14 00:55:31,276 DEBUG: 9379 -- config_image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-07-13.3", > "2018-07-14 00:55:31,276 DEBUG: 9379 -- volumes []", > "2018-07-14 00:55:31,276 INFO: 9379 -- Removing container: docker-puppet-rabbitmq", > "2018-07-14 00:55:31,356 INFO: 9379 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-07-13.3", > "2018-07-14 00:55:34,246 DEBUG: 9380 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-haproxy ... ", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-haproxy", > "67e56271bfbc: Pulling fs layer", > "67e56271bfbc: Verifying Checksum", > "67e56271bfbc: Download complete", > "67e56271bfbc: Pull complete", > "Digest: sha256:5308bca68c06678fa2f9bc31e59ea9ca11c3a928581b1a19b5e639a942fa93af", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-07-13.3", > "2018-07-14 00:55:34,249 DEBUG: 9380 -- NET_HOST enabled", > "2018-07-14 00:55:34,249 DEBUG: 9380 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-haproxy --env PUPPET_TAGS=file,file_line,concat,augeas,cron,haproxy_config --env NAME=haproxy --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp8h_lu0:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/ipa/ca.crt:/etc/ipa/ca.crt:ro --volume /etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro --volume /etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro --volume /etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-07-13.3", > "2018-07-14 00:55:34,793 DEBUG: 9378 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.54 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Lvm/Augeas[udev options in lvm.conf]/returns: executed successfully", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}5303103164928fbe6eaa296315220ab6'", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/api_paste_config]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/storage_availability_zone]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/default_availability_zone]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/enable_v3_api]/ensure: created", > "Notice: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_servers]/ensure: created", > "Notice: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_version]/ensure: created", > "Notice: /Stage[main]/Cinder::Cron::Db_purge/Cron[cinder-manage db purge]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_listen]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_workers]/ensure: created", > "Notice: 
/Stage[main]/Cinder::Api/Cinder_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/nova_catalog_info]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[key_manager/backend]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_user]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_chunk_size]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_pool]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_unit]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_count]/ensure: created", > "Notice: /Stage[main]/Cinder::Scheduler/Cinder_config[DEFAULT/scheduler_driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Backends/Cinder_config[DEFAULT/enabled_backends]/ensure: created", > "Notice: /Stage[main]/Cinder::Backends/Cinder_config[tripleo_ceph/backend_host]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_dir]/ensure: created", > "Notice: 
/Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Cinder::Policy/Oslo::Policy[cinder_config]/Cinder_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Oslo::Middleware[cinder_config]/Cinder_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: 
/Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/File[cinder_wsgi]/ensure: defined content as '{md5}870efbe437d63cd260287cd36472d7b1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_backend_name]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_driver]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_user]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_pool]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_secret_uuid]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File[/etc/sysconfig/openstack-cinder-volume]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]/ensure: created", > "Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/Apache::Vhost[cinder_wsgi]/Concat[10-cinder_wsgi.conf]/File[/etc/httpd/conf.d/10-cinder_wsgi.conf]/ensure: defined content as '{md5}5eaf86f639908d62ff32df48a88a1abd'", > "Notice: Applied catalog in 5.12 seconds", > " Total: 134", > " Success: 134", > " Changed: 134", > " Out of sync: 134", > " Skipped: 37", > " Total: 375", > " File line: 0.00", > " Package: 0.06", > " File: 0.22", > " Augeas: 0.72", > " Last run: 1531529732", > " Cinder config: 3.41", > " Config retrieval: 5.19", > " Total: 9.62", > " Config: 1531529722", > "Gathering files modified after 2018-07-14 00:55:17.292556407 +0000", > "2018-07-14 00:55:34,793 DEBUG: 9378 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line'", > "+ origin_of_time=/var/lib/config-data/cinder.origin_of_time", > "+ touch /var/lib/config-data/cinder.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line /etc/config.pp", > " with Pattern[]. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/db.pp\", 69]:[\"/etc/puppet/modules/cinder/manifests/init.pp\", 320]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/cinder.pp\", 127]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/api.pp\", 203]:[\"/etc/config.pp\", 2]", > "Warning: Scope(Class[Cinder::Api]): The nova_catalog_admin_info parameter has been deprecated and will be removed in the future release.", > "Warning: Scope(Class[Cinder::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Unknown variable: 'ensure'. at /etc/puppet/modules/cinder/manifests/backup.pp:83:18", > "Warning: Unknown variable: 'ensure'. 
at /etc/puppet/modules/cinder/manifests/volume.pp:64:18", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/cinder", > "++ stat -c %y /var/lib/config-data/cinder.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:55:17.292556407 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/cinder", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/cinder", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/cinder.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/cinder --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/cinder --mtime=1970-01-01", > "2018-07-14 00:55:34,793 INFO: 9378 -- Removing container: docker-puppet-cinder", > "2018-07-14 00:55:35,157 DEBUG: 9378 -- docker-puppet-cinder", > "2018-07-14 00:55:35,158 INFO: 9378 -- Finished processing puppet configs for cinder", > "2018-07-14 00:55:35,158 INFO: 9378 -- Starting configuration of swift using image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-07-13.3", > "2018-07-14 00:55:35,158 DEBUG: 9378 -- config_volume swift", > "2018-07-14 00:55:35,158 DEBUG: 9378 -- puppet_tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server", > "2018-07-14 00:55:35,158 DEBUG: 9378 -- manifest include ::tripleo::profile::base::swift::proxy", > "include ::tripleo::profile::base::swift::storage", > "2018-07-14 00:55:35,158 DEBUG: 9378 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-07-13.3", > "2018-07-14 00:55:35,158 DEBUG: 9378 -- volumes []", > "2018-07-14 00:55:35,159 INFO: 9378 -- Removing container: docker-puppet-swift", > "2018-07-14 00:55:35,210 INFO: 
9378 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-07-13.3", > "2018-07-14 00:55:35,214 DEBUG: 9378 -- NET_HOST enabled", > "2018-07-14 00:55:35,214 DEBUG: 9378 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-swift --env PUPPET_TAGS=file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server --env NAME=swift --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpXBklZC:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-07-13.3", > "2018-07-14 00:55:36,288 DEBUG: 9379 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-rabbitmq ... 
", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-rabbitmq", > "21f769109c5a: Pulling fs layer", > "21f769109c5a: Verifying Checksum", > "21f769109c5a: Download complete", > "21f769109c5a: Pull complete", > "Digest: sha256:8401abbaaba0df9b993423fd1a2bea75e3623f99ea483094283c370b6ac8ab50", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-07-13.3", > "2018-07-14 00:55:36,291 DEBUG: 9379 -- NET_HOST enabled", > "2018-07-14 00:55:36,292 DEBUG: 9379 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-rabbitmq --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=rabbitmq --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpwGzkiC:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-07-13.3", > "2018-07-14 00:55:44,123 DEBUG: 9380 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/content: content changed '{md5}1f337186b0e1ba5ee82760cb437fb810' to 
'{md5}9094205fc8a174f05236cdf1fdac7020'", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/mode: mode changed '0644' to '0640'", > "Notice: Applied catalog in 0.34 seconds", > " Changed: 1", > " Out of sync: 1", > " Total: 76", > " File: 0.09", > " Last run: 1531529743", > " Config retrieval: 2.82", > " Total: 2.92", > " Config: 1531529740", > "Gathering files modified after 2018-07-14 00:55:34.502503452 +0000", > "2018-07-14 00:55:44,123 DEBUG: 9380 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,haproxy_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,haproxy_config'", > "+ origin_of_time=/var/lib/config-data/haproxy.origin_of_time", > "+ touch /var/lib/config-data/haproxy.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,haproxy_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/pacemaker/haproxy_with_vip.pp\", 65]:", > "Warning: Scope(Haproxy::Config[haproxy]): haproxy: The $merge_options parameter will default to true in the next major release. 
Please review the documentation regarding the implications.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/haproxy", > "++ stat -c %y /var/lib/config-data/haproxy.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:55:34.502503452 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/haproxy", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/haproxy", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/haproxy.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/haproxy --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/haproxy --mtime=1970-01-01", > "2018-07-14 00:55:44,123 INFO: 9380 -- Removing container: docker-puppet-haproxy", > "2018-07-14 00:55:44,171 DEBUG: 9380 -- docker-puppet-haproxy", > "2018-07-14 00:55:44,171 INFO: 9380 -- Finished processing puppet configs for haproxy", > "2018-07-14 00:55:44,171 INFO: 9380 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-07-13.3", > "2018-07-14 00:55:44,171 DEBUG: 9380 -- config_volume ceilometer", > "2018-07-14 00:55:44,171 DEBUG: 9380 -- puppet_tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config", > "2018-07-14 00:55:44,171 DEBUG: 9380 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "include ::tripleo::profile::base::ceilometer::agent::notification", > "2018-07-14 00:55:44,171 DEBUG: 9380 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-07-13.3", > "2018-07-14 00:55:44,172 DEBUG: 9380 -- volumes []", > "2018-07-14 00:55:44,172 INFO: 9380 -- Removing container: docker-puppet-ceilometer", > "2018-07-14 00:55:44,247 INFO: 9380 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-07-13.3", > "2018-07-14 00:55:44,603 DEBUG: 9378 
-- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.03 seconds", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/api_class]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/username]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_name]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[filter:cache/memcache_servers]/value: value changed '127.0.0.1:11211' to '172.17.1.19:11211'", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/auto_create_account_prefix]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/concurrency]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/expiring_objects_account_name]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/interval]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/process]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/processes]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/reclaim_age]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/recon_cache_path]/ensure: created", > "Notice: 
/Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/report_interval]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_level]/ensure: created", > "Notice: /Stage[main]/Rsync::Server/Xinetd::Service[rsync]/File[/rsync]/ensure: defined content as '{md5}498d0e172c7a3fbb087b878a96cea1ac'", > "Notice: /Stage[main]/Rsync::Server/Concat[/etc/rsyncd.conf]/File[/etc/rsyncd.conf]/content: content changed '{md5}c63fccb45c0dcbbbe17d0f4bdba920ec' to '{md5}6415979642f10d672f4a5c731d924910'", > "Notice: /Stage[main]/Swift/Swift_config[swift-hash/swift_hash_path_suffix]/value: value changed '%SWIFT_HASH_PATH_SUFFIX%' to 'OUr0bBsepQ1Eu944d7h7xHGAN'", > "Notice: /Stage[main]/Swift/Swift_config[swift-constraints/max_header_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/bind_ip]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/workers]/value: value changed '8' to 'auto'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_level]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_headers]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_address]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[pipeline:main/pipeline]/value: value changed 'catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server' to 'catch_errors healthcheck proxy-logging cache ratelimit bulk tempurl formpost authtoken s3api 
s3token keystone staticweb copy container_quotas account_quotas slo dlo versioned_writes proxy-logging proxy-server'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_level]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_address]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/log_handoffs]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/allow_account_management]/value: value changed 'true' to 'True'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/account_autocreate]/value: value changed 'true' to 'True'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/node_timeout]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Cache/Swift_proxy_config[filter:cache/memcache_servers]/value: value changed '127.0.0.1:11211' to '172.17.1.19:11211'", > "Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/operator_roles]/value: value changed 'admin, SwiftOperator' to 'admin, swiftoperator, ResellerAdmin'", > "Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/reseller_prefix]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/File[/var/cache/swift]/mode: mode changed '0755' to '0700'", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/signing_dir]/value: value changed '/tmp/keystone-signing-swift' to '/var/cache/swift'", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/www_authenticate_uri]/ensure: 
created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_plugin]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/username]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/password]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/delay_auth_decision]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/cache]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/include_service_catalog]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/url_base]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/clock_accuracy]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/max_sleep_time_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/log_sleep_time_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/rate_buffer_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/account_ratelimit]/ensure: 
created", > "Notice: /Stage[main]/Swift::Proxy::Formpost/Swift_proxy_config[filter:formpost/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_containers_per_extraction]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_failed_extractions]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_deletes_per_request]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/yield_frequency]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Versioned_writes/Swift_proxy_config[filter:versioned_writes/allow_versioned_writes]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_segments]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/min_segment_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_after_segment]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_segments_per_sec]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_get_time]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_after_segment]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_segments_per_sec]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/max_get_time]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Copy/Swift_proxy_config[filter:copy/object_post_as_copy]/value: value changed 'false' to 'True'", > "Notice: /Stage[main]/Swift::Proxy::Container_quotas/Swift_proxy_config[filter:container_quotas/use]/ensure: created", > "Notice: 
/Stage[main]/Swift::Proxy::Account_quotas/Swift_proxy_config[filter:account_quotas/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/disable_encryption]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/keymaster_config_path]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/auth_pipeline_check]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/auth_uri]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node/d1]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/File[/etc/swift/account-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Container/Swift::Storage::Generic[container]/File[/etc/swift/container-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/File[/etc/swift/object-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/etc/swift/account-server.conf]/ensure: defined content as '{md5}b6737ec59f5d720158ff08ed26083b6c'", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/etc/swift/container-server.conf]/ensure: defined 
content as '{md5}af182f0db84e3d5035b1bb0a5fa6f286'", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat[/etc/swift/object-server.conf]/File[/etc/swift/object-server.conf]/ensure: defined content as '{md5}9409e1e316f4c4e9ee6a1558f50aef52'", > "Notice: Applied catalog in 0.67 seconds", > " Total: 97", > " Success: 97", > " Total: 192", > " Out of sync: 97", > " Changed: 97", > " Swift config: 0.00", > " Swift keymaster config: 0.01", > " Swift object expirer config: 0.01", > " File: 0.04", > " Swift proxy config: 0.23", > " Config retrieval: 2.44", > " Total: 2.73", > "Gathering files modified after 2018-07-14 00:55:35.425500711 +0000", > "2018-07-14 00:55:44,604 DEBUG: 9378 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server'", > "+ origin_of_time=/var/lib/config-data/swift.origin_of_time", > "+ touch /var/lib/config-data/swift.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/swift/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 147]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/proxy.pp\", 163]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 148]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/proxy.pp\", 165]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 148]", > "Warning: Unknown variable: 'methods_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:100:56", > "Warning: Unknown variable: 'incoming_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:101:56", > "Warning: Unknown variable: 'incoming_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:102:56", > "Warning: Unknown variable: 'outgoing_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:103:56", > "Warning: Unknown variable: 'outgoing_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:104:56", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the object storage server has changed from 6000 to 6200 and will be changed in a later release", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the container storage server has changed from 6001 to 6201 and will be changed in a later release", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the account storage server has changed from 6002 to 6202 and will be changed in a later release", > "Warning: Class 'xinetd' is already defined at /etc/config.pp:6; cannot redefine at /etc/puppet/modules/xinetd/manifests/init.pp:12", > "Warning: Unknown variable: 'xinetd::params::default_user'. 
at /etc/puppet/modules/xinetd/manifests/service.pp:110:14", > "Warning: Unknown variable: 'xinetd::params::default_group'. at /etc/puppet/modules/xinetd/manifests/service.pp:116:15", > "Warning: Unknown variable: 'xinetd::confdir'. at /etc/puppet/modules/xinetd/manifests/service.pp:161:13", > "Warning: Unknown variable: 'xinetd::service_name'. at /etc/puppet/modules/xinetd/manifests/service.pp:166:24", > "Warning: Unknown variable: 'xinetd::confdir'. at /etc/puppet/modules/xinetd/manifests/service.pp:167:21", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/swift/manifests/storage/server.pp\", 183]:", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/storage/server.pp\", 197]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/swift", > "++ stat -c %y /var/lib/config-data/swift.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:55:35.425500711 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/swift", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/swift", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/swift.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/swift --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/swift --mtime=1970-01-01", > "2018-07-14 00:55:44,604 INFO: 9378 -- Removing container: docker-puppet-swift", > "2018-07-14 00:55:44,650 DEBUG: 9378 -- docker-puppet-swift", > "2018-07-14 00:55:44,650 INFO: 9378 -- Finished processing puppet configs for swift", > "2018-07-14 00:55:44,651 INFO: 9378 -- Starting configuration of 
heat_api_cfn using image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-07-13.3", > "2018-07-14 00:55:44,651 DEBUG: 9378 -- config_volume heat_api_cfn", > "2018-07-14 00:55:44,651 DEBUG: 9378 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-07-14 00:55:44,651 DEBUG: 9378 -- manifest include ::tripleo::profile::base::heat::api_cfn", > "2018-07-14 00:55:44,651 DEBUG: 9378 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-07-13.3", > "2018-07-14 00:55:44,651 DEBUG: 9378 -- volumes []", > "2018-07-14 00:55:44,652 INFO: 9378 -- Removing container: docker-puppet-heat_api_cfn", > "2018-07-14 00:55:44,726 INFO: 9378 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-07-13.3", > "2018-07-14 00:55:45,321 DEBUG: 9378 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn ... ", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn", > "4a562ff27157: Already exists", > "ec3980339b6d: Pulling fs layer", > "ec3980339b6d: Verifying Checksum", > "ec3980339b6d: Download complete", > "ec3980339b6d: Pull complete", > "Digest: sha256:c5b515b900a32fc7ed8f88514e4f9943636f9871bce4c186df7e92d78187cedf", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-07-13.3", > "2018-07-14 00:55:45,325 DEBUG: 9378 -- NET_HOST enabled", > "2018-07-14 00:55:45,325 DEBUG: 9378 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat_api_cfn --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat_api_cfn --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpaOgJcA:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume 
tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-07-13.3", > "2018-07-14 00:55:46,552 DEBUG: 9380 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-central ... ", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-central", > "5cdb8407851d: Pulling fs layer", > "2d6d2b1829e0: Pulling fs layer", > "5cdb8407851d: Verifying Checksum", > "5cdb8407851d: Download complete", > "2d6d2b1829e0: Verifying Checksum", > "2d6d2b1829e0: Download complete", > "5cdb8407851d: Pull complete", > "2d6d2b1829e0: Pull complete", > "Digest: sha256:9806c986ccd96861ec0dfb6a2d768a8df3a0d7a03b629c5ca436bea04c217565", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-07-13.3", > "2018-07-14 00:55:46,555 DEBUG: 9380 -- NET_HOST enabled", > "2018-07-14 00:55:46,556 DEBUG: 9380 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-ceilometer --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpVNSSOb:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log 
--volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-07-13.3", > "2018-07-14 00:55:47,580 DEBUG: 9379 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.87 seconds", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/group: group changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]/ensure: created", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]/ensure: defined content as '{md5}bf0433a058106978128deffab0e1d5e3'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]/ensure: defined content as '{md5}12f8d1a1f9f57f23c1be6c7bf2286e73'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]/ensure: defined content as '{md5}44d4ef5cb86ab30e6127e83939ef09c4'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]/ensure: created", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]/ensure: defined content as '{md5}91d370d2c5a1af171c9d5b5985fca733'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/ensure: defined content as '{md5}1030abc4db405b5f2969643e99bc7435'", > "Notice: 
/Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/content: content changed '{md5}b346ec0a8320f85f795bf612f6b02da7' to '{md5}cff53aa59e5080a780201735d2dbc2ab'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/mode: mode changed '0644' to '0640'", > "Notice: Applied catalog in 0.07 seconds", > " Total: 12", > " Success: 12", > " Total: 19", > " Out of sync: 9", > " Changed: 9", > " Total: 1.09", > " Last run: 1531529746", > " Config: 1531529745", > "Gathering files modified after 2018-07-14 00:55:36.471497616 +0000", > "2018-07-14 00:55:47,580 DEBUG: 9379 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/rabbitmq.origin_of_time", > "+ touch /var/lib/config-data/rabbitmq.origin_of_time", > "Warning: ModuleLoader: module 'rabbitmq' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/rabbitmq", > "++ stat -c %y /var/lib/config-data/rabbitmq.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:55:36.471497616 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/rabbitmq", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/rabbitmq", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/rabbitmq.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/rabbitmq --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/rabbitmq --mtime=1970-01-01", > "2018-07-14 00:55:47,580 INFO: 9379 -- Removing container: docker-puppet-rabbitmq", > "2018-07-14 00:55:47,627 DEBUG: 9379 -- docker-puppet-rabbitmq", > "2018-07-14 00:55:47,627 INFO: 9379 -- Finished processing puppet configs for rabbitmq", > 
"2018-07-14 00:55:47,627 INFO: 9379 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3", > "2018-07-14 00:55:47,627 DEBUG: 9379 -- config_volume neutron", > "2018-07-14 00:55:47,627 DEBUG: 9379 -- puppet_tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-07-14 00:55:47,627 DEBUG: 9379 -- manifest include tripleo::profile::base::neutron::server", > "include ::tripleo::profile::base::neutron::plugins::ml2", > "include tripleo::profile::base::neutron::dhcp", > "include tripleo::profile::base::neutron::l3", > "include tripleo::profile::base::neutron::metadata", > "include ::tripleo::profile::base::neutron::ovs", > "2018-07-14 00:55:47,628 DEBUG: 9379 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3", > "2018-07-14 00:55:47,628 DEBUG: 9379 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-07-14 00:55:47,628 INFO: 9379 -- Removing container: docker-puppet-neutron", > "2018-07-14 00:55:47,692 INFO: 9379 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3", > "2018-07-14 00:55:52,989 DEBUG: 9379 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-server ... 
", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-server", > "28e21e52f8ed: Pulling fs layer", > "f5518f3fd279: Pulling fs layer", > "f5518f3fd279: Verifying Checksum", > "f5518f3fd279: Download complete", > "28e21e52f8ed: Verifying Checksum", > "28e21e52f8ed: Download complete", > "28e21e52f8ed: Pull complete", > "f5518f3fd279: Pull complete", > "Digest: sha256:55b94d798a314329ba8115df66256b4d8917ec23b2c18dfe2c5135022a98c7de", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3", > "2018-07-14 00:55:52,992 DEBUG: 9379 -- NET_HOST enabled", > "2018-07-14 00:55:52,992 DEBUG: 9379 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-neutron --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 --env NAME=neutron --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpWF4the:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume 
/etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-07-13.3", > "2018-07-14 00:55:54,632 DEBUG: 9380 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.40 seconds", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/filter_project]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/archive_policy]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/resources_definition_file]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/user_domain_name]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/interface]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Notification/File[event_pipeline]/ensure: defined content as '{md5}dafea5c96d5da5251f9b8a275c6d71aa'", > "Notice: /Stage[main]/Ceilometer::Agent::Notification/Ceilometer_config[notification/ack_on_event_error]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created", > "Notice: Applied catalog in 0.80 seconds", > " Total: 29", > " Success: 29", > " Total: 156", > " Out of sync: 29", > " Changed: 29", > " Skipped: 35", > " Ceilometer config: 0.68", > " Config retrieval: 1.69", > " Last run: 1531529753", > " Total: 2.37", > " Config: 1531529751", > "Gathering files modified after 2018-07-14 00:55:46.730467927 +0000", > "2018-07-14 00:55:54,632 DEBUG: 9380 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config'", > "+ origin_of_time=/var/lib/config-data/ceilometer.origin_of_time", > "+ touch /var/lib/config-data/ceilometer.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config /etc/config.pp", > "Warning: ModuleLoader: module 'ceilometer' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/config.pp\", 35]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer.pp\", 111]", > "Warning: Scope(Class[Ceilometer::Dispatcher::Gnocchi]): The class ceilometer::dispatcher::gnocchi is deprecated. All its", > " options must be set as url parameters in", > " ceilometer::agent::notification::pipeline_publishers. Depending of the used", > " Gnocchi version their might be ignored.", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ceilometer/manifests/agent/notification.pp\", 118]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer/agent/notification.pp\", 34]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/ceilometer", > "++ stat -c %y /var/lib/config-data/ceilometer.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:55:46.730467927 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/ceilometer", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/ceilometer", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/ceilometer.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/ceilometer --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/ceilometer --mtime=1970-01-01", > "2018-07-14 00:55:54,632 INFO: 9380 -- Removing container: docker-puppet-ceilometer", > "2018-07-14 00:55:54,669 DEBUG: 9380 -- docker-puppet-ceilometer", > "2018-07-14 00:55:54,669 INFO: 9380 -- Finished processing puppet configs for ceilometer", > "2018-07-14 00:55:59,021 DEBUG: 9378 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.10 seconds", > "Notice: /Stage[main]/Heat::Api_cfn/Heat_config[heat_api_cfn/bind_host]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}3b47578b8a31380b1bddee31860ea7d6'", > "Notice: /Stage[main]/Apache::Mod::Headers/Apache::Mod[headers]/File[headers.load]/ensure: defined content as '{md5}96094c96352002c43ada5bdf8650ff38'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[/var/www/cgi-bin/heat]/ensure: 
created", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[heat_api_cfn_wsgi]/ensure: defined content as '{md5}c3ae61ab87649c8cdfab8977da2b194b'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/Apache::Vhost[heat_api_cfn_wsgi]/Concat[10-heat_api_cfn_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_cfn_wsgi.conf]/ensure: defined content as '{md5}b4af740ed263d2a65c84859ab506df76'", > "Notice: Applied catalog in 2.69 seconds", > " Total: 122", > " Success: 122", > " Changed: 122", > " Out of sync: 122", > " Total: 337", > " File: 0.42", > " Heat config: 1.64", > " Last run: 1531529757", > " Config retrieval: 4.72", > " Total: 6.83", > " Config: 1531529750", > "Gathering files modified after 2018-07-14 00:55:45.532471334 +0000", > "2018-07-14 00:55:59,021 DEBUG: 9378 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/heat_api_cfn.origin_of_time", > "+ touch /var/lib/config-data/heat_api_cfn.origin_of_time", > " with Stdlib::Compat::Integer. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/heat/manifests/wsgi/apache_api_cfn.pp\", 125]:[\"/etc/config.pp\", 2]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat_api_cfn", > "++ stat -c %y /var/lib/config-data/heat_api_cfn.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:55:45.532471334 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat_api_cfn", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat_api_cfn", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat_api_cfn.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/heat_api_cfn --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/heat_api_cfn --mtime=1970-01-01", > "2018-07-14 00:55:59,021 INFO: 9378 -- Removing container: docker-puppet-heat_api_cfn", > "2018-07-14 00:55:59,067 DEBUG: 9378 -- docker-puppet-heat_api_cfn", > "2018-07-14 00:55:59,068 INFO: 9378 -- Finished processing puppet configs for heat_api_cfn", > "2018-07-14 00:56:06,006 DEBUG: 9379 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.89 seconds", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: 
created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_url]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/username]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/password]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/endpoint_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/tenant_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_status_changes]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_data_changes]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/l3_ha]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/max_l3_agents_per_router]/ensure: 
created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/api_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/rpc_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_scheduler_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_distributed]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/enable_dvr]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/allow_automatic_l3agent_failover]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_firewall_rule]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_network_gateway]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_packet_filter]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_isolated_metadata]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/force_metadata]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_metadata_network]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/resync_interval]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/interface_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_dns_servers]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/interface_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/agent_mode]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/tunnel_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/vxlan_udp_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created", > "Notice: 
/Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Neutron::Policy/Oslo::Policy[neutron_config]/Neutron_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/username]/ensure: 
created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Oslo::Middleware[neutron_config]/Neutron_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created", > "Notice: Applied catalog in 1.77 seconds", > " Total: 104", > " Success: 104", > " Changed: 104", > " Out of sync: 104", > " Total: 356", > " Skipped: 44", > " Neutron api config: 0.00", > " Neutron agent ovs: 0.01", > " Neutron l3 agent config: 0.01", > " Neutron metadata agent config: 0.02", > " Neutron plugin ml2: 0.03", > " Neutron dhcp agent config: 0.03", > " Augeas: 0.05", > " Neutron 
config: 1.19", > " Last run: 1531529764", > " Config retrieval: 4.35", > " Total: 5.74", > " Config: 1531529758", > "Gathering files modified after 2018-07-14 00:55:53.177449868 +0000", > "2018-07-14 00:56:06,006 DEBUG: 9379 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2'", > "+ origin_of_time=/var/lib/config-data/neutron.origin_of_time", > "+ touch /var/lib/config-data/neutron.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/neutron/manifests/init.pp\", 486]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/server.pp\", 104]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/neutron/manifests/config.pp\", 136]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp\", 141]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/db.pp\", 69]:[\"/etc/puppet/modules/neutron/manifests/server.pp\", 284]", > "Warning: Scope(Class[Neutron::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Unknown variable: '::neutron::params::metadata_agent_package'. at /etc/puppet/modules/neutron/manifests/agents/metadata.pp:122:6", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 207]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/ovs.pp\", 59]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/neutron", > "++ stat -c %y /var/lib/config-data/neutron.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:55:53.177449868 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/neutron", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/neutron", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/neutron.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/neutron --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/neutron --mtime=1970-01-01", > "2018-07-14 00:56:06,006 INFO: 9379 -- Removing container: docker-puppet-neutron", > "2018-07-14 00:56:06,040 DEBUG: 9379 -- docker-puppet-neutron", > "2018-07-14 00:56:06,040 INFO: 9379 -- Finished processing puppet configs for neutron", > "2018-07-14 00:56:06,040 INFO: 9379 -- Starting configuration of horizon using image 
192.168.24.1:8787/rhosp14/openstack-horizon:2018-07-13.3", > "2018-07-14 00:56:06,040 DEBUG: 9379 -- config_volume horizon", > "2018-07-14 00:56:06,040 DEBUG: 9379 -- puppet_tags file,file_line,concat,augeas,cron,horizon_config", > "2018-07-14 00:56:06,040 DEBUG: 9379 -- manifest include ::tripleo::profile::base::horizon", > "2018-07-14 00:56:06,041 DEBUG: 9379 -- config_image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-07-13.3", > "2018-07-14 00:56:06,041 DEBUG: 9379 -- volumes []", > "2018-07-14 00:56:06,041 INFO: 9379 -- Removing container: docker-puppet-horizon", > "2018-07-14 00:56:06,104 INFO: 9379 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-horizon:2018-07-13.3", > "2018-07-14 00:56:11,466 DEBUG: 9379 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-horizon ... ", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-horizon", > "e7641a6454ac: Pulling fs layer", > "e7641a6454ac: Download complete", > "e7641a6454ac: Pull complete", > "Digest: sha256:11df8012de8bf276be7c66f7e0e6a61c8c3752261bd6e0a9eb59ea714324e69a", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-horizon:2018-07-13.3", > "2018-07-14 00:56:11,469 DEBUG: 9379 -- NET_HOST enabled", > "2018-07-14 00:56:11,469 DEBUG: 9379 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-horizon --env PUPPET_TAGS=file,file_line,concat,augeas,cron,horizon_config --env NAME=horizon --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpzWOQ4I:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro 
--volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-horizon:2018-07-13.3", > "2018-07-14 00:56:21,446 DEBUG: 9379 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.34 seconds", > "Notice: /Stage[main]/Apache::Mod::Remoteip/File[remoteip.conf]/ensure: defined content as '{md5}384b8caa3e78c74589d234885b5120ef'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon]/mode: mode changed '0750' to '0751'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon/horizon.log]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}14fe5f39cffddb420b07c89503b2ac4b'", > "Notice: /Stage[main]/Apache::Mod::Remoteip/Apache::Mod[remoteip]/File[remoteip.load]/ensure: defined content as '{md5}118eb7518a1d018a162d23dfe32c4bad'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/content: content changed '{md5}11477350f4c10069548dc52fd24afd3e' to '{md5}a9dba5e4af0549a764b70ee01a677dc0'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/owner: owner changed 'horizon' to 'apache'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/group: group changed 'horizon' to 'apache'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/etc/httpd/conf.d/openstack-dashboard.conf]/content: 
content changed '{md5}4cb4b1391d3553951208fad1ce791e5c' to '{md5}3f4b1c53d0e150dae37b3ee5dcaf622d'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat[10-horizon_vhost.conf]/File[/etc/httpd/conf.d/10-horizon_vhost.conf]/ensure: defined content as '{md5}24b416abb5ff53d31ce5502da2259893'", > "Notice: Applied catalog in 0.61 seconds", > " Total: 86", > " Success: 86", > " Total: 172", > " Out of sync: 84", > " Changed: 84", > " File: 0.23", > " Last run: 1531529780", > " Config retrieval: 2.74", > " Total: 2.98", > " Config: 1531529777", > "Gathering files modified after 2018-07-14 00:56:11.675400502 +0000", > "2018-07-14 00:56:21,447 DEBUG: 9379 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,horizon_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,horizon_config'", > "+ origin_of_time=/var/lib/config-data/horizon.origin_of_time", > "+ touch /var/lib/config-data/horizon.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,horizon_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/horizon.pp\", 97]:[\"/etc/config.pp\", 2]", > "Warning: ModuleLoader: module 'horizon' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: Undefined variable ''; ", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 579]:[\"/etc/config.pp\", 2]", > " with Pattern[]. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 580]:[\"/etc/config.pp\", 2]", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 582]:[\"/etc/config.pp\", 2]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/horizon", > "++ stat -c %y /var/lib/config-data/horizon.origin_of_time", > "+ echo 'Gathering files modified after 2018-07-14 00:56:11.675400502 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/horizon", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/horizon", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/horizon.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/horizon --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/horizon --mtime=1970-01-01", > "2018-07-14 00:56:21,447 INFO: 9379 -- Removing container: docker-puppet-horizon", > "2018-07-14 00:56:21,501 DEBUG: 9379 -- docker-puppet-horizon", > "2018-07-14 00:56:21,501 INFO: 9379 -- Finished processing puppet configs for horizon", > "2018-07-14 00:56:21,502 DEBUG: 9377 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-07-14 00:56:21,502 DEBUG: 9377 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-07-14 00:56:21,504 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-07-14 00:56:21,504 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-07-14 00:56:21,505 DEBUG: 9377 -- Updating config hash for mysql_bootstrap, 
config_volume=heat_api_cfn hash=e3e4add3b600966ca1882253a7152031", > "2018-07-14 00:56:21,505 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-07-14 00:56:21,505 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-07-14 00:56:21,505 DEBUG: 9377 -- Updating config hash for rabbitmq_bootstrap, config_volume=heat_api_cfn hash=12ac90a0b3d54ed3256a387220a496b2", > "2018-07-14 00:56:21,505 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/memcached/etc/sysconfig.md5sum for config_volume /var/lib/config-data/memcached/etc/sysconfig", > "2018-07-14 00:56:21,507 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_placement.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_placement", > "2018-07-14 00:56:21,507 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/nova_placement.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_placement", > "2018-07-14 00:56:21,507 DEBUG: 9377 -- Updating config hash for nova_placement, config_volume=heat_api_cfn hash=20beb9ad1d8f412b7176455207ee5c49", > "2018-07-14 00:56:21,508 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,508 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,508 DEBUG: 9377 -- Updating config hash for swift_rsync_fix, config_volume=heat_api_cfn hash=dcbec3b34ef4eefbd74d46f8f2b365c0", > "2018-07-14 00:56:21,508 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-07-14 00:56:21,508 DEBUG: 
9377 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-07-14 00:56:21,508 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/heat/etc/heat.md5sum for config_volume /var/lib/config-data/heat/etc/heat", > "2018-07-14 00:56:21,509 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/heat/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/heat/etc/my.cnf.d", > "2018-07-14 00:56:21,509 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data.md5sum for config_volume /var/lib/config-data", > "2018-07-14 00:56:21,509 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/swift/etc", > "2018-07-14 00:56:21,509 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-07-14 00:56:21,509 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-07-14 00:56:21,509 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-07-14 00:56:21,509 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-07-14 00:56:21,509 DEBUG: 9377 -- Updating config hash for keystone_cron, config_volume=heat_api_cfn hash=3cceb185fba55f30d28055ef07c87b1e", > "2018-07-14 00:56:21,509 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/panko/etc.md5sum for config_volume /var/lib/config-data/panko/etc", > "2018-07-14 00:56:21,509 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/panko/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/panko/etc/my.cnf.d", > "2018-07-14 00:56:21,510 DEBUG: 9377 -- Looking for hashfile 
/var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-07-14 00:56:21,510 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-07-14 00:56:21,510 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-07-14 00:56:21,510 DEBUG: 9377 -- Updating config hash for keystone_db_sync, config_volume=heat_api_cfn hash=3cceb185fba55f30d28055ef07c87b1e", > "2018-07-14 00:56:21,510 DEBUG: 9377 -- Updating config hash for keystone, config_volume=heat_api_cfn hash=3cceb185fba55f30d28055ef07c87b1e", > "2018-07-14 00:56:21,510 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/aodh/etc/aodh.md5sum for config_volume /var/lib/config-data/aodh/etc/aodh", > "2018-07-14 00:56:21,510 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/aodh/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/aodh/etc/my.cnf.d", > "2018-07-14 00:56:21,510 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-07-14 00:56:21,510 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-07-14 00:56:21,510 DEBUG: 9377 -- Updating config hash for neutron_ovs_bridge, config_volume=heat_api_cfn hash=62c71abc54f0ccf32a636a87ddce28cd", > "2018-07-14 00:56:21,511 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/cinder/etc/cinder.md5sum for config_volume /var/lib/config-data/cinder/etc/cinder", > "2018-07-14 00:56:21,511 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/cinder/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/cinder/etc/my.cnf.d", > "2018-07-14 00:56:21,511 DEBUG: 9377 -- Looking for hashfile 
/var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-07-14 00:56:21,511 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-07-14 00:56:21,511 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-07-14 00:56:21,511 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-07-14 00:56:21,511 DEBUG: 9377 -- Updating config hash for glance_api_db_sync, config_volume=heat_api_cfn hash=506edd2712cc7cba6de7e8435218538a", > "2018-07-14 00:56:21,511 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/neutron/etc.md5sum for config_volume /var/lib/config-data/neutron/etc", > "2018-07-14 00:56:21,511 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/neutron/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/neutron/etc/my.cnf.d", > "2018-07-14 00:56:21,511 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/neutron/usr/share.md5sum for config_volume /var/lib/config-data/neutron/usr/share", > "2018-07-14 00:56:21,511 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/sahara/etc/sahara.md5sum for config_volume /var/lib/config-data/sahara/etc/sahara", > "2018-07-14 00:56:21,511 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/horizon.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon", > "2018-07-14 00:56:21,511 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/horizon.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon", > "2018-07-14 00:56:21,512 DEBUG: 9377 -- Updating config hash for horizon, config_volume=heat_api_cfn hash=e0843b0088530426fcf1645ddd4ab15f", > "2018-07-14 00:56:21,513 DEBUG: 9377 -- 
Looking for hashfile /var/lib/config-data/puppet-generated/clustercheck.md5sum for config_volume /var/lib/config-data/puppet-generated/clustercheck", > "2018-07-14 00:56:21,513 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/clustercheck.md5sum for config_volume /var/lib/config-data/puppet-generated/clustercheck", > "2018-07-14 00:56:21,514 DEBUG: 9377 -- Updating config hash for clustercheck, config_volume=heat_api_cfn hash=13757698e02d8bb5f09611f2bebffcaf", > "2018-07-14 00:56:21,514 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-07-14 00:56:21,514 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-07-14 00:56:21,514 DEBUG: 9377 -- Updating config hash for mysql_restart_bundle, config_volume=heat_api_cfn hash=e3e4add3b600966ca1882253a7152031", > "2018-07-14 00:56:21,514 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/haproxy.md5sum for config_volume /var/lib/config-data/puppet-generated/haproxy", > "2018-07-14 00:56:21,514 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/haproxy.md5sum for config_volume /var/lib/config-data/puppet-generated/haproxy", > "2018-07-14 00:56:21,514 DEBUG: 9377 -- Updating config hash for haproxy_restart_bundle, config_volume=heat_api_cfn hash=2ad5acd211f2fc974b50d8c63162b217", > "2018-07-14 00:56:21,514 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-07-14 00:56:21,514 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-07-14 00:56:21,514 DEBUG: 9377 -- Updating config hash for rabbitmq_restart_bundle, config_volume=heat_api_cfn 
hash=12ac90a0b3d54ed3256a387220a496b2", > "2018-07-14 00:56:21,514 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/horizon/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon/etc", > "2018-07-14 00:56:21,514 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/redis.md5sum for config_volume /var/lib/config-data/puppet-generated/redis", > "2018-07-14 00:56:21,515 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/redis.md5sum for config_volume /var/lib/config-data/puppet-generated/redis", > "2018-07-14 00:56:21,515 DEBUG: 9377 -- Updating config hash for redis_restart_bundle, config_volume=heat_api_cfn hash=673272ba7baa62ddc4a9338ffb010dd0", > "2018-07-14 00:56:21,516 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-07-14 00:56:21,516 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-07-14 00:56:21,516 DEBUG: 9377 -- Updating config hash for cinder_volume_restart_bundle, config_volume=heat_api_cfn hash=b980578074f84676eb00570abaa83add", > "2018-07-14 00:56:21,516 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-07-14 00:56:21,516 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-07-14 00:56:21,516 DEBUG: 9377 -- Updating config hash for gnocchi_statsd, config_volume=heat_api_cfn hash=56a6c68412dc07322e9f98ed837b593a", > "2018-07-14 00:56:21,517 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-07-14 00:56:21,517 DEBUG: 9377 -- Updating config hash for 
cinder_backup_restart_bundle, config_volume=heat_api_cfn hash=b980578074f84676eb00570abaa83add", > "2018-07-14 00:56:21,517 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-07-14 00:56:21,517 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-07-14 00:56:21,517 DEBUG: 9377 -- Updating config hash for gnocchi_metricd, config_volume=heat_api_cfn hash=56a6c68412dc07322e9f98ed837b593a", > "2018-07-14 00:56:21,517 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-07-14 00:56:21,517 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-07-14 00:56:21,517 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/ceilometer/etc/ceilometer.md5sum for config_volume /var/lib/config-data/ceilometer/etc/ceilometer", > "2018-07-14 00:56:21,517 DEBUG: 9377 -- Updating config hash for gnocchi_api, config_volume=heat_api_cfn hash=56a6c68412dc07322e9f98ed837b593a", > "2018-07-14 00:56:21,519 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,519 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,519 DEBUG: 9377 -- Updating config hash for swift_container_updater, config_volume=heat_api_cfn hash=dcbec3b34ef4eefbd74d46f8f2b365c0", > "2018-07-14 00:56:21,519 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-07-14 00:56:21,519 DEBUG: 9377 -- Got 
hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-07-14 00:56:21,520 DEBUG: 9377 -- Updating config hash for aodh_evaluator, config_volume=heat_api_cfn hash=5742cec2901444da556613a81d458170", > "2018-07-14 00:56:21,520 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-07-14 00:56:21,520 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-07-14 00:56:21,520 DEBUG: 9377 -- Updating config hash for nova_scheduler, config_volume=heat_api_cfn hash=e96b0acbbc9a56e61a73fe86bc2632f4", > "2018-07-14 00:56:21,520 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,520 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,520 DEBUG: 9377 -- Updating config hash for swift_object_server, config_volume=heat_api_cfn hash=dcbec3b34ef4eefbd74d46f8f2b365c0", > "2018-07-14 00:56:21,520 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-07-14 00:56:21,520 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-07-14 00:56:21,520 DEBUG: 9377 -- Updating config hash for cinder_api, config_volume=heat_api_cfn hash=b980578074f84676eb00570abaa83add", > "2018-07-14 00:56:21,521 DEBUG: 9377 -- Updating config hash for swift_proxy, config_volume=heat_api_cfn hash=dcbec3b34ef4eefbd74d46f8f2b365c0", > "2018-07-14 00:56:21,521 DEBUG: 9377 -- Looking for hashfile 
/var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-07-14 00:56:21,521 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-07-14 00:56:21,521 DEBUG: 9377 -- Updating config hash for neutron_dhcp, config_volume=heat_api_cfn hash=62c71abc54f0ccf32a636a87ddce28cd", > "2018-07-14 00:56:21,521 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-07-14 00:56:21,521 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-07-14 00:56:21,521 DEBUG: 9377 -- Updating config hash for heat_api, config_volume=heat_api_cfn hash=8bc93c11495732654c347ab1cf42381f", > "2018-07-14 00:56:21,521 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,521 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,521 DEBUG: 9377 -- Updating config hash for swift_object_auditor, config_volume=heat_api_cfn hash=dcbec3b34ef4eefbd74d46f8f2b365c0", > "2018-07-14 00:56:21,521 DEBUG: 9377 -- Updating config hash for neutron_metadata_agent, config_volume=heat_api_cfn hash=62c71abc54f0ccf32a636a87ddce28cd", > "2018-07-14 00:56:21,521 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-07-14 00:56:21,521 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-07-14 
00:56:21,522 DEBUG: 9377 -- Updating config hash for ceilometer_agent_central, config_volume=heat_api_cfn hash=55d5ba4542e583550edcf5d89e174499", > "2018-07-14 00:56:21,522 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,522 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,522 DEBUG: 9377 -- Updating config hash for swift_account_replicator, config_volume=heat_api_cfn hash=dcbec3b34ef4eefbd74d46f8f2b365c0", > "2018-07-14 00:56:21,522 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-07-14 00:56:21,522 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-07-14 00:56:21,522 DEBUG: 9377 -- Updating config hash for aodh_notifier, config_volume=heat_api_cfn hash=5742cec2901444da556613a81d458170", > "2018-07-14 00:56:21,522 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-07-14 00:56:21,522 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-07-14 00:56:21,522 DEBUG: 9377 -- Updating config hash for nova_api_cron, config_volume=heat_api_cfn hash=e96b0acbbc9a56e61a73fe86bc2632f4", > "2018-07-14 00:56:21,522 DEBUG: 9377 -- Updating config hash for nova_consoleauth, config_volume=heat_api_cfn hash=e96b0acbbc9a56e61a73fe86bc2632f4", > "2018-07-14 00:56:21,523 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-07-14 
00:56:21,523 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-07-14 00:56:21,523 DEBUG: 9377 -- Updating config hash for gnocchi_db_sync, config_volume=heat_api_cfn hash=56a6c68412dc07322e9f98ed837b593a", > "2018-07-14 00:56:21,523 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,523 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,523 DEBUG: 9377 -- Updating config hash for swift_account_reaper, config_volume=heat_api_cfn hash=dcbec3b34ef4eefbd74d46f8f2b365c0", > "2018-07-14 00:56:21,523 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-07-14 00:56:21,523 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-07-14 00:56:21,523 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-07-14 00:56:21,523 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-07-14 00:56:21,523 DEBUG: 9377 -- Updating config hash for ceilometer_agent_notification, config_volume=heat_api_cfn hash=55d5ba4542e583550edcf5d89e174499-4f55854cf24371e622da3818cd970e5e", > "2018-07-14 00:56:21,523 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-07-14 00:56:21,523 DEBUG: 9377 -- Got hashfile 
/var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-07-14 00:56:21,523 DEBUG: 9377 -- Updating config hash for nova_vnc_proxy, config_volume=heat_api_cfn hash=e96b0acbbc9a56e61a73fe86bc2632f4", > "2018-07-14 00:56:21,524 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,524 DEBUG: 9377 -- Updating config hash for swift_rsync, config_volume=heat_api_cfn hash=dcbec3b34ef4eefbd74d46f8f2b365c0", > "2018-07-14 00:56:21,524 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-07-14 00:56:21,524 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-07-14 00:56:21,524 DEBUG: 9377 -- Updating config hash for nova_api, config_volume=heat_api_cfn hash=e96b0acbbc9a56e61a73fe86bc2632f4", > "2018-07-14 00:56:21,524 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-07-14 00:56:21,524 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-07-14 00:56:21,524 DEBUG: 9377 -- Updating config hash for aodh_api, config_volume=heat_api_cfn hash=5742cec2901444da556613a81d458170", > "2018-07-14 00:56:21,524 DEBUG: 9377 -- Updating config hash for nova_metadata, config_volume=heat_api_cfn hash=e96b0acbbc9a56e61a73fe86bc2632f4", > "2018-07-14 00:56:21,524 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat.md5sum for config_volume /var/lib/config-data/puppet-generated/heat", > "2018-07-14 00:56:21,524 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/heat.md5sum for 
config_volume /var/lib/config-data/puppet-generated/heat", > "2018-07-14 00:56:21,524 DEBUG: 9377 -- Updating config hash for heat_engine, config_volume=heat_api_cfn hash=7cdc59bda4371ca2b899f49e78193f14", > "2018-07-14 00:56:21,524 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,525 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,525 DEBUG: 9377 -- Updating config hash for swift_container_server, config_volume=heat_api_cfn hash=dcbec3b34ef4eefbd74d46f8f2b365c0", > "2018-07-14 00:56:21,525 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,525 DEBUG: 9377 -- Updating config hash for swift_object_replicator, config_volume=heat_api_cfn hash=dcbec3b34ef4eefbd74d46f8f2b365c0", > "2018-07-14 00:56:21,525 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-07-14 00:56:21,525 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-07-14 00:56:21,525 DEBUG: 9377 -- Updating config hash for neutron_l3_agent, config_volume=heat_api_cfn hash=62c71abc54f0ccf32a636a87ddce28cd", > "2018-07-14 00:56:21,525 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-07-14 00:56:21,525 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-07-14 00:56:21,525 DEBUG: 9377 -- Updating config hash for cinder_scheduler, 
config_volume=heat_api_cfn hash=b980578074f84676eb00570abaa83add", > "2018-07-14 00:56:21,525 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-07-14 00:56:21,525 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-07-14 00:56:21,526 DEBUG: 9377 -- Updating config hash for nova_conductor, config_volume=heat_api_cfn hash=e96b0acbbc9a56e61a73fe86bc2632f4", > "2018-07-14 00:56:21,526 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api_cfn.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api_cfn", > "2018-07-14 00:56:21,526 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api_cfn.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api_cfn", > "2018-07-14 00:56:21,526 DEBUG: 9377 -- Updating config hash for heat_api_cfn, config_volume=heat_api_cfn hash=9128c8ab568236022de2203d112978d1", > "2018-07-14 00:56:21,526 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara", > "2018-07-14 00:56:21,526 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara", > "2018-07-14 00:56:21,526 DEBUG: 9377 -- Updating config hash for sahara_api, config_volume=heat_api_cfn hash=4fbccbcd38b893f3d20fa125e4b0199c", > "2018-07-14 00:56:21,526 DEBUG: 9377 -- Updating config hash for sahara_engine, config_volume=heat_api_cfn hash=4fbccbcd38b893f3d20fa125e4b0199c", > "2018-07-14 00:56:21,526 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-07-14 00:56:21,526 DEBUG: 9377 -- Got hashfile 
/var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-07-14 00:56:21,526 DEBUG: 9377 -- Updating config hash for neutron_ovs_agent, config_volume=heat_api_cfn hash=62c71abc54f0ccf32a636a87ddce28cd", > "2018-07-14 00:56:21,526 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-07-14 00:56:21,527 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-07-14 00:56:21,527 DEBUG: 9377 -- Updating config hash for cinder_api_cron, config_volume=heat_api_cfn hash=b980578074f84676eb00570abaa83add", > "2018-07-14 00:56:21,527 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,527 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,527 DEBUG: 9377 -- Updating config hash for swift_account_auditor, config_volume=heat_api_cfn hash=dcbec3b34ef4eefbd74d46f8f2b365c0", > "2018-07-14 00:56:21,527 DEBUG: 9377 -- Updating config hash for swift_container_replicator, config_volume=heat_api_cfn hash=dcbec3b34ef4eefbd74d46f8f2b365c0", > "2018-07-14 00:56:21,527 DEBUG: 9377 -- Updating config hash for swift_object_updater, config_volume=heat_api_cfn hash=dcbec3b34ef4eefbd74d46f8f2b365c0", > "2018-07-14 00:56:21,527 DEBUG: 9377 -- Updating config hash for swift_object_expirer, config_volume=heat_api_cfn hash=dcbec3b34ef4eefbd74d46f8f2b365c0", > "2018-07-14 00:56:21,528 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-07-14 00:56:21,528 DEBUG: 9377 -- Got hashfile 
/var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-07-14 00:56:21,528 DEBUG: 9377 -- Updating config hash for heat_api_cron, config_volume=heat_api_cfn hash=8bc93c11495732654c347ab1cf42381f", > "2018-07-14 00:56:21,528 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,528 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,528 DEBUG: 9377 -- Updating config hash for swift_container_auditor, config_volume=heat_api_cfn hash=dcbec3b34ef4eefbd74d46f8f2b365c0", > "2018-07-14 00:56:21,528 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-07-14 00:56:21,528 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-07-14 00:56:21,528 DEBUG: 9377 -- Updating config hash for panko_api, config_volume=heat_api_cfn hash=4f55854cf24371e622da3818cd970e5e", > "2018-07-14 00:56:21,528 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-07-14 00:56:21,528 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-07-14 00:56:21,528 DEBUG: 9377 -- Updating config hash for aodh_listener, config_volume=heat_api_cfn hash=5742cec2901444da556613a81d458170", > "2018-07-14 00:56:21,528 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-07-14 00:56:21,529 DEBUG: 9377 -- Got hashfile 
/var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-07-14 00:56:21,529 DEBUG: 9377 -- Updating config hash for neutron_api, config_volume=heat_api_cfn hash=62c71abc54f0ccf32a636a87ddce28cd", > "2018-07-14 00:56:21,529 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,529 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-07-14 00:56:21,529 DEBUG: 9377 -- Updating config hash for swift_account_server, config_volume=heat_api_cfn hash=dcbec3b34ef4eefbd74d46f8f2b365c0", > "2018-07-14 00:56:21,529 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-07-14 00:56:21,529 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-07-14 00:56:21,529 DEBUG: 9377 -- Updating config hash for glance_api, config_volume=heat_api_cfn hash=506edd2712cc7cba6de7e8435218538a", > "2018-07-14 00:56:21,529 DEBUG: 9377 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-07-14 00:56:21,529 DEBUG: 9377 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-07-14 00:56:21,529 DEBUG: 9377 -- Updating config hash for logrotate_crond, config_volume=heat_api_cfn hash=1984da984de1bd86f7689c9e9522d41d" > ] >} >2018-07-13 20:56:22,490 p=5867 u=mistral | TASK [Start containers for step 1] ********************************************* >2018-07-13 20:56:22,490 p=5867 u=mistral | Friday 13 July 2018 20:56:22 -0400 
(0:00:01.202) 0:09:45.678 *********** >2018-07-13 20:56:23,247 p=5867 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:56:23,269 p=5867 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:56:51,317 p=5867 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:56:51,345 p=5867 u=mistral | TASK [Debug output for task which failed: Start containers for step 1] ********* >2018-07-13 20:56:51,346 p=5867 u=mistral | Friday 13 July 2018 20:56:51 -0400 (0:00:28.855) 0:10:14.534 *********** >2018-07-13 20:56:51,417 p=5867 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-backup ... ", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-backup", > "d02c3bd49e78: Already exists", > "475b0168c252: Already exists", > "98a4cb0b02ef: Already exists", > "1b52dc9b90b4: Already exists", > "d06e8eb825ba: Already exists", > "6a9e8fd22b7d: Pulling fs layer", > "6a9e8fd22b7d: Verifying Checksum", > "6a9e8fd22b7d: Download complete", > "6a9e8fd22b7d: Pull complete", > "Digest: sha256:c790b9ce4b78947c39730c10c8fe43c0c9694fe48f64d43fd98374ae5657af0c", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-07-13.3", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-volume ... 
", > "2018-07-13.3: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-volume", > "ba24b685b41c: Pulling fs layer", > "ba24b685b41c: Download complete", > "ba24b685b41c: Pull complete", > "Digest: sha256:030fedd0c5e258f7712d983fcc39e3fa8765d12fe38b1c1d06e318b96cf976a2", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-07-13.3", > "stdout: ", > "stdout: 730e04b2e6bb8b60065c17a338f35114c08d2ee7cc74490cddbb634d94482a14", > "stdout: 1a2889b00494024c8f609164c7d95a673d6aea3093550e4cc028deaf431fc97e", > "stdout: Installing MariaDB/MySQL system tables in '/var/lib/mysql' ...", > "OK", > "Filling help tables...", > "Creating OpenGIS required SP-s...", > "To start mysqld at boot time you have to copy", > "support-files/mysql.server to the right place for your system", > "PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !", > "To do so, start the server, then issue the following commands:", > "'/usr/bin/mysqladmin' -u root password 'new-password'", > "'/usr/bin/mysqladmin' -u root -h controller-0 password 'new-password'", > "Alternatively you can run:", > "'/usr/bin/mysql_secure_installation'", > "which will also give you the option of removing the test", > "databases and anonymous user created by default. 
This is", > "strongly recommended for production servers.", > "See the MariaDB Knowledgebase at http://mariadb.com/kb or the", > "MySQL manual for more instructions.", > "You can start the MariaDB daemon with:", > "cd '/usr' ; /usr/bin/mysqld_safe --datadir='/var/lib/mysql'", > "You can test the MariaDB daemon with mysql-test-run.pl", > "cd '/usr/mysql-test' ; perl mysql-test-run.pl", > "Please report any problems at http://mariadb.org/jira", > "The latest information about MariaDB is available at http://mariadb.org/.", > "You can find additional information about the MySQL part at:", > "http://dev.mysql.com", > "Consider joining MariaDB's strong and vibrant community:", > "https://mariadb.org/get-involved/", > "180714 00:56:42 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.", > "180714 00:56:42 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql", > "spawn mysql_secure_installation", > "NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB", > " SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!", > "In order to log into MariaDB to secure it, we'll need the current", > "password for the root user. If you've just installed MariaDB, and", > "you haven't set the root password yet, the password will be blank,", > "so you should just press enter here.", > "Enter current password for root (enter for none): ", > "OK, successfully used password, moving on...", > "Setting the root password ensures that nobody can log into the MariaDB", > "root user without the proper authorisation.", > "Set root password? [Y/n] y", > "New password: ", > "Re-enter new password: ", > "Password updated successfully!", > "Reloading privilege tables..", > " ... Success!", > "By default, a MariaDB installation has an anonymous user, allowing anyone", > "to log into MariaDB without having to have a user account created for", > "them. This is intended only for testing, and to make the installation", > "go a bit smoother. 
You should remove them before moving into a", > "production environment.", > "Remove anonymous users? [Y/n] y", > "Normally, root should only be allowed to connect from 'localhost'. This", > "ensures that someone cannot guess at the root password from the network.", > "Disallow root login remotely? [Y/n] n", > " ... skipping.", > "By default, MariaDB comes with a database named 'test' that anyone can", > "access. This is also intended only for testing, and should be removed", > "before moving into a production environment.", > "Remove test database and access to it? [Y/n] y", > " - Dropping test database...", > " - Removing privileges on test database...", > "Reloading the privilege tables will ensure that all changes made so far", > "will take effect immediately.", > "Reload privilege tables now? [Y/n] y", > "Cleaning up...", > "All done! If you've completed all of the above steps, your MariaDB", > "installation should now be secure.", > "Thanks for using MariaDB!", > "180714 00:56:45 mysqld_safe mysqld from pid file /var/lib/mysql/mariadb.pid ended", > "180714 00:56:46 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.", > "180714 00:56:46 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql", > "mysqld is alive", > "180714 00:56:49 mysqld_safe mysqld from pid file /var/lib/mysql/mariadb.pid ended", > "stderr: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", > "INFO:__main__:Validating config file", > "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", > "INFO:__main__:Copying service configuration files", > "INFO:__main__:Copying /dev/null to /etc/libqb/force-filesystem-sockets", > "INFO:__main__:Setting permission for /etc/libqb/force-filesystem-sockets", > "INFO:__main__:Deleting /etc/my.cnf.d/galera.cnf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/my.cnf.d/galera.cnf to /etc/my.cnf.d/galera.cnf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/sysconfig/clustercheck to 
/etc/sysconfig/clustercheck", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/root/.my.cnf to /root/.my.cnf", > "INFO:__main__:Writing out command to execute", > "2018-07-14 0:56:30 140327274404032 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-07-14 0:56:30 140327274404032 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 42 ...", > "2018-07-14 0:56:34 140368734218432 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-07-14 0:56:34 140368734218432 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 71 ...", > "2018-07-14 0:56:38 139947843365056 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-07-14 0:56:38 139947843365056 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 101 ...", > "/usr/bin/mysqld_safe: line 755: ulimit: -1: invalid option", > "ulimit: usage: ulimit [-SHacdefilmnpqrstuvx] [limit]" > ] >} >2018-07-13 20:56:51,495 p=5867 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-07-13 20:56:51,520 p=5867 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-07-13 20:56:51,546 p=5867 u=mistral | TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks1.json exists] ******** >2018-07-13 20:56:51,547 p=5867 u=mistral | Friday 13 July 2018 20:56:51 -0400 (0:00:00.200) 0:10:14.735 *********** >2018-07-13 20:56:52,059 p=5867 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:56:52,087 p=5867 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-07-13 20:56:52,142 p=5867 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": 
false}} >2018-07-13 20:56:52,167 p=5867 u=mistral | TASK [Run docker-puppet tasks (bootstrap tasks) for step 1] ******************** >2018-07-13 20:56:52,167 p=5867 u=mistral | Friday 13 July 2018 20:56:52 -0400 (0:00:00.620) 0:10:15.355 *********** >2018-07-13 20:56:52,198 p=5867 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:56:52,224 p=5867 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:56:52,237 p=5867 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-07-13 20:56:52,261 p=5867 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (bootstrap tasks) for step 1] *** >2018-07-13 20:56:52,261 p=5867 u=mistral | Friday 13 July 2018 20:56:52 -0400 (0:00:00.094) 0:10:15.449 *********** >2018-07-13 20:56:52,295 p=5867 u=mistral | skipping: [controller-0] => {"skip_reason": "Conditional result was False"} >2018-07-13 20:56:52,363 p=5867 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} >2018-07-13 20:56:52,376 p=5867 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-07-13 20:56:52,382 p=5867 u=mistral | PLAY [External deployment step 2] ********************************************** >2018-07-13 20:56:52,405 p=5867 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-07-13 20:56:52,406 p=5867 u=mistral | Friday 13 July 2018 20:56:52 -0400 (0:00:00.144) 0:10:15.594 *********** >2018-07-13 20:56:52,424 p=5867 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:56:52,443 p=5867 u=mistral | 
TASK [create ceph-ansible temp dirs] ******************************************* >2018-07-13 20:56:52,443 p=5867 u=mistral | Friday 13 July 2018 20:56:52 -0400 (0:00:00.037) 0:10:15.631 *********** >2018-07-13 20:56:52,469 p=5867 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/group_vars", "skip_reason": "Conditional result was False"} >2018-07-13 20:56:52,472 p=5867 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/host_vars", "skip_reason": "Conditional result was False"} >2018-07-13 20:56:52,477 p=5867 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/fetch_dir) => {"changed": false, "item": "/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"} >2018-07-13 20:56:52,497 p=5867 u=mistral | TASK [generate inventory] ****************************************************** >2018-07-13 20:56:52,498 p=5867 u=mistral | Friday 13 July 2018 20:56:52 -0400 (0:00:00.054) 0:10:15.686 *********** >2018-07-13 20:56:52,518 p=5867 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:56:52,537 p=5867 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-07-13 20:56:52,538 p=5867 u=mistral | Friday 13 July 2018 20:56:52 -0400 (0:00:00.039) 0:10:15.726 *********** >2018-07-13 20:56:52,561 p=5867 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:56:52,580 p=5867 u=mistral | TASK [generate ceph-ansible group vars all] 
************************************ >2018-07-13 20:56:52,580 p=5867 u=mistral | Friday 13 July 2018 20:56:52 -0400 (0:00:00.042) 0:10:15.768 *********** >2018-07-13 20:56:52,601 p=5867 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:56:52,620 p=5867 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* >2018-07-13 20:56:52,620 p=5867 u=mistral | Friday 13 July 2018 20:56:52 -0400 (0:00:00.040) 0:10:15.808 *********** >2018-07-13 20:56:52,639 p=5867 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:56:52,658 p=5867 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-07-13 20:56:52,658 p=5867 u=mistral | Friday 13 July 2018 20:56:52 -0400 (0:00:00.038) 0:10:15.846 *********** >2018-07-13 20:56:52,678 p=5867 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:56:52,699 p=5867 u=mistral | TASK [generate nodes-uuid data file] ******************************************* >2018-07-13 20:56:52,699 p=5867 u=mistral | Friday 13 July 2018 20:56:52 -0400 (0:00:00.040) 0:10:15.887 *********** >2018-07-13 20:56:52,719 p=5867 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:56:52,737 p=5867 u=mistral | TASK [generate nodes-uuid playbook] ******************************************** >2018-07-13 20:56:52,737 p=5867 u=mistral | Friday 13 July 2018 20:56:52 -0400 (0:00:00.038) 0:10:15.925 *********** >2018-07-13 20:56:52,757 p=5867 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-07-13 20:56:52,776 p=5867 u=mistral | TASK [run nodes-uuid] ********************************************************** >2018-07-13 20:56:52,776 p=5867 u=mistral | Friday 13 July 2018 20:56:52 
-0400 (0:00:00.039) 0:10:15.964 *********** >2018-07-13 20:56:54,284 p=5867 u=mistral | fatal: [undercloud]: FAILED! => {"changed": true, "cmd": "ANSIBLE_LOG_PATH=\"/var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/nodes_uuid_command.log\" ANSIBLE_SSH_RETRIES=3 ANSIBLE_HOST_KEY_CHECKING=False DEFAULT_FORKS=25 ansible-playbook --private-key /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ssh_private_key -i /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/inventory.yml /var/lib/mistral/dab7ef10-b96d-44c4-a639-4270c8a6d019/ceph-ansible/nodes_uuid_playbook.yml", "delta": "0:00:01.307249", "end": "2018-07-13 20:56:54.254304", "msg": "non-zero return code", "rc": 4, "start": "2018-07-13 20:56:52.947055", "stderr": "", "stderr_lines": [], "stdout": "\nPLAY [all] *********************************************************************\n\nTASK [set nodes data] **********************************************************\nFriday 13 July 2018 20:56:53 -0400 (0:00:00.068) 0:00:00.069 *********** \nok: [compute-0]\nok: [ceph-0]\nok: [controller-0]\n\nTASK [register machine id] *****************************************************\nFriday 13 July 2018 20:56:53 -0400 (0:00:00.066) 0:00:00.135 *********** \nchanged: [ceph-0]\nchanged: [controller-0]\nchanged: [compute-0]\n\nTASK [generate host vars from nodes data] **************************************\nFriday 13 July 2018 20:56:54 -0400 (0:00:00.510) 0:00:00.645 *********** \nfatal: [compute-0]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\". 
Failed command was: ( umask 77 && mkdir -p \\\"` echo /home/mistral/.ansible/tmp/ansible-tmp-1531529814.17-222746662763579 `\\\" && echo ansible-tmp-1531529814.17-222746662763579=\\\"` echo /home/mistral/.ansible/tmp/ansible-tmp-1531529814.17-222746662763579 `\\\" ), exited with result 1\", \"unreachable\": true}\nfatal: [ceph-0]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\". Failed command was: ( umask 77 && mkdir -p \\\"` echo /home/mistral/.ansible/tmp/ansible-tmp-1531529814.18-201209661451782 `\\\" && echo ansible-tmp-1531529814.18-201209661451782=\\\"` echo /home/mistral/.ansible/tmp/ansible-tmp-1531529814.18-201209661451782 `\\\" ), exited with result 1\", \"unreachable\": true}\nfatal: [controller-0]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\". 
Failed command was: ( umask 77 && mkdir -p \\\"` echo /home/mistral/.ansible/tmp/ansible-tmp-1531529814.19-31199768599421 `\\\" && echo ansible-tmp-1531529814.19-31199768599421=\\\"` echo /home/mistral/.ansible/tmp/ansible-tmp-1531529814.19-31199768599421 `\\\" ), exited with result 1\", \"unreachable\": true}\n\nPLAY RECAP *********************************************************************\nceph-0 : ok=2 changed=1 unreachable=1 failed=0 \ncompute-0 : ok=2 changed=1 unreachable=1 failed=0 \ncontroller-0 : ok=2 changed=1 unreachable=1 failed=0 \n\nFriday 13 July 2018 20:56:54 -0400 (0:00:00.068) 0:00:00.713 *********** \n=============================================================================== ", "stdout_lines": ["", "PLAY [all] *********************************************************************", "", "TASK [set nodes data] **********************************************************", "Friday 13 July 2018 20:56:53 -0400 (0:00:00.068) 0:00:00.069 *********** ", "ok: [compute-0]", "ok: [ceph-0]", "ok: [controller-0]", "", "TASK [register machine id] *****************************************************", "Friday 13 July 2018 20:56:53 -0400 (0:00:00.066) 0:00:00.135 *********** ", "changed: [ceph-0]", "changed: [controller-0]", "changed: [compute-0]", "", "TASK [generate host vars from nodes data] **************************************", "Friday 13 July 2018 20:56:54 -0400 (0:00:00.510) 0:00:00.645 *********** ", "fatal: [compute-0]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\". 
Failed command was: ( umask 77 && mkdir -p \\\"` echo /home/mistral/.ansible/tmp/ansible-tmp-1531529814.17-222746662763579 `\\\" && echo ansible-tmp-1531529814.17-222746662763579=\\\"` echo /home/mistral/.ansible/tmp/ansible-tmp-1531529814.17-222746662763579 `\\\" ), exited with result 1\", \"unreachable\": true}", "fatal: [ceph-0]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\". Failed command was: ( umask 77 && mkdir -p \\\"` echo /home/mistral/.ansible/tmp/ansible-tmp-1531529814.18-201209661451782 `\\\" && echo ansible-tmp-1531529814.18-201209661451782=\\\"` echo /home/mistral/.ansible/tmp/ansible-tmp-1531529814.18-201209661451782 `\\\" ), exited with result 1\", \"unreachable\": true}", "fatal: [controller-0]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\". 
Failed command was: ( umask 77 && mkdir -p \\\"` echo /home/mistral/.ansible/tmp/ansible-tmp-1531529814.19-31199768599421 `\\\" && echo ansible-tmp-1531529814.19-31199768599421=\\\"` echo /home/mistral/.ansible/tmp/ansible-tmp-1531529814.19-31199768599421 `\\\" ), exited with result 1\", \"unreachable\": true}", "", "PLAY RECAP *********************************************************************", "ceph-0 : ok=2 changed=1 unreachable=1 failed=0 ", "compute-0 : ok=2 changed=1 unreachable=1 failed=0 ", "controller-0 : ok=2 changed=1 unreachable=1 failed=0 ", "", "Friday 13 July 2018 20:56:54 -0400 (0:00:00.068) 0:00:00.713 *********** ", "=============================================================================== "]} >2018-07-13 20:56:54,285 p=5867 u=mistral | NO MORE HOSTS LEFT ************************************************************* >2018-07-13 20:56:54,285 p=5867 u=mistral | PLAY RECAP ********************************************************************* >2018-07-13 20:56:54,285 p=5867 u=mistral | ceph-0 : ok=89 changed=42 unreachable=0 failed=0 >2018-07-13 20:56:54,285 p=5867 u=mistral | compute-0 : ok=107 changed=44 unreachable=0 failed=0 >2018-07-13 20:56:54,286 p=5867 u=mistral | controller-0 : ok=147 changed=45 unreachable=0 failed=0 >2018-07-13 20:56:54,286 p=5867 u=mistral | undercloud : ok=19 changed=10 unreachable=0 failed=1 >2018-07-13 20:56:54,286 p=5867 u=mistral | Friday 13 July 2018 20:56:54 -0400 (0:00:01.509) 0:10:17.474 *********** >2018-07-13 20:56:54,286 p=5867 u=mistral | ===============================================================================