Red Hat Bugzilla – Attachment 1453680 Details for Bug 1594169: Unexpected Exception, this is probably a bug: cannot import name to_bytes
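For context on the exception in the bug summary: in Ansible 2.x, `to_bytes` is provided by `ansible.module_utils._text`, and "cannot import name to_bytes" typically means code is importing it from a module that no longer exports it. A minimal compatibility sketch, assuming an Ansible 2.x environment; the fallback implementation below is illustrative only, not Ansible's actual code:

```python
# Import to_bytes from its Ansible 2.x location; fall back to a minimal
# stand-in so this sketch also runs where Ansible is not installed.
try:
    from ansible.module_utils._text import to_bytes
except ImportError:
    def to_bytes(obj, encoding="utf-8", errors="strict"):
        # Illustrative stand-in: pass bytes through, encode everything else.
        if isinstance(obj, bytes):
            return obj
        return str(obj).encode(encoding, errors)

print(to_bytes("ansible.log"))  # b'ansible.log'
```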
/var/lib/mistral/xyz/ansible.log

Description: /var/lib/mistral/xyz/ansible.log
Filename: ansible.log
MIME Type: text/plain
Creator: Filip Hubík
Created: 2018-06-22 10:13:54 UTC
Size: 2.31 MB
2018-06-21 07:17:03,743 p=23396 u=mistral | Using /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ansible.cfg as config file
2018-06-21 07:17:04,370 p=23396 u=mistral | PLAY [Gather facts from undercloud] ********************************************
2018-06-21 07:17:04,380 p=23396 u=mistral | TASK [Gathering Facts] *********************************************************
2018-06-21 07:17:05,122 p=23396 u=mistral | ok: [undercloud]
2018-06-21 07:17:05,137 p=23396 u=mistral | PLAY [Gather facts from overcloud] *********************************************
2018-06-21 07:17:05,145 p=23396 u=mistral | TASK [Gathering Facts] *********************************************************
2018-06-21 07:17:08,382 p=23396 u=mistral | ok: [compute-0]
2018-06-21 07:17:08,549 p=23396 u=mistral | ok: [ceph-0]
2018-06-21 07:17:08,582 p=23396 u=mistral | ok: [controller-0]
2018-06-21 07:17:08,595 p=23396 u=mistral | PLAY [Load global variables] ***************************************************
2018-06-21 07:17:08,613 p=23396 u=mistral | TASK [include_vars] ************************************************************
2018-06-21 07:17:08,665 p=23396 u=mistral | ok: [undercloud] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.14,ceph-0.localdomain,ceph-0,172.17.3.14,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.16,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.10,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.10,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.10,ceph-0.external.localdomain,ceph-0.external,192.168.24.10,ceph-0.management.localdomain,ceph-0.management,192.168.24.10,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": "172.17.1.21,compute-0.localdomain,compute-0,172.17.3.10,compute-0.storage.localdomain,compute-0.storage,192.168.24.15,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.21,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.10,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.15,compute-0.external.localdomain,compute-0.external,192.168.24.15,compute-0.management.localdomain,compute-0.management,192.168.24.15,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.16,controller-0.localdomain,controller-0,172.17.3.18,controller-0.storage.localdomain,controller-0.storage,172.17.4.17,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.16,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.15,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.104,controller-0.external.localdomain,controller-0.external,192.168.24.8,controller-0.management.localdomain,controller-0.management,192.168.24.8,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/global_vars.yaml"], "changed": false}
2018-06-21 07:17:08,690 p=23396 u=mistral | ok: [controller-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.14,ceph-0.localdomain,ceph-0,172.17.3.14,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.16,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.10,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.10,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.10,ceph-0.external.localdomain,ceph-0.external,192.168.24.10,ceph-0.management.localdomain,ceph-0.management,192.168.24.10,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": "172.17.1.21,compute-0.localdomain,compute-0,172.17.3.10,compute-0.storage.localdomain,compute-0.storage,192.168.24.15,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.21,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.10,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.15,compute-0.external.localdomain,compute-0.external,192.168.24.15,compute-0.management.localdomain,compute-0.management,192.168.24.15,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.16,controller-0.localdomain,controller-0,172.17.3.18,controller-0.storage.localdomain,controller-0.storage,172.17.4.17,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.16,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.15,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.104,controller-0.external.localdomain,controller-0.external,192.168.24.8,controller-0.management.localdomain,controller-0.management,192.168.24.8,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/global_vars.yaml"], "changed": false}
2018-06-21 07:17:08,696 p=23396 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.14,ceph-0.localdomain,ceph-0,172.17.3.14,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.16,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.10,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.10,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.10,ceph-0.external.localdomain,ceph-0.external,192.168.24.10,ceph-0.management.localdomain,ceph-0.management,192.168.24.10,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": "172.17.1.21,compute-0.localdomain,compute-0,172.17.3.10,compute-0.storage.localdomain,compute-0.storage,192.168.24.15,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.21,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.10,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.15,compute-0.external.localdomain,compute-0.external,192.168.24.15,compute-0.management.localdomain,compute-0.management,192.168.24.15,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.16,controller-0.localdomain,controller-0,172.17.3.18,controller-0.storage.localdomain,controller-0.storage,172.17.4.17,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.16,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.15,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.104,controller-0.external.localdomain,controller-0.external,192.168.24.8,controller-0.management.localdomain,controller-0.management,192.168.24.8,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/global_vars.yaml"], "changed": false}
2018-06-21 07:17:08,721 p=23396 u=mistral | ok: [compute-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.14,ceph-0.localdomain,ceph-0,172.17.3.14,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.16,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.10,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.10,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.10,ceph-0.external.localdomain,ceph-0.external,192.168.24.10,ceph-0.management.localdomain,ceph-0.management,192.168.24.10,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": "172.17.1.21,compute-0.localdomain,compute-0,172.17.3.10,compute-0.storage.localdomain,compute-0.storage,192.168.24.15,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.21,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.10,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.15,compute-0.external.localdomain,compute-0.external,192.168.24.15,compute-0.management.localdomain,compute-0.management,192.168.24.15,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.16,controller-0.localdomain,controller-0,172.17.3.18,controller-0.storage.localdomain,controller-0.storage,172.17.4.17,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.16,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.15,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.104,controller-0.external.localdomain,controller-0.external,192.168.24.8,controller-0.management.localdomain,controller-0.management,192.168.24.8,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/global_vars.yaml"], "changed": false}
2018-06-21 07:17:08,728 p=23396 u=mistral | PLAY [Common roles for TripleO servers] ****************************************
2018-06-21 07:17:08,747 p=23396 u=mistral | TASK [tripleo-bootstrap : Deploy required packages to bootstrap TripleO] *******
2018-06-21 07:17:09,554 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.6.1-0.20180605100743.235e1ae.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]}
2018-06-21 07:17:09,562 p=23396 u=mistral | ok: [ceph-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.6.1-0.20180605100743.235e1ae.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]}
2018-06-21 07:17:09,569 p=23396 u=mistral | ok: [compute-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.6.1-0.20180605100743.235e1ae.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]}
2018-06-21 07:17:09,588 p=23396 u=mistral | TASK [tripleo-bootstrap : Create /var/lib/heat-config/tripleo-config-download directory for deployment data] ***
2018-06-21 07:17:10,049 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-06-21 07:17:10,059 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-06-21 07:17:10,062 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-06-21 07:17:10,080 p=23396 u=mistral | TASK [tripleo-ssh-known-hosts : Template /etc/ssh/ssh_known_hosts] *************
2018-06-21 07:17:11,034 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "148bcc6d120b10d9c07435ea5fe8ff756e5d665d", "dest": "/etc/ssh/ssh_known_hosts", "gid": 0, "group": "root", "md5sum": "8dde8323939fd6be5fd7f2ec050e19f7", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1906, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579830.14-126830087787020/source", "state": "file", "uid": 0}
2018-06-21 07:17:11,039 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "148bcc6d120b10d9c07435ea5fe8ff756e5d665d", "dest": "/etc/ssh/ssh_known_hosts", "gid": 0, "group": "root", "md5sum": "8dde8323939fd6be5fd7f2ec050e19f7", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1906, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579830.11-280383342925797/source", "state": "file", "uid": 0}
2018-06-21 07:17:11,042 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "148bcc6d120b10d9c07435ea5fe8ff756e5d665d", "dest": "/etc/ssh/ssh_known_hosts", "gid": 0, "group": "root", "md5sum": "8dde8323939fd6be5fd7f2ec050e19f7", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1906, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579830.16-240355875900604/source", "state": "file", "uid": 0}
2018-06-21 07:17:11,049 p=23396 u=mistral | PLAY [Overcloud deploy step tasks for step 0] **********************************
2018-06-21 07:17:11,072 p=23396 u=mistral | TASK [include_role] ************************************************************
2018-06-21 07:17:11,099 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-21 07:17:11,123 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-21 07:17:11,135 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-21 07:17:11,156 p=23396 u=mistral | TASK [include_role] ************************************************************
2018-06-21 07:17:11,183 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-21 07:17:11,207 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-21 07:17:11,218 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-21 07:17:11,239 p=23396 u=mistral | TASK [include_role] ************************************************************
2018-06-21 07:17:11,265 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-21 07:17:11,288 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-21 07:17:11,299 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-21 07:17:11,319 p=23396 u=mistral | TASK [include_role] ************************************************************
2018-06-21 07:17:11,344 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-21 07:17:11,368 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-21 07:17:11,382 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-21 07:17:11,404 p=23396 u=mistral | TASK [include_role] ************************************************************
2018-06-21 07:17:11,430 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-21 07:17:11,452 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-21 07:17:11,465 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-21 07:17:11,471 p=23396 u=mistral | PLAY [Server deployments] ******************************************************
2018-06-21 07:17:11,493 p=23396 u=mistral | TASK [include] *****************************************************************
2018-06-21 07:17:11,706 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/Controller/deployments.yaml for controller-0
2018-06-21 07:17:11,714 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/Controller/deployments.yaml for controller-0
2018-06-21 07:17:11,722 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/Controller/deployments.yaml for controller-0
2018-06-21 07:17:11,730 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/Controller/deployments.yaml for controller-0
2018-06-21 07:17:11,738 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/Controller/deployments.yaml for controller-0
2018-06-21 07:17:11,746 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/Controller/deployments.yaml for controller-0
2018-06-21 07:17:11,754 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/Controller/deployments.yaml for controller-0
2018-06-21 07:17:11,763 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/Controller/deployments.yaml for controller-0
2018-06-21 07:17:11,786 p=23396 u=mistral | TASK [Lookup deployment UUID] **************************************************
2018-06-21 07:17:11,843 p=23396 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "eb5b74de-ea3d-4884-9a34-a70e159ec7a5"}, "changed": false}
2018-06-21 07:17:11,867 p=23396 u=mistral | TASK [Render deployment file for NetworkDeployment] ****************************
2018-06-21 07:17:12,485 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "2dc06746e8fe0ff8e6d253693eb73eb03a07050a", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-eb5b74de-ea3d-4884-9a34-a70e159ec7a5", "gid": 0, "group": "root", "md5sum": "80880ce6885e90f4f5ccf7c8a9d0fec1", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 10195, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579831.92-205586271537076/source", "state": "file", "uid": 0}
2018-06-21 07:17:12,508 p=23396 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] *********************
2018-06-21 07:17:12,848 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}}
2018-06-21 07:17:12,877 p=23396 u=mistral | TASK [Check previous deployment rc for NetworkDeployment] **********************
2018-06-21 07:17:12,897 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-21 07:17:12,923 p=23396 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] ***
2018-06-21 07:17:12,941 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-21 07:17:12,965 p=23396 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************
2018-06-21 07:17:12,983 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-21 07:17:13,005 p=23396 u=mistral | TASK [Run deployment NetworkDeployment] ****************************************
2018-06-21 07:17:42,101 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.notify.json)", "delta": "0:00:28.593077", "end": "2018-06-21 07:17:42.488888", "rc": 0, "start": "2018-06-21 07:17:13.895811", "stderr": "[2018-06-21 07:17:13,922] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.json\n[2018-06-21 07:17:42,095] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.18/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.104/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.18/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.104/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/21 07:17:14 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/21 07:17:14 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/21 07:17:14 AM] [INFO] Not using any mapping file.\\n[2018/06/21 07:17:14 AM] [INFO] Finding active nics\\n[2018/06/21 07:17:14 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/21 07:17:14 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/21 07:17:14 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/21 07:17:14 AM] [INFO] lo is not an active nic\\n[2018/06/21 07:17:14 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/21 07:17:14 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/21 07:17:14 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/21 07:17:14 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/21 07:17:14 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/21 07:17:14 AM] [INFO] adding interface: eth0\\n[2018/06/21 07:17:14 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/21 07:17:14 AM] [INFO] adding bridge: br-isolated\\n[2018/06/21 07:17:14 AM] [INFO] adding interface: eth1\\n[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan20\\n[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan30\\n[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan40\\n[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan50\\n[2018/06/21 07:17:14 AM] [INFO] adding bridge: br-ex\\n[2018/06/21 07:17:14 AM] [INFO] adding custom route for interface: br-ex\\n[2018/06/21 07:17:14 AM] [INFO] adding interface: eth2\\n[2018/06/21 07:17:14 AM] [INFO] applying network configs...\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/21 07:17:15 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/21 07:17:15 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/21 07:17:15 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/21 07:17:15 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/21 07:17:15 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/21 07:17:15 AM] [INFO] running ifup on bridge: br-ex\\n[2018/06/21 07:17:19 AM] [INFO] running ifup on interface: eth2\\n[2018/06/21 07:17:19 AM] [INFO] running ifup on interface: eth1\\n[2018/06/21 07:17:20 AM] [INFO] running ifup on interface: eth0\\n[2018/06/21 07:17:24 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/21 07:17:28 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/21 07:17:32 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/21 07:17:36 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-06-21 07:17:42,095] (heat-config) [DEBUG] [2018-06-21 07:17:13,944] (heat-config) [INFO] interface_name=nic1\n[2018-06-21 07:17:13,944] (heat-config) [INFO] bridge_name=br-ex\n[2018-06-21 07:17:13,945] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307\n[2018-06-21 07:17:13,945] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-21 07:17:13,945] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-jqhkwynwtsyb-0-ybim2xtdm545-NetworkDeployment-mmf2k6d2yqmq-TripleOSoftwareDeployment-ktdyhjwhoklp/181a2572-6d7f-4029-a0c5-268d01163402\n[2018-06-21 07:17:13,945] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-21 07:17:13,945] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-21 07:17:13,945] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/eb5b74de-ea3d-4884-9a34-a70e159ec7a5\n[2018-06-21 07:17:42,090] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS\n\n[2018-06-21 07:17:42,091] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.18/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.104/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.18/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.104/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/06/21 07:17:14 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/06/21 07:17:14 AM] [INFO] Ifcfg net config provider created.\n[2018/06/21 07:17:14 AM] [INFO] Not using any mapping file.\n[2018/06/21 07:17:14 AM] [INFO] Finding active nics\n[2018/06/21 07:17:14 AM] [INFO] eth0 is an embedded active nic\n[2018/06/21 07:17:14 AM] [INFO] eth1 is an embedded active nic\n[2018/06/21 07:17:14 AM] [INFO] eth2 is an embedded active nic\n[2018/06/21 07:17:14 AM] [INFO] lo is not an active nic\n[2018/06/21 07:17:14 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/06/21 07:17:14 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/06/21 07:17:14 AM] [INFO] nic3 mapped to: eth2\n[2018/06/21 07:17:14 AM] [INFO] nic2 mapped to: eth1\n[2018/06/21 07:17:14 AM] [INFO] nic1 mapped to: eth0\n[2018/06/21 07:17:14 AM] [INFO] adding interface: eth0\n[2018/06/21 07:17:14 AM] [INFO] adding custom route for interface: eth0\n[2018/06/21 07:17:14 AM] [INFO] adding bridge: br-isolated\n[2018/06/21 07:17:14 AM] [INFO] adding interface: eth1\n[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan20\n[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan30\n[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan40\n[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan50\n[2018/06/21 07:17:14 AM] [INFO] adding bridge: br-ex\n[2018/06/21 07:17:14 AM] [INFO] adding custom route for interface: br-ex\n[2018/06/21 07:17:14 AM] [INFO] adding interface: eth2\n[2018/06/21 07:17:14 AM] [INFO] applying network configs...\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan20\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan40\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan50\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: eth2\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: eth1\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: eth0\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan50\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan20\n[2018/06/21 07:17:15 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/21 07:17:15 AM] [INFO] running ifdown on interface: vlan40\n[2018/06/21 07:17:15 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/06/21 07:17:15 AM] [INFO] running ifdown on bridge: br-ex\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/06/21 07:17:15 AM] [INFO]
Writing config /etc/sysconfig/network-scripts/route-vlan40\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\n[2018/06/21 07:17:15 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/06/21 07:17:15 AM] [INFO] running ifup on bridge: br-isolated\n[2018/06/21 07:17:15 AM] [INFO] running ifup on bridge: br-ex\n[2018/06/21 07:17:19 AM] [INFO] running ifup on interface: eth2\n[2018/06/21 07:17:19 AM] [INFO] running ifup on interface: eth1\n[2018/06/21 07:17:20 AM] [INFO] running ifup on interface: eth0\n[2018/06/21 07:17:24 AM] [INFO] running ifup on interface: vlan50\n[2018/06/21 07:17:28 AM] [INFO] running ifup on interface: vlan20\n[2018/06/21 07:17:32 AM] [INFO] running ifup on interface: vlan30\n[2018/06/21 07:17:36 AM] [INFO] running ifup on interface: vlan40\n[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan20\n[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan30\n[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan40\n[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan50\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' 
--type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.3\n++ '[' -n 192.168.24.3 ']'\n++ break\n++ echo 192.168.24.3\n+ local METADATA_IP=192.168.24.3\n+ '[' -n 192.168.24.3 ']'\n+ is_local_ip 192.168.24.3\n+ local IP_TO_CHECK=192.168.24.3\n+ ip -o a\n+ grep 'inet6\\? 192.168.24.3/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\n+ _ping=ping\n+ [[ 192.168.24.3 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.3\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-06-21 07:17:42,091] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/eb5b74de-ea3d-4884-9a34-a70e159ec7a5\n\n[2018-06-21 07:17:42,095] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-21 07:17:42,097] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.json < /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.notify.json\n[2018-06-21 07:17:42,481] (heat-config) [INFO] \n[2018-06-21 07:17:42,481] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:17:13,922] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.json", "[2018-06-21 07:17:42,095] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", 
\\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.18/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.104/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, 
{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.18/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.104/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/21 07:17:14 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/21 07:17:14 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/21 07:17:14 AM] [INFO] Not using any mapping file.\\n[2018/06/21 07:17:14 AM] [INFO] Finding active nics\\n[2018/06/21 07:17:14 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/21 07:17:14 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/21 07:17:14 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/21 07:17:14 AM] [INFO] lo is not an active nic\\n[2018/06/21 07:17:14 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/21 07:17:14 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/21 07:17:14 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/21 07:17:14 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/21 07:17:14 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/21 07:17:14 AM] [INFO] 
adding interface: eth0\\n[2018/06/21 07:17:14 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/21 07:17:14 AM] [INFO] adding bridge: br-isolated\\n[2018/06/21 07:17:14 AM] [INFO] adding interface: eth1\\n[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan20\\n[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan30\\n[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan40\\n[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan50\\n[2018/06/21 07:17:14 AM] [INFO] adding bridge: br-ex\\n[2018/06/21 07:17:14 AM] [INFO] adding custom route for interface: br-ex\\n[2018/06/21 07:17:14 AM] [INFO] adding interface: eth2\\n[2018/06/21 07:17:14 AM] [INFO] applying network configs...\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/21 07:17:15 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/21 07:17:15 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/21 07:17:15 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/21 07:17:15 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/21 07:17:15 AM] [INFO] Writing 
config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/21 07:17:15 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/21 07:17:15 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/21 07:17:15 AM] [INFO] running ifup on bridge: br-ex\\n[2018/06/21 07:17:19 AM] [INFO] running ifup on interface: eth2\\n[2018/06/21 07:17:19 AM] [INFO] running ifup on interface: eth1\\n[2018/06/21 07:17:20 AM] [INFO] running ifup on interface: eth0\\n[2018/06/21 07:17:24 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/21 07:17:28 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/21 07:17:32 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/21 07:17:36 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key 
os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-06-21 07:17:42,095] (heat-config) [DEBUG] [2018-06-21 07:17:13,944] (heat-config) [INFO] interface_name=nic1", "[2018-06-21 07:17:13,944] (heat-config) [INFO] bridge_name=br-ex", "[2018-06-21 07:17:13,945] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", "[2018-06-21 07:17:13,945] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-21 07:17:13,945] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-jqhkwynwtsyb-0-ybim2xtdm545-NetworkDeployment-mmf2k6d2yqmq-TripleOSoftwareDeployment-ktdyhjwhoklp/181a2572-6d7f-4029-a0c5-268d01163402", "[2018-06-21 07:17:13,945] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-21 07:17:13,945] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-21 07:17:13,945] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/eb5b74de-ea3d-4884-9a34-a70e159ec7a5", "[2018-06-21 07:17:42,090] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", "", "[2018-06-21 07:17:42,091] 
(heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.18/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.104/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.18/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.17/24\"}], \"type\": 
\"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.104/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/06/21 07:17:14 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/06/21 07:17:14 AM] [INFO] Ifcfg net config provider created.", "[2018/06/21 07:17:14 AM] [INFO] Not using any mapping file.", "[2018/06/21 07:17:14 AM] [INFO] Finding active nics", "[2018/06/21 07:17:14 AM] [INFO] eth0 is an embedded active nic", "[2018/06/21 07:17:14 AM] [INFO] eth1 is an embedded active nic", "[2018/06/21 07:17:14 AM] [INFO] eth2 is an embedded active nic", "[2018/06/21 07:17:14 AM] [INFO] lo is not an active nic", "[2018/06/21 07:17:14 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/06/21 07:17:14 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/06/21 07:17:14 AM] [INFO] nic3 mapped to: eth2", "[2018/06/21 07:17:14 AM] [INFO] nic2 mapped to: eth1", "[2018/06/21 07:17:14 AM] [INFO] nic1 mapped to: eth0", "[2018/06/21 07:17:14 AM] [INFO] adding interface: eth0", "[2018/06/21 07:17:14 AM] [INFO] adding custom route for interface: eth0", "[2018/06/21 07:17:14 AM] [INFO] adding bridge: br-isolated", "[2018/06/21 07:17:14 AM] [INFO] adding interface: eth1", "[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan20", "[2018/06/21 07:17:14 AM] [INFO] 
adding vlan: vlan30", "[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan40", "[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan50", "[2018/06/21 07:17:14 AM] [INFO] adding bridge: br-ex", "[2018/06/21 07:17:14 AM] [INFO] adding custom route for interface: br-ex", "[2018/06/21 07:17:14 AM] [INFO] adding interface: eth2", "[2018/06/21 07:17:14 AM] [INFO] applying network configs...", "[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan20", "[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan40", "[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan50", "[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: eth2", "[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: eth1", "[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: eth0", "[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan50", "[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan20", "[2018/06/21 07:17:15 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/21 07:17:15 AM] [INFO] running ifdown on interface: vlan40", "[2018/06/21 07:17:15 AM] [INFO] running ifdown on bridge: br-isolated", "[2018/06/21 07:17:15 AM] [INFO] running ifdown on bridge: br-ex", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", "[2018/06/21 07:17:15 AM] [INFO] Writing 
config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/06/21 07:17:15 AM] [INFO] running ifup on bridge: br-isolated", "[2018/06/21 07:17:15 AM] [INFO] running ifup on bridge: br-ex", "[2018/06/21 07:17:19 
AM] [INFO] running ifup on interface: eth2", "[2018/06/21 07:17:19 AM] [INFO] running ifup on interface: eth1", "[2018/06/21 07:17:20 AM] [INFO] running ifup on interface: eth0", "[2018/06/21 07:17:24 AM] [INFO] running ifup on interface: vlan50", "[2018/06/21 07:17:28 AM] [INFO] running ifup on interface: vlan20", "[2018/06/21 07:17:32 AM] [INFO] running ifup on interface: vlan30", "[2018/06/21 07:17:36 AM] [INFO] running ifup on interface: vlan40", "[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan20", "[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan30", "[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan40", "[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan50", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.3", "++ '[' -n 192.168.24.3 ']'", "++ break", "++ echo 192.168.24.3", "+ local METADATA_IP=192.168.24.3", "+ '[' -n 192.168.24.3 ']'", 
"+ is_local_ip 192.168.24.3", "+ local IP_TO_CHECK=192.168.24.3", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.3/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", "+ _ping=ping", "+ [[ 192.168.24.3 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.3", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-06-21 07:17:42,091] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/eb5b74de-ea3d-4884-9a34-a70e159ec7a5", "", "[2018-06-21 07:17:42,095] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-21 07:17:42,097] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.json < /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.notify.json", "[2018-06-21 07:17:42,481] (heat-config) [INFO] ", "[2018-06-21 07:17:42,481] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:17:42,127 p=23396 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-06-21 07:17:42,189 p=23396 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:17:13,922] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.json", > "[2018-06-21 07:17:42,095] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], 
\\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.18/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.104/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": 
[{\\\"ip_netmask\\\": \\\"172.17.1.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.18/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.104/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/21 07:17:14 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/21 07:17:14 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/21 07:17:14 AM] [INFO] Not using any mapping file.\\n[2018/06/21 07:17:14 AM] [INFO] Finding active nics\\n[2018/06/21 07:17:14 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/21 07:17:14 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/21 07:17:14 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/21 07:17:14 AM] [INFO] lo is not an active nic\\n[2018/06/21 07:17:14 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/21 07:17:14 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/21 07:17:14 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/21 07:17:14 AM] [INFO] nic2 mapped 
to: eth1\\n[2018/06/21 07:17:14 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/21 07:17:14 AM] [INFO] adding interface: eth0\\n[2018/06/21 07:17:14 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/21 07:17:14 AM] [INFO] adding bridge: br-isolated\\n[2018/06/21 07:17:14 AM] [INFO] adding interface: eth1\\n[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan20\\n[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan30\\n[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan40\\n[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan50\\n[2018/06/21 07:17:14 AM] [INFO] adding bridge: br-ex\\n[2018/06/21 07:17:14 AM] [INFO] adding custom route for interface: br-ex\\n[2018/06/21 07:17:14 AM] [INFO] adding interface: eth2\\n[2018/06/21 07:17:14 AM] [INFO] applying network configs...\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/21 07:17:15 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/21 07:17:15 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/21 07:17:15 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/21 07:17:15 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/21 07:17:15 AM] [INFO] 
Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/21 07:17:15 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/21 07:17:15 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/21 07:17:15 AM] [INFO] running ifup on bridge: br-ex\\n[2018/06/21 07:17:19 AM] [INFO] running ifup on interface: eth2\\n[2018/06/21 07:17:19 AM] [INFO] running ifup on interface: eth1\\n[2018/06/21 07:17:20 AM] [INFO] running ifup on interface: eth0\\n[2018/06/21 07:17:24 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/21 07:17:28 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/21 07:17:32 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/21 07:17:36 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-06-21 07:17:42,095] (heat-config) [DEBUG] [2018-06-21 07:17:13,944] (heat-config) [INFO] interface_name=nic1", > "[2018-06-21 07:17:13,944] (heat-config) [INFO] bridge_name=br-ex", > "[2018-06-21 07:17:13,945] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", > "[2018-06-21 07:17:13,945] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-21 07:17:13,945] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-jqhkwynwtsyb-0-ybim2xtdm545-NetworkDeployment-mmf2k6d2yqmq-TripleOSoftwareDeployment-ktdyhjwhoklp/181a2572-6d7f-4029-a0c5-268d01163402", > "[2018-06-21 07:17:13,945] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-21 07:17:13,945] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-21 07:17:13,945] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/eb5b74de-ea3d-4884-9a34-a70e159ec7a5", > "[2018-06-21 
07:17:42,090] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", > "", > "[2018-06-21 07:17:42,091] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.18/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.104/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": 
\"172.17.3.18/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.104/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/06/21 07:17:14 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/06/21 07:17:14 AM] [INFO] Ifcfg net config provider created.", > "[2018/06/21 07:17:14 AM] [INFO] Not using any mapping file.", > "[2018/06/21 07:17:14 AM] [INFO] Finding active nics", > "[2018/06/21 07:17:14 AM] [INFO] eth0 is an embedded active nic", > "[2018/06/21 07:17:14 AM] [INFO] eth1 is an embedded active nic", > "[2018/06/21 07:17:14 AM] [INFO] eth2 is an embedded active nic", > "[2018/06/21 07:17:14 AM] [INFO] lo is not an active nic", > "[2018/06/21 07:17:14 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/06/21 07:17:14 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/06/21 07:17:14 AM] [INFO] nic3 mapped to: eth2", > "[2018/06/21 07:17:14 AM] [INFO] nic2 mapped to: eth1", > "[2018/06/21 07:17:14 AM] [INFO] nic1 mapped to: eth0", > "[2018/06/21 07:17:14 AM] [INFO] adding interface: eth0", > "[2018/06/21 07:17:14 AM] [INFO] adding custom route for interface: eth0", > "[2018/06/21 07:17:14 AM] [INFO] 
adding bridge: br-isolated", > "[2018/06/21 07:17:14 AM] [INFO] adding interface: eth1", > "[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan20", > "[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan30", > "[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan40", > "[2018/06/21 07:17:14 AM] [INFO] adding vlan: vlan50", > "[2018/06/21 07:17:14 AM] [INFO] adding bridge: br-ex", > "[2018/06/21 07:17:14 AM] [INFO] adding custom route for interface: br-ex", > "[2018/06/21 07:17:14 AM] [INFO] adding interface: eth2", > "[2018/06/21 07:17:14 AM] [INFO] applying network configs...", > "[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan20", > "[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan40", > "[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan50", > "[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: eth2", > "[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: eth1", > "[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: eth0", > "[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan50", > "[2018/06/21 07:17:14 AM] [INFO] running ifdown on interface: vlan20", > "[2018/06/21 07:17:15 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/21 07:17:15 AM] [INFO] running ifdown on interface: vlan40", > "[2018/06/21 07:17:15 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/06/21 07:17:15 AM] [INFO] running ifdown on bridge: br-ex", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", 
> "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", > "[2018/06/21 07:17:15 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/06/21 07:17:15 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/06/21 07:17:15 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/06/21 07:17:15 AM] [INFO] running ifup on bridge: br-ex", > "[2018/06/21 07:17:19 AM] [INFO] running ifup on interface: eth2", > "[2018/06/21 07:17:19 AM] [INFO] running ifup on interface: eth1", > "[2018/06/21 07:17:20 AM] [INFO] running ifup on interface: eth0", > "[2018/06/21 07:17:24 AM] [INFO] running ifup on interface: vlan50", > "[2018/06/21 07:17:28 AM] [INFO] running ifup on interface: vlan20", > "[2018/06/21 07:17:32 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/21 07:17:36 AM] [INFO] running ifup on interface: vlan40", > "[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan20", > "[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan40", > "[2018/06/21 07:17:41 AM] [INFO] running ifup on interface: vlan50", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.3", > "++ '[' -n 192.168.24.3 ']'", > "++ break", > "++ echo 192.168.24.3", > "+ local METADATA_IP=192.168.24.3", > "+ '[' -n 192.168.24.3 ']'", > "+ is_local_ip 192.168.24.3", > "+ local IP_TO_CHECK=192.168.24.3", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.3/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", > "+ _ping=ping", > "+ [[ 192.168.24.3 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.3", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-06-21 07:17:42,091] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/eb5b74de-ea3d-4884-9a34-a70e159ec7a5", > "", > "[2018-06-21 07:17:42,095] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-21 07:17:42,097] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.json < /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.notify.json", > "[2018-06-21 07:17:42,481] (heat-config) [INFO] ", > "[2018-06-21 07:17:42,481] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:17:42,217 p=23396 u=mistral | TASK [Check-mode for Run deployment NetworkDeployment] ************************* >2018-06-21 07:17:42,234 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-06-21 07:17:42,259 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 07:17:42,309 p=23396 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "133f50ba-6071-42f7-9ef0-8985c2e1c247"}, "changed": false} >2018-06-21 07:17:42,334 p=23396 u=mistral | TASK [Render deployment file for ControllerUpgradeInitDeployment] ************** >2018-06-21 07:17:42,970 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "7c3fd82d078a69fa0d51f62eeacf9eebeb4297b5", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerUpgradeInitDeployment-133f50ba-6071-42f7-9ef0-8985c2e1c247", "gid": 0, "group": "root", "md5sum": "5ac28a00744b34d5d1dd2b66edb2d4a5", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1183, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579862.39-45907637516441/source", "state": "file", "uid": 0} >2018-06-21 07:17:42,995 p=23396 u=mistral | TASK [Check if deployed file exists for ControllerUpgradeInitDeployment] ******* >2018-06-21 07:17:43,329 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:17:43,351 p=23396 u=mistral | TASK [Check previous deployment rc for ControllerUpgradeInitDeployment] ******** >2018-06-21 07:17:43,367 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:43,389 p=23396 u=mistral | TASK [Remove deployed file for ControllerUpgradeInitDeployment when previous deployment failed] *** >2018-06-21 07:17:43,405 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:43,427 p=23396 u=mistral | TASK [Force remove deployed file for ControllerUpgradeInitDeployment] ********** >2018-06-21 07:17:43,442 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-06-21 07:17:43,463 p=23396 u=mistral | TASK [Run deployment ControllerUpgradeInitDeployment] ************************** >2018-06-21 07:17:44,267 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.notify.json)", "delta": "0:00:00.467029", "end": "2018-06-21 07:17:44.677907", "rc": 0, "start": "2018-06-21 07:17:44.210878", "stderr": "[2018-06-21 07:17:44,239] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.json\n[2018-06-21 07:17:44,267] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-21 07:17:44,268] (heat-config) [DEBUG] [2018-06-21 07:17:44,260] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307\n[2018-06-21 07:17:44,260] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-21 07:17:44,260] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-jqhkwynwtsyb-0-ybim2xtdm545-ControllerUpgradeInitDeployment-42lxkwjegpya/70a0c93b-86c4-41bc-b021-345deed4f629\n[2018-06-21 07:17:44,260] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-21 07:17:44,260] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-21 07:17:44,260] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/133f50ba-6071-42f7-9ef0-8985c2e1c247\n[2018-06-21 07:17:44,264] (heat-config) [INFO] \n[2018-06-21 07:17:44,264] (heat-config) [DEBUG] \n[2018-06-21 07:17:44,265] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/133f50ba-6071-42f7-9ef0-8985c2e1c247\n\n[2018-06-21 07:17:44,268] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-21 07:17:44,268] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.json < /var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.notify.json\n[2018-06-21 07:17:44,671] (heat-config) [INFO] \n[2018-06-21 07:17:44,671] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:17:44,239] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.json", "[2018-06-21 07:17:44,267] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-21 07:17:44,268] (heat-config) [DEBUG] [2018-06-21 07:17:44,260] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", "[2018-06-21 07:17:44,260] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-21 07:17:44,260] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-jqhkwynwtsyb-0-ybim2xtdm545-ControllerUpgradeInitDeployment-42lxkwjegpya/70a0c93b-86c4-41bc-b021-345deed4f629", "[2018-06-21 07:17:44,260] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-21 07:17:44,260] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-21 07:17:44,260] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/133f50ba-6071-42f7-9ef0-8985c2e1c247", "[2018-06-21 07:17:44,264] (heat-config) [INFO] ", "[2018-06-21 07:17:44,264] (heat-config) [DEBUG] ", "[2018-06-21 07:17:44,265] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/133f50ba-6071-42f7-9ef0-8985c2e1c247", "", "[2018-06-21 07:17:44,268] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-21 07:17:44,268] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.json < /var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.notify.json", "[2018-06-21 07:17:44,671] (heat-config) [INFO] ", "[2018-06-21 07:17:44,671] (heat-config) [DEBUG] "], "stdout": "", 
"stdout_lines": []} >2018-06-21 07:17:44,292 p=23396 u=mistral | TASK [Output for ControllerUpgradeInitDeployment] ****************************** >2018-06-21 07:17:44,342 p=23396 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:17:44,239] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.json", > "[2018-06-21 07:17:44,267] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-21 07:17:44,268] (heat-config) [DEBUG] [2018-06-21 07:17:44,260] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", > "[2018-06-21 07:17:44,260] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-21 07:17:44,260] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-jqhkwynwtsyb-0-ybim2xtdm545-ControllerUpgradeInitDeployment-42lxkwjegpya/70a0c93b-86c4-41bc-b021-345deed4f629", > "[2018-06-21 07:17:44,260] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-21 07:17:44,260] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-21 07:17:44,260] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/133f50ba-6071-42f7-9ef0-8985c2e1c247", > "[2018-06-21 07:17:44,264] (heat-config) [INFO] ", > "[2018-06-21 07:17:44,264] (heat-config) [DEBUG] ", > "[2018-06-21 07:17:44,265] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/133f50ba-6071-42f7-9ef0-8985c2e1c247", > "", > "[2018-06-21 07:17:44,268] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-21 07:17:44,268] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.json < /var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.notify.json", > "[2018-06-21 07:17:44,671] (heat-config) [INFO] ", > "[2018-06-21 07:17:44,671] 
(heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:17:44,365 p=23396 u=mistral | TASK [Check-mode for Run deployment ControllerUpgradeInitDeployment] *********** >2018-06-21 07:17:44,381 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:44,403 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 07:17:44,771 p=23396 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "ffdcae20-c091-4aa8-8d28-eb9b622150e8"}, "changed": false} >2018-06-21 07:17:44,795 p=23396 u=mistral | TASK [Render deployment file for ControllerDeployment] ************************* >2018-06-21 07:17:45,794 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "f1896696c26732b714fb0085f7c4a4f8c217ce07", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerDeployment-ffdcae20-c091-4aa8-8d28-eb9b622150e8", "gid": 0, "group": "root", "md5sum": "62803e48ee4903028e3ab4f7586d85bb", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 73456, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579865.2-111682020116466/source", "state": "file", "uid": 0} >2018-06-21 07:17:45,817 p=23396 u=mistral | TASK [Check if deployed file exists for ControllerDeployment] ****************** >2018-06-21 07:17:46,153 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:17:46,176 p=23396 u=mistral | TASK [Check previous deployment rc for ControllerDeployment] ******************* >2018-06-21 07:17:46,195 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:46,217 p=23396 u=mistral | TASK [Remove deployed file for ControllerDeployment when previous deployment failed] *** >2018-06-21 07:17:46,234 p=23396 u=mistral | skipping: 
[controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:46,255 p=23396 u=mistral | TASK [Force remove deployed file for ControllerDeployment] ********************* >2018-06-21 07:17:46,270 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:46,292 p=23396 u=mistral | TASK [Run deployment ControllerDeployment] ************************************* >2018-06-21 07:17:47,172 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/ffdcae20-c091-4aa8-8d28-eb9b622150e8.notify.json)", "delta": "0:00:00.546261", "end": "2018-06-21 07:17:47.580840", "rc": 0, "start": "2018-06-21 07:17:47.034579", "stderr": "[2018-06-21 07:17:47,064] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/ffdcae20-c091-4aa8-8d28-eb9b622150e8.json\n[2018-06-21 07:17:47,182] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-21 07:17:47,182] (heat-config) [DEBUG] \n[2018-06-21 07:17:47,182] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-21 07:17:47,183] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ffdcae20-c091-4aa8-8d28-eb9b622150e8.json < /var/lib/heat-config/deployed/ffdcae20-c091-4aa8-8d28-eb9b622150e8.notify.json\n[2018-06-21 07:17:47,573] (heat-config) [INFO] \n[2018-06-21 07:17:47,573] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:17:47,064] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/ffdcae20-c091-4aa8-8d28-eb9b622150e8.json", "[2018-06-21 07:17:47,182] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-21 07:17:47,182] (heat-config) [DEBUG] 
", "[2018-06-21 07:17:47,182] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-21 07:17:47,183] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ffdcae20-c091-4aa8-8d28-eb9b622150e8.json < /var/lib/heat-config/deployed/ffdcae20-c091-4aa8-8d28-eb9b622150e8.notify.json", "[2018-06-21 07:17:47,573] (heat-config) [INFO] ", "[2018-06-21 07:17:47,573] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:17:47,195 p=23396 u=mistral | TASK [Output for ControllerDeployment] ***************************************** >2018-06-21 07:17:47,282 p=23396 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:17:47,064] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/ffdcae20-c091-4aa8-8d28-eb9b622150e8.json", > "[2018-06-21 07:17:47,182] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-21 07:17:47,182] (heat-config) [DEBUG] ", > "[2018-06-21 07:17:47,182] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-21 07:17:47,183] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ffdcae20-c091-4aa8-8d28-eb9b622150e8.json < /var/lib/heat-config/deployed/ffdcae20-c091-4aa8-8d28-eb9b622150e8.notify.json", > "[2018-06-21 07:17:47,573] (heat-config) [INFO] ", > "[2018-06-21 07:17:47,573] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:17:47,305 p=23396 u=mistral | TASK [Check-mode for Run deployment ControllerDeployment] ********************** >2018-06-21 07:17:47,320 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:47,341 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 07:17:47,439 p=23396 u=mistral | 
ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "bf6fa48b-3a96-4cd5-a95c-e5254649671f"}, "changed": false} >2018-06-21 07:17:47,462 p=23396 u=mistral | TASK [Render deployment file for ControllerHostsDeployment] ******************** >2018-06-21 07:17:48,066 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "fb4d4f009e5f5f5ff1b1c65d3446b7a69e6a61a6", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerHostsDeployment-bf6fa48b-3a96-4cd5-a95c-e5254649671f", "gid": 0, "group": "root", "md5sum": "73ffeffc16a11044bd23dd7fe5242237", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4085, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579867.52-49229049967700/source", "state": "file", "uid": 0} >2018-06-21 07:17:48,089 p=23396 u=mistral | TASK [Check if deployed file exists for ControllerHostsDeployment] ************* >2018-06-21 07:17:48,470 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:17:48,495 p=23396 u=mistral | TASK [Check previous deployment rc for ControllerHostsDeployment] ************** >2018-06-21 07:17:48,512 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:48,536 p=23396 u=mistral | TASK [Remove deployed file for ControllerHostsDeployment when previous deployment failed] *** >2018-06-21 07:17:48,554 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:48,576 p=23396 u=mistral | TASK [Force remove deployed file for ControllerHostsDeployment] **************** >2018-06-21 07:17:48,593 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:48,615 p=23396 u=mistral | TASK [Run deployment ControllerHostsDeployment] ******************************** >2018-06-21 07:17:49,491 
p=23396 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.notify.json)", "delta": "0:00:00.464030", "end": "2018-06-21 07:17:49.873974", "rc": 0, "start": "2018-06-21 07:17:49.409944", "stderr": "[2018-06-21 07:17:49,433] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.json\n[2018-06-21 07:17:49,469] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain 
ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 
ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}\n[2018-06-21 07:17:49,469] (heat-config) [DEBUG] [2018-06-21 07:17:49,453] (heat-config) [INFO] hosts=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-06-21 07:17:49,453] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307\n[2018-06-21 07:17:49,453] (heat-config) [INFO] 
deploy_action=CREATE\n[2018-06-21 07:17:49,453] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-ktx2tirk4lao-0-luttmm6aujy7/ceb8fc96-fcec-460e-841f-0869d4795085\n[2018-06-21 07:17:49,453] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-21 07:17:49,453] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-21 07:17:49,453] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/bf6fa48b-3a96-4cd5-a95c-e5254649671f\n[2018-06-21 07:17:49,465] (heat-config) [INFO] \n[2018-06-21 07:17:49,465] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain 
ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain 
ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 
'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 
overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 
overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries 
/etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 
overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-06-21 07:17:49,466] (heat-config) [INFO] Completed 
/var/lib/heat-config/heat-config-script/bf6fa48b-3a96-4cd5-a95c-e5254649671f\n\n[2018-06-21 07:17:49,469] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-21 07:17:49,470] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.json < /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.notify.json\n[2018-06-21 07:17:49,867] (heat-config) [INFO] \n[2018-06-21 07:17:49,867] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:17:49,433] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.json", "[2018-06-21 07:17:49,469] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 
compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", "[2018-06-21 07:17:49,469] (heat-config) [DEBUG] [2018-06-21 07:17:49,453] (heat-config) [INFO] hosts=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-06-21 07:17:49,453] (heat-config) [INFO] 
deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", "[2018-06-21 07:17:49,453] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-21 07:17:49,453] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-ktx2tirk4lao-0-luttmm6aujy7/ceb8fc96-fcec-460e-841f-0869d4795085", "[2018-06-21 07:17:49,453] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-21 07:17:49,453] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-21 07:17:49,453] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/bf6fa48b-3a96-4cd5-a95c-e5254649671f", "[2018-06-21 07:17:49,465] (heat-config) [INFO] ", "[2018-06-21 07:17:49,465] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", 
"192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 
ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 
ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ 
'[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 
overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", 
"", "[2018-06-21 07:17:49,466] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/bf6fa48b-3a96-4cd5-a95c-e5254649671f", "", "[2018-06-21 07:17:49,469] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-21 07:17:49,470] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.json < /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.notify.json", "[2018-06-21 07:17:49,867] (heat-config) [INFO] ", "[2018-06-21 07:17:49,867] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:17:49,521 p=23396 u=mistral | TASK [Output for ControllerHostsDeployment] ************************************ >2018-06-21 07:17:49,642 p=23396 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:17:49,433] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.json", > "[2018-06-21 07:17:49,469] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain 
controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", > "[2018-06-21 07:17:49,469] (heat-config) [DEBUG] [2018-06-21 07:17:49,453] (heat-config) [INFO] hosts=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-06-21 07:17:49,453] 
(heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", > "[2018-06-21 07:17:49,453] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-21 07:17:49,453] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-ktx2tirk4lao-0-luttmm6aujy7/ceb8fc96-fcec-460e-841f-0869d4795085", > "[2018-06-21 07:17:49,453] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-21 07:17:49,453] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-21 07:17:49,453] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/bf6fa48b-3a96-4cd5-a95c-e5254649671f", > "[2018-06-21 07:17:49,465] (heat-config) [INFO] ", > "[2018-06-21 07:17:49,465] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > 
"192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 
compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > 
"192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 
compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 
compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain 
compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > 
"172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain 
compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > 
"172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain 
compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-06-21 07:17:49,466] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/bf6fa48b-3a96-4cd5-a95c-e5254649671f", > "", > "[2018-06-21 07:17:49,469] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-21 07:17:49,470] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.json < /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.notify.json", > "[2018-06-21 07:17:49,867] (heat-config) [INFO] ", > "[2018-06-21 07:17:49,867] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:17:49,674 p=23396 u=mistral | TASK [Check-mode for Run deployment ControllerHostsDeployment] ***************** >2018-06-21 07:17:49,689 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:49,709 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 
07:17:49,893 p=23396 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "4dcbe293-4462-47e3-8066-92bd73e4b71a"}, "changed": false} >2018-06-21 07:17:49,916 p=23396 u=mistral | TASK [Render deployment file for ControllerAllNodesDeployment] ***************** >2018-06-21 07:17:50,686 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "151c09e17f655e4c07639577ffaa1c4138978373", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerAllNodesDeployment-4dcbe293-4462-47e3-8066-92bd73e4b71a", "gid": 0, "group": "root", "md5sum": "6f2b758d383d00579c4feb57aca3d5dd", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19032, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579870.1-18429994359955/source", "state": "file", "uid": 0} >2018-06-21 07:17:50,710 p=23396 u=mistral | TASK [Check if deployed file exists for ControllerAllNodesDeployment] ********** >2018-06-21 07:17:51,053 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:17:51,078 p=23396 u=mistral | TASK [Check previous deployment rc for ControllerAllNodesDeployment] *********** >2018-06-21 07:17:51,096 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:51,120 p=23396 u=mistral | TASK [Remove deployed file for ControllerAllNodesDeployment when previous deployment failed] *** >2018-06-21 07:17:51,137 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:51,159 p=23396 u=mistral | TASK [Force remove deployed file for ControllerAllNodesDeployment] ************* >2018-06-21 07:17:51,174 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:51,196 p=23396 u=mistral | TASK [Run deployment ControllerAllNodesDeployment] 
***************************** >2018-06-21 07:17:52,098 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/4dcbe293-4462-47e3-8066-92bd73e4b71a.notify.json)", "delta": "0:00:00.558465", "end": "2018-06-21 07:17:52.505481", "rc": 0, "start": "2018-06-21 07:17:51.947016", "stderr": "[2018-06-21 07:17:51,972] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/4dcbe293-4462-47e3-8066-92bd73e4b71a.json\n[2018-06-21 07:17:52,087] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-21 07:17:52,087] (heat-config) [DEBUG] \n[2018-06-21 07:17:52,087] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-21 07:17:52,088] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/4dcbe293-4462-47e3-8066-92bd73e4b71a.json < /var/lib/heat-config/deployed/4dcbe293-4462-47e3-8066-92bd73e4b71a.notify.json\n[2018-06-21 07:17:52,498] (heat-config) [INFO] \n[2018-06-21 07:17:52,498] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:17:51,972] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/4dcbe293-4462-47e3-8066-92bd73e4b71a.json", "[2018-06-21 07:17:52,087] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-21 07:17:52,087] (heat-config) [DEBUG] ", "[2018-06-21 07:17:52,087] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-21 07:17:52,088] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/4dcbe293-4462-47e3-8066-92bd73e4b71a.json < /var/lib/heat-config/deployed/4dcbe293-4462-47e3-8066-92bd73e4b71a.notify.json", "[2018-06-21 07:17:52,498] (heat-config) [INFO] ", "[2018-06-21 07:17:52,498] (heat-config) [DEBUG] "], "stdout": 
"", "stdout_lines": []} >2018-06-21 07:17:52,120 p=23396 u=mistral | TASK [Output for ControllerAllNodesDeployment] ********************************* >2018-06-21 07:17:52,164 p=23396 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:17:51,972] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/4dcbe293-4462-47e3-8066-92bd73e4b71a.json", > "[2018-06-21 07:17:52,087] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-21 07:17:52,087] (heat-config) [DEBUG] ", > "[2018-06-21 07:17:52,087] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-21 07:17:52,088] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/4dcbe293-4462-47e3-8066-92bd73e4b71a.json < /var/lib/heat-config/deployed/4dcbe293-4462-47e3-8066-92bd73e4b71a.notify.json", > "[2018-06-21 07:17:52,498] (heat-config) [INFO] ", > "[2018-06-21 07:17:52,498] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:17:52,186 p=23396 u=mistral | TASK [Check-mode for Run deployment ControllerAllNodesDeployment] ************** >2018-06-21 07:17:52,199 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:52,220 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 07:17:52,273 p=23396 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "7b44e291-6842-4e34-b4b9-8ff041f059e6"}, "changed": false} >2018-06-21 07:17:52,296 p=23396 u=mistral | TASK [Render deployment file for ControllerAllNodesValidationDeployment] ******* >2018-06-21 07:17:52,917 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "45e953e264b7bed00914cc8acef6b02862222daa", "dest": 
"/var/lib/heat-config/tripleo-config-download/ControllerAllNodesValidationDeployment-7b44e291-6842-4e34-b4b9-8ff041f059e6", "gid": 0, "group": "root", "md5sum": "c7eec63068c96007f11b5f787036053a", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4940, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579872.35-171860592099416/source", "state": "file", "uid": 0} >2018-06-21 07:17:52,942 p=23396 u=mistral | TASK [Check if deployed file exists for ControllerAllNodesValidationDeployment] *** >2018-06-21 07:17:53,279 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:17:53,303 p=23396 u=mistral | TASK [Check previous deployment rc for ControllerAllNodesValidationDeployment] *** >2018-06-21 07:17:53,320 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:53,342 p=23396 u=mistral | TASK [Remove deployed file for ControllerAllNodesValidationDeployment when previous deployment failed] *** >2018-06-21 07:17:53,358 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:53,381 p=23396 u=mistral | TASK [Force remove deployed file for ControllerAllNodesValidationDeployment] *** >2018-06-21 07:17:53,397 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:53,418 p=23396 u=mistral | TASK [Run deployment ControllerAllNodesValidationDeployment] ******************* >2018-06-21 07:17:54,929 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.notify.json)", "delta": "0:00:01.165372", "end": "2018-06-21 07:17:55.335664", "rc": 0, "start": "2018-06-21 07:17:54.170292", "stderr": 
"[2018-06-21 07:17:54,192] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.json\n[2018-06-21 07:17:54,920] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\\nPing to 10.0.0.104 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.16 for local network 172.17.1.0/24.\\nPing to 172.17.1.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\\nPing to 172.17.2.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\\nPing to 172.17.3.18 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.17 for local network 172.17.4.0/24.\\nPing to 172.17.4.17 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\\nPing to 192.168.24.8 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-21 07:17:54,920] (heat-config) [DEBUG] [2018-06-21 07:17:54,213] (heat-config) [INFO] ping_test_ips=172.17.3.18 172.17.4.17 172.17.1.16 172.17.2.15 10.0.0.104 192.168.24.8\n[2018-06-21 07:17:54,213] (heat-config) [INFO] validate_fqdn=False\n[2018-06-21 07:17:54,213] (heat-config) [INFO] validate_ntp=True\n[2018-06-21 07:17:54,213] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307\n[2018-06-21 07:17:54,213] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-21 07:17:54,213] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-dkkb7eagalme-0-rq7gh364aglr/f7a544e2-8dcb-457c-9107-92464db5616d\n[2018-06-21 07:17:54,213] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-21 07:17:54,213] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-21 07:17:54,213] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/7b44e291-6842-4e34-b4b9-8ff041f059e6\n[2018-06-21 
07:17:54,916] (heat-config) [INFO] Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\nPing to 10.0.0.104 succeeded.\nSUCCESS\nTrying to ping 172.17.1.16 for local network 172.17.1.0/24.\nPing to 172.17.1.16 succeeded.\nSUCCESS\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\nPing to 172.17.2.15 succeeded.\nSUCCESS\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\nPing to 172.17.3.18 succeeded.\nSUCCESS\nTrying to ping 172.17.4.17 for local network 172.17.4.0/24.\nPing to 172.17.4.17 succeeded.\nSUCCESS\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\nPing to 192.168.24.8 succeeded.\nSUCCESS\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\nSUCCESS\n\n[2018-06-21 07:17:54,916] (heat-config) [DEBUG] \n[2018-06-21 07:17:54,916] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/7b44e291-6842-4e34-b4b9-8ff041f059e6\n\n[2018-06-21 07:17:54,920] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-21 07:17:54,920] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.json < /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.notify.json\n[2018-06-21 07:17:55,329] (heat-config) [INFO] \n[2018-06-21 07:17:55,330] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:17:54,192] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.json", "[2018-06-21 07:17:54,920] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\\nPing to 10.0.0.104 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.16 for local network 172.17.1.0/24.\\nPing to 172.17.1.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\\nPing to 172.17.2.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\\nPing to 172.17.3.18 
succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.17 for local network 172.17.4.0/24.\\nPing to 172.17.4.17 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\\nPing to 192.168.24.8 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-21 07:17:54,920] (heat-config) [DEBUG] [2018-06-21 07:17:54,213] (heat-config) [INFO] ping_test_ips=172.17.3.18 172.17.4.17 172.17.1.16 172.17.2.15 10.0.0.104 192.168.24.8", "[2018-06-21 07:17:54,213] (heat-config) [INFO] validate_fqdn=False", "[2018-06-21 07:17:54,213] (heat-config) [INFO] validate_ntp=True", "[2018-06-21 07:17:54,213] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", "[2018-06-21 07:17:54,213] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-21 07:17:54,213] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-dkkb7eagalme-0-rq7gh364aglr/f7a544e2-8dcb-457c-9107-92464db5616d", "[2018-06-21 07:17:54,213] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-21 07:17:54,213] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-21 07:17:54,213] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/7b44e291-6842-4e34-b4b9-8ff041f059e6", "[2018-06-21 07:17:54,916] (heat-config) [INFO] Trying to ping 10.0.0.104 for local network 10.0.0.0/24.", "Ping to 10.0.0.104 succeeded.", "SUCCESS", "Trying to ping 172.17.1.16 for local network 172.17.1.0/24.", "Ping to 172.17.1.16 succeeded.", "SUCCESS", "Trying to ping 172.17.2.15 for local network 172.17.2.0/24.", "Ping to 172.17.2.15 succeeded.", "SUCCESS", "Trying to ping 172.17.3.18 for local network 172.17.3.0/24.", "Ping to 172.17.3.18 succeeded.", "SUCCESS", "Trying to ping 172.17.4.17 for local network 172.17.4.0/24.", "Ping to 172.17.4.17 succeeded.", "SUCCESS", "Trying to ping 192.168.24.8 for local 
network 192.168.24.0/24.", "Ping to 192.168.24.8 succeeded.", "SUCCESS", "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", "SUCCESS", "", "[2018-06-21 07:17:54,916] (heat-config) [DEBUG] ", "[2018-06-21 07:17:54,916] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/7b44e291-6842-4e34-b4b9-8ff041f059e6", "", "[2018-06-21 07:17:54,920] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-21 07:17:54,920] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.json < /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.notify.json", "[2018-06-21 07:17:55,329] (heat-config) [INFO] ", "[2018-06-21 07:17:55,330] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:17:54,952 p=23396 u=mistral | TASK [Output for ControllerAllNodesValidationDeployment] *********************** >2018-06-21 07:17:55,000 p=23396 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:17:54,192] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.json", > "[2018-06-21 07:17:54,920] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\\nPing to 10.0.0.104 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.16 for local network 172.17.1.0/24.\\nPing to 172.17.1.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\\nPing to 172.17.2.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\\nPing to 172.17.3.18 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.17 for local network 172.17.4.0/24.\\nPing to 172.17.4.17 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\\nPing to 192.168.24.8 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping 
to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-21 07:17:54,920] (heat-config) [DEBUG] [2018-06-21 07:17:54,213] (heat-config) [INFO] ping_test_ips=172.17.3.18 172.17.4.17 172.17.1.16 172.17.2.15 10.0.0.104 192.168.24.8", > "[2018-06-21 07:17:54,213] (heat-config) [INFO] validate_fqdn=False", > "[2018-06-21 07:17:54,213] (heat-config) [INFO] validate_ntp=True", > "[2018-06-21 07:17:54,213] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", > "[2018-06-21 07:17:54,213] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-21 07:17:54,213] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-dkkb7eagalme-0-rq7gh364aglr/f7a544e2-8dcb-457c-9107-92464db5616d", > "[2018-06-21 07:17:54,213] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-21 07:17:54,213] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-21 07:17:54,213] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/7b44e291-6842-4e34-b4b9-8ff041f059e6", > "[2018-06-21 07:17:54,916] (heat-config) [INFO] Trying to ping 10.0.0.104 for local network 10.0.0.0/24.", > "Ping to 10.0.0.104 succeeded.", > "SUCCESS", > "Trying to ping 172.17.1.16 for local network 172.17.1.0/24.", > "Ping to 172.17.1.16 succeeded.", > "SUCCESS", > "Trying to ping 172.17.2.15 for local network 172.17.2.0/24.", > "Ping to 172.17.2.15 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.18 for local network 172.17.3.0/24.", > "Ping to 172.17.3.18 succeeded.", > "SUCCESS", > "Trying to ping 172.17.4.17 for local network 172.17.4.0/24.", > "Ping to 172.17.4.17 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.8 for local network 192.168.24.0/24.", > "Ping to 192.168.24.8 succeeded.", > "SUCCESS", > "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", > "SUCCESS", > "", > "[2018-06-21 07:17:54,916] (heat-config) [DEBUG] ", > "[2018-06-21 
07:17:54,916] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/7b44e291-6842-4e34-b4b9-8ff041f059e6", > "", > "[2018-06-21 07:17:54,920] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-21 07:17:54,920] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.json < /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.notify.json", > "[2018-06-21 07:17:55,329] (heat-config) [INFO] ", > "[2018-06-21 07:17:55,330] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:17:55,022 p=23396 u=mistral | TASK [Check-mode for Run deployment ControllerAllNodesValidationDeployment] **** >2018-06-21 07:17:55,036 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:55,056 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 07:17:55,149 p=23396 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "fa6e2ac8-f729-44b7-bffa-bd0a40a6403c"}, "changed": false} >2018-06-21 07:17:55,172 p=23396 u=mistral | TASK [Render deployment file for ControllerHostPrepDeployment] ***************** >2018-06-21 07:17:55,825 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "ded7fe2538da9129c456f951c4dfcc647398427f", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerHostPrepDeployment-fa6e2ac8-f729-44b7-bffa-bd0a40a6403c", "gid": 0, "group": "root", "md5sum": "8874b856f10b7e9057cf3fe5dbd5ecd0", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 45397, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579875.26-42514759982372/source", "state": "file", "uid": 0} >2018-06-21 07:17:55,848 p=23396 u=mistral | TASK [Check if deployed file exists for ControllerHostPrepDeployment] ********** >2018-06-21 07:17:56,180 
p=23396 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:17:56,204 p=23396 u=mistral | TASK [Check previous deployment rc for ControllerHostPrepDeployment] *********** >2018-06-21 07:17:56,220 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:56,241 p=23396 u=mistral | TASK [Remove deployed file for ControllerHostPrepDeployment when previous deployment failed] *** >2018-06-21 07:17:56,257 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:56,280 p=23396 u=mistral | TASK [Force remove deployed file for ControllerHostPrepDeployment] ************* >2018-06-21 07:17:56,298 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:17:56,320 p=23396 u=mistral | TASK [Run deployment ControllerHostPrepDeployment] ***************************** >2018-06-21 07:18:18,985 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.notify.json)", "delta": "0:00:22.305542", "end": "2018-06-21 07:18:19.376389", "rc": 0, "start": "2018-06-21 07:17:57.070847", "stderr": "[2018-06-21 07:17:57,096] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.json\n[2018-06-21 07:18:18,966] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/aodh)\\nchanged: 
[localhost] => (item=/var/log/containers/httpd/aodh-api)\\n\\nTASK [aodh logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"b6cf6dbe054f430c33d39c1a1a88593536d6e659\\\", \\\"msg\\\": \\\"Destination directory /var/log/aodh does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/cinder)\\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\\n\\nTASK [cinder logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\\\", \\\"msg\\\": \\\"Destination directory /var/log/cinder does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/cinder)\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\nok: [localhost] => (item=/var/lib/cinder)\\n\\nTASK [cinder_enable_iscsi_backend fact] ****************************************\\nok: [localhost]\\n\\nTASK [cinder create LVM volume group dd] ***************************************\\nskipping: [localhost]\\n\\nTASK [cinder create LVM volume group] ******************************************\\nskipping: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/glance)\\n\\nTASK [glance logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"e368ae3272baeb19e1113009ea5dae00e797c919\\\", \\\"msg\\\": \\\"Destination directory /var/log/glance does not exist\\\"}\\n...ignoring\\n\\nTASK [set_fact] ****************************************************************\\nskipping: [localhost]\\n\\nTASK [file] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [stat] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [copy] ********************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \\n\\nTASK [mount] *******************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \\n\\nTASK [Mount Node Staging Location] *********************************************\\nskipping: [localhost]\\n\\nTASK [Mount NFS on host] *******************************************************\\nskipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) \\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\\n\\nTASK [gnocchi logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\\\", \\\"msg\\\": \\\"Destination directory /var/log/gnocchi does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [get parameters] **********************************************************\\nok: [localhost]\\n\\nTASK [get DeployedSSLCertificatePath attributes] *******************************\\nskipping: [localhost]\\n\\nTASK [Assign bootstrap node] ***************************************************\\nskipping: [localhost]\\n\\nTASK [set is_bootstrap_node fact] **********************************************\\nskipping: [localhost]\\n\\nTASK [get haproxy status] ******************************************************\\nskipping: [localhost]\\n\\nTASK [get pacemaker status] ****************************************************\\nskipping: [localhost]\\n\\nTASK [get docker status] *******************************************************\\nskipping: [localhost]\\n\\nTASK [get container_id] ********************************************************\\nskipping: [localhost]\\n\\nTASK [get pcs resource name for haproxy container] *****************************\\nskipping: [localhost]\\n\\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\\nskipping: [localhost]\\n\\nTASK [push certificate content] ************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate ownership] ***********************************************\\nskipping: [localhost]\\n\\nTASK [reload haproxy if enabled] ***********************************************\\nskipping: [localhost]\\n\\nTASK [restart pacemaker resource for haproxy] **********************************\\nskipping: [localhost]\\n\\nTASK [set kolla_dir fact] ******************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate group on host via container] 
*****************************\\nskipping: [localhost]\\n\\nTASK [copy certificate from kolla directory to final location] *****************\\nskipping: [localhost]\\n\\nTASK [send restart order to haproxy container] *********************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/haproxy)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\\n\\nTASK [heat logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"d30ca3bda176434d31659e7379616dd162ddb246\\\", \\\"msg\\\": \\\"Destination directory /var/log/heat does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/horizon)\\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\\n\\nTASK [horizon logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ac324739761cb36b925d6e309482e26f7fe49b91\\\", \\\"msg\\\": \\\"Destination directory /var/log/horizon does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/keystone)\\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\\n\\nTASK [keystone logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"910be882addb6df99267e9bd303f6d9bf658562e\\\", \\\"msg\\\": \\\"Destination directory /var/log/keystone does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [memcached logs readme] ***************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/log/containers/mysql)\\nok: [localhost] => (item=/var/lib/mysql)\\n\\nTASK [mysql logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [create /var/lib/neutron] *************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/panko)\\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\\n\\nTASK [panko logs readme] *******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"903397bbd82e9b1f53087e3d7e8975d851857ce2\\\", \\\"msg\\\": \\\"Destination directory /var/log/panko does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/rabbitmq)\\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\\n\\nTASK [rabbitmq logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ee241f2199f264c9d0f384cf389fe255e8bf8a77\\\", \\\"msg\\\": \\\"Destination directory /var/log/rabbitmq does not exist\\\"}\\n...ignoring\\n\\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/redis)\\nchanged: [localhost] => (item=/var/log/containers/redis)\\nok: [localhost] => (item=/var/run/redis)\\n\\nTASK [redis logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create /var/lib/sahara] **************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent sahara logs directory] *********************************\\nchanged: [localhost]\\n\\nTASK [sahara logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"b0212a1177fa4a88502d17a1cbc31198040cf047\\\", \\\"msg\\\": \\\"Destination directory /var/log/sahara does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/srv/node)\\nchanged: [localhost] => (item=/var/log/swift)\\n\\nTASK [Create swift logging symlink] ********************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/srv/node)\\nok: [localhost] => (item=/var/log/swift)\\nok: [localhost] => (item=/var/log/containers)\\n\\nTASK [Set swift_use_local_disks fact] ******************************************\\nok: [localhost]\\n\\nTASK [Create Swift d1 directory if needed] *************************************\\nchanged: [localhost]\\n\\nTASK [swift logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [Format SwiftRawDisks] ****************************************************\\n\\nTASK [Mount devices defined in SwiftRawDisks] **********************************\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=60 changed=33 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-21 07:18:18,966] (heat-config) [DEBUG] [2018-06-21 07:17:57,119] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c_variables.json\n[2018-06-21 07:18:18,961] (heat-config) [INFO] Return code 
0\n[2018-06-21 07:18:18,961] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/aodh)\nchanged: [localhost] => (item=/var/log/containers/httpd/aodh-api)\n\nTASK [aodh logs readme] ********************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"b6cf6dbe054f430c33d39c1a1a88593536d6e659\", \"msg\": \"Destination directory /var/log/aodh does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost]\n\nTASK [ceilometer logs readme] **************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/cinder)\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\n\nTASK [cinder logs readme] ******************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\", \"msg\": \"Destination directory /var/log/cinder does not exist\"}\n...ignoring\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/lib/cinder)\nok: [localhost] => (item=/var/log/containers/cinder)\n\nTASK [ensure ceph configurations exist] ****************************************\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/log/containers/cinder)\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/log/containers/cinder)\nok: [localhost] => (item=/var/lib/cinder)\n\nTASK [cinder_enable_iscsi_backend fact] ****************************************\nok: [localhost]\n\nTASK [cinder create LVM volume group dd] ***************************************\nskipping: [localhost]\n\nTASK [cinder create LVM volume group] ******************************************\nskipping: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/glance)\n\nTASK [glance logs readme] ******************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"e368ae3272baeb19e1113009ea5dae00e797c919\", \"msg\": \"Destination directory /var/log/glance does not exist\"}\n...ignoring\n\nTASK [set_fact] ****************************************************************\nskipping: [localhost]\n\nTASK [file] ********************************************************************\nskipping: [localhost]\n\nTASK [stat] ********************************************************************\nskipping: [localhost]\n\nTASK [copy] ********************************************************************\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \n\nTASK [mount] *******************************************************************\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \n\nTASK [Mount Node Staging Location] *********************************************\nskipping: [localhost]\n\nTASK [Mount NFS on host] *******************************************************\nskipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) \n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\n\nTASK [gnocchi logs readme] *****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\", \"msg\": \"Destination directory /var/log/gnocchi does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [get parameters] **********************************************************\nok: [localhost]\n\nTASK [get DeployedSSLCertificatePath attributes] *******************************\nskipping: [localhost]\n\nTASK [Assign bootstrap node] ***************************************************\nskipping: [localhost]\n\nTASK [set is_bootstrap_node fact] **********************************************\nskipping: [localhost]\n\nTASK [get haproxy status] ******************************************************\nskipping: [localhost]\n\nTASK [get pacemaker status] ****************************************************\nskipping: [localhost]\n\nTASK [get docker status] *******************************************************\nskipping: [localhost]\n\nTASK [get container_id] ********************************************************\nskipping: [localhost]\n\nTASK [get pcs resource name for haproxy container] *****************************\nskipping: [localhost]\n\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\nskipping: [localhost]\n\nTASK [push certificate content] ************************************************\nskipping: [localhost]\n\nTASK [set certificate ownership] ***********************************************\nskipping: [localhost]\n\nTASK [reload haproxy if enabled] ***********************************************\nskipping: [localhost]\n\nTASK [restart pacemaker resource for haproxy] **********************************\nskipping: [localhost]\n\nTASK [set kolla_dir fact] ******************************************************\nskipping: [localhost]\n\nTASK [set certificate group on host via container] *****************************\nskipping: [localhost]\n\nTASK [copy certificate 
from kolla directory to final location] *****************\nskipping: [localhost]\n\nTASK [send restart order to haproxy container] *********************************\nskipping: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/lib/haproxy)\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/heat)\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\n\nTASK [heat logs readme] ********************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"d30ca3bda176434d31659e7379616dd162ddb246\", \"msg\": \"Destination directory /var/log/heat does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost] => (item=/var/log/containers/heat)\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/horizon)\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\n\nTASK [horizon logs readme] *****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ac324739761cb36b925d6e309482e26f7fe49b91\", \"msg\": \"Destination directory /var/log/horizon does not exist\"}\n...ignoring\n\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\nok: [localhost]\n\nTASK [Stop and disable iscsid.socket service] **********************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/keystone)\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\n\nTASK [keystone logs readme] ****************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"910be882addb6df99267e9bd303f6d9bf658562e\", \"msg\": \"Destination directory /var/log/keystone does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost]\n\nTASK [memcached logs readme] ***************************************************\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/log/containers/mysql)\nok: [localhost] => (item=/var/lib/mysql)\n\nTASK [mysql logs readme] *******************************************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/neutron)\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\n\nTASK [neutron logs readme] *****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost] => (item=/var/log/containers/neutron)\n\nTASK [create /var/lib/neutron] *************************************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/nova)\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\n\nTASK [nova logs readme] ********************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost] => (item=/var/log/containers/nova)\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/panko)\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\n\nTASK [panko logs readme] *******************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"903397bbd82e9b1f53087e3d7e8975d851857ce2\", \"msg\": \"Destination directory /var/log/panko does not exist\"}\n...ignoring\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/lib/rabbitmq)\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\n\nTASK [rabbitmq logs readme] ****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ee241f2199f264c9d0f384cf389fe255e8bf8a77\", \"msg\": \"Destination directory /var/log/rabbitmq does not exist\"}\n...ignoring\n\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/lib/redis)\nchanged: [localhost] => (item=/var/log/containers/redis)\nok: [localhost] => (item=/var/run/redis)\n\nTASK [redis logs readme] *******************************************************\nchanged: [localhost]\n\nTASK [create /var/lib/sahara] **************************************************\nchanged: [localhost]\n\nTASK [create persistent sahara logs directory] *********************************\nchanged: [localhost]\n\nTASK [sahara logs readme] ******************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"b0212a1177fa4a88502d17a1cbc31198040cf047\", \"msg\": \"Destination directory /var/log/sahara does not exist\"}\n...ignoring\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/srv/node)\nchanged: [localhost] => (item=/var/log/swift)\n\nTASK [Create swift logging symlink] ********************************************\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/srv/node)\nok: [localhost] => (item=/var/log/swift)\nok: [localhost] => (item=/var/log/containers)\n\nTASK [Set swift_use_local_disks fact] ******************************************\nok: [localhost]\n\nTASK [Create Swift d1 directory if needed] *************************************\nchanged: [localhost]\n\nTASK [swift logs readme] *******************************************************\nchanged: [localhost]\n\nTASK [Format SwiftRawDisks] 
****************************************************\n\nTASK [Mount devices defined in SwiftRawDisks] **********************************\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=60 changed=33 unreachable=0 failed=0 \n\n\n[2018-06-21 07:18:18,962] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c_playbook.yaml\n\n[2018-06-21 07:18:18,966] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-06-21 07:18:18,967] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.json < /var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.notify.json\n[2018-06-21 07:18:19,369] (heat-config) [INFO] \n[2018-06-21 07:18:19,369] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:17:57,096] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.json", "[2018-06-21 07:18:18,966] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/aodh)\\nchanged: [localhost] => (item=/var/log/containers/httpd/aodh-api)\\n\\nTASK [aodh logs readme] ********************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"b6cf6dbe054f430c33d39c1a1a88593536d6e659\\\", \\\"msg\\\": \\\"Destination directory /var/log/aodh does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/cinder)\\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\\n\\nTASK [cinder logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\\\", \\\"msg\\\": \\\"Destination directory /var/log/cinder does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/cinder)\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\nok: [localhost] => (item=/var/lib/cinder)\\n\\nTASK [cinder_enable_iscsi_backend fact] ****************************************\\nok: [localhost]\\n\\nTASK [cinder create LVM volume group dd] ***************************************\\nskipping: [localhost]\\n\\nTASK [cinder create LVM volume group] ******************************************\\nskipping: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/glance)\\n\\nTASK [glance logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"e368ae3272baeb19e1113009ea5dae00e797c919\\\", \\\"msg\\\": \\\"Destination directory /var/log/glance does not exist\\\"}\\n...ignoring\\n\\nTASK [set_fact] ****************************************************************\\nskipping: [localhost]\\n\\nTASK [file] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [stat] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [copy] ********************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \\n\\nTASK [mount] *******************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \\n\\nTASK [Mount Node Staging Location] *********************************************\\nskipping: [localhost]\\n\\nTASK [Mount NFS on host] *******************************************************\\nskipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) \\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\\n\\nTASK [gnocchi logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\\\", \\\"msg\\\": \\\"Destination directory /var/log/gnocchi does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [get parameters] **********************************************************\\nok: [localhost]\\n\\nTASK [get DeployedSSLCertificatePath attributes] *******************************\\nskipping: [localhost]\\n\\nTASK [Assign bootstrap node] ***************************************************\\nskipping: [localhost]\\n\\nTASK [set is_bootstrap_node fact] **********************************************\\nskipping: [localhost]\\n\\nTASK [get haproxy status] ******************************************************\\nskipping: [localhost]\\n\\nTASK [get pacemaker status] ****************************************************\\nskipping: [localhost]\\n\\nTASK [get docker status] *******************************************************\\nskipping: [localhost]\\n\\nTASK [get container_id] ********************************************************\\nskipping: [localhost]\\n\\nTASK [get pcs resource name for haproxy container] *****************************\\nskipping: [localhost]\\n\\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\\nskipping: [localhost]\\n\\nTASK [push certificate content] ************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate ownership] ***********************************************\\nskipping: [localhost]\\n\\nTASK [reload haproxy if enabled] ***********************************************\\nskipping: [localhost]\\n\\nTASK [restart pacemaker resource for haproxy] **********************************\\nskipping: [localhost]\\n\\nTASK [set kolla_dir fact] ******************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate group on host via container] 
*****************************\\nskipping: [localhost]\\n\\nTASK [copy certificate from kolla directory to final location] *****************\\nskipping: [localhost]\\n\\nTASK [send restart order to haproxy container] *********************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/haproxy)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\\n\\nTASK [heat logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"d30ca3bda176434d31659e7379616dd162ddb246\\\", \\\"msg\\\": \\\"Destination directory /var/log/heat does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/horizon)\\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\\n\\nTASK [horizon logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ac324739761cb36b925d6e309482e26f7fe49b91\\\", \\\"msg\\\": \\\"Destination directory /var/log/horizon does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/keystone)\\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\\n\\nTASK [keystone logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"910be882addb6df99267e9bd303f6d9bf658562e\\\", \\\"msg\\\": \\\"Destination directory /var/log/keystone does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [memcached logs readme] ***************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/log/containers/mysql)\\nok: [localhost] => (item=/var/lib/mysql)\\n\\nTASK [mysql logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [create /var/lib/neutron] *************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/panko)\\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\\n\\nTASK [panko logs readme] *******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"903397bbd82e9b1f53087e3d7e8975d851857ce2\\\", \\\"msg\\\": \\\"Destination directory /var/log/panko does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/rabbitmq)\\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\\n\\nTASK [rabbitmq logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ee241f2199f264c9d0f384cf389fe255e8bf8a77\\\", \\\"msg\\\": \\\"Destination directory /var/log/rabbitmq does not exist\\\"}\\n...ignoring\\n\\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/redis)\\nchanged: [localhost] => (item=/var/log/containers/redis)\\nok: [localhost] => (item=/var/run/redis)\\n\\nTASK [redis logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create /var/lib/sahara] **************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent sahara logs directory] *********************************\\nchanged: [localhost]\\n\\nTASK [sahara logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"b0212a1177fa4a88502d17a1cbc31198040cf047\\\", \\\"msg\\\": \\\"Destination directory /var/log/sahara does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/srv/node)\\nchanged: [localhost] => (item=/var/log/swift)\\n\\nTASK [Create swift logging symlink] ********************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/srv/node)\\nok: [localhost] => (item=/var/log/swift)\\nok: [localhost] => (item=/var/log/containers)\\n\\nTASK [Set swift_use_local_disks fact] ******************************************\\nok: [localhost]\\n\\nTASK [Create Swift d1 directory if needed] *************************************\\nchanged: [localhost]\\n\\nTASK [swift logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [Format SwiftRawDisks] ****************************************************\\n\\nTASK [Mount devices defined in SwiftRawDisks] **********************************\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=60 changed=33 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-21 07:18:18,966] (heat-config) [DEBUG] [2018-06-21 07:17:57,119] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c_variables.json", "[2018-06-21 07:18:18,961] (heat-config) [INFO] Return code 
0", "[2018-06-21 07:18:18,961] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/aodh)", "changed: [localhost] => (item=/var/log/containers/httpd/aodh-api)", "", "TASK [aodh logs readme] ********************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"b6cf6dbe054f430c33d39c1a1a88593536d6e659\", \"msg\": \"Destination directory /var/log/aodh does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost]", "", "TASK [ceilometer logs readme] **************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/cinder)", "changed: [localhost] => (item=/var/log/containers/httpd/cinder-api)", "", "TASK [cinder logs readme] ******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\", \"msg\": \"Destination directory /var/log/cinder does not exist\"}", "...ignoring", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/lib/cinder)", "ok: [localhost] => (item=/var/log/containers/cinder)", "", "TASK [ensure ceph configurations exist] ****************************************", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/log/containers/cinder)", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/log/containers/cinder)", "ok: [localhost] => (item=/var/lib/cinder)", "", "TASK [cinder_enable_iscsi_backend fact] ****************************************", "ok: [localhost]", "", "TASK [cinder create LVM volume group dd] ***************************************", "skipping: [localhost]", "", "TASK [cinder create LVM volume group] ******************************************", "skipping: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/glance)", "", "TASK [glance logs readme] ******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"e368ae3272baeb19e1113009ea5dae00e797c919\", \"msg\": \"Destination directory /var/log/glance does not exist\"}", "...ignoring", "", "TASK [set_fact] ****************************************************************", "skipping: [localhost]", "", "TASK [file] ********************************************************************", "skipping: [localhost]", "", "TASK [stat] ********************************************************************", "skipping: [localhost]", "", "TASK [copy] ********************************************************************", "skipping: [localhost] => (item={u'NETAPP_SHARE': u''}) ", "", "TASK [mount] *******************************************************************", "skipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) ", "", "TASK [Mount Node Staging Location] *********************************************", "skipping: [localhost]", "", "TASK [Mount NFS on host] *******************************************************", "skipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) ", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/gnocchi)", "changed: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)", "", "TASK [gnocchi logs readme] *****************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\", \"msg\": \"Destination directory /var/log/gnocchi does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [get parameters] **********************************************************", "ok: [localhost]", "", "TASK [get DeployedSSLCertificatePath attributes] *******************************", "skipping: [localhost]", "", "TASK [Assign bootstrap node] ***************************************************", "skipping: [localhost]", "", "TASK [set is_bootstrap_node fact] **********************************************", "skipping: [localhost]", "", "TASK [get haproxy status] ******************************************************", "skipping: [localhost]", "", "TASK [get pacemaker status] ****************************************************", "skipping: [localhost]", "", "TASK [get docker status] *******************************************************", "skipping: [localhost]", "", "TASK [get container_id] ********************************************************", "skipping: [localhost]", "", "TASK [get pcs resource name for haproxy container] *****************************", "skipping: [localhost]", "", "TASK [remove DeployedSSLCertificatePath if is dir] *****************************", "skipping: [localhost]", "", "TASK [push certificate content] ************************************************", "skipping: [localhost]", "", "TASK [set certificate ownership] ***********************************************", "skipping: [localhost]", "", "TASK [reload haproxy if enabled] ***********************************************", "skipping: [localhost]", "", "TASK [restart pacemaker resource for haproxy] **********************************", "skipping: [localhost]", "", "TASK [set kolla_dir fact] ******************************************************", "skipping: [localhost]", "", "TASK [set certificate group 
on host via container] *****************************", "skipping: [localhost]", "", "TASK [copy certificate from kolla directory to final location] *****************", "skipping: [localhost]", "", "TASK [send restart order to haproxy container] *********************************", "skipping: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/lib/haproxy)", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/heat)", "changed: [localhost] => (item=/var/log/containers/httpd/heat-api)", "", "TASK [heat logs readme] ********************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"d30ca3bda176434d31659e7379616dd162ddb246\", \"msg\": \"Destination directory /var/log/heat does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost] => (item=/var/log/containers/heat)", "changed: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/horizon)", "changed: [localhost] => (item=/var/log/containers/httpd/horizon)", "", "TASK [horizon logs readme] *****************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ac324739761cb36b925d6e309482e26f7fe49b91\", \"msg\": \"Destination directory /var/log/horizon does not exist\"}", "...ignoring", "", "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", "ok: [localhost]", "", "TASK [Stop and disable iscsid.socket service] **********************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/keystone)", "changed: [localhost] => (item=/var/log/containers/httpd/keystone)", "", "TASK [keystone logs readme] ****************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"910be882addb6df99267e9bd303f6d9bf658562e\", \"msg\": \"Destination directory /var/log/keystone does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost]", "", "TASK [memcached logs readme] ***************************************************", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/log/containers/mysql)", "ok: [localhost] => (item=/var/lib/mysql)", "", "TASK [mysql logs readme] *******************************************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/neutron)", "changed: [localhost] => (item=/var/log/containers/httpd/neutron-api)", "", "TASK [neutron logs readme] *****************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost] => (item=/var/log/containers/neutron)", "", "TASK [create /var/lib/neutron] *************************************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/nova)", "changed: [localhost] => (item=/var/log/containers/httpd/nova-api)", "", "TASK [nova logs readme] ********************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost] => (item=/var/log/containers/nova)", "changed: [localhost] => (item=/var/log/containers/httpd/nova-placement)", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/panko)", "changed: [localhost] => (item=/var/log/containers/httpd/panko-api)", "", "TASK [panko logs readme] *******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"903397bbd82e9b1f53087e3d7e8975d851857ce2\", \"msg\": \"Destination directory /var/log/panko does not exist\"}", "...ignoring", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/lib/rabbitmq)", "changed: [localhost] => (item=/var/log/containers/rabbitmq)", "", "TASK [rabbitmq logs readme] ****************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ee241f2199f264c9d0f384cf389fe255e8bf8a77\", \"msg\": \"Destination directory /var/log/rabbitmq does not exist\"}", "...ignoring", "", "TASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/lib/redis)", "changed: [localhost] => (item=/var/log/containers/redis)", "ok: [localhost] => (item=/var/run/redis)", "", "TASK [redis logs readme] *******************************************************", "changed: [localhost]", "", "TASK [create /var/lib/sahara] **************************************************", "changed: [localhost]", "", "TASK [create persistent sahara logs directory] *********************************", "changed: [localhost]", "", "TASK [sahara logs readme] ******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"b0212a1177fa4a88502d17a1cbc31198040cf047\", \"msg\": \"Destination directory /var/log/sahara does not exist\"}", "...ignoring", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/srv/node)", "changed: [localhost] => (item=/var/log/swift)", "", "TASK [Create swift logging symlink] ********************************************", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/srv/node)", "ok: [localhost] => (item=/var/log/swift)", "ok: [localhost] => (item=/var/log/containers)", "", "TASK [Set swift_use_local_disks fact] ******************************************", "ok: [localhost]", "", "TASK [Create Swift d1 directory if needed] *************************************", "changed: [localhost]", "", "TASK [swift logs readme] *******************************************************", "changed: [localhost]", "", "TASK [Format SwiftRawDisks] ****************************************************", "", "TASK [Mount devices defined in SwiftRawDisks] **********************************", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=60 changed=33 unreachable=0 failed=0 ", "", "", "[2018-06-21 07:18:18,962] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c_playbook.yaml", "", "[2018-06-21 07:18:18,966] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-06-21 07:18:18,967] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.json < 
/var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.notify.json", "[2018-06-21 07:18:19,369] (heat-config) [INFO] ", "[2018-06-21 07:18:19,369] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:18:19,011 p=23396 u=mistral | TASK [Output for ControllerHostPrepDeployment] ********************************* >2018-06-21 07:18:19,128 p=23396 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:17:57,096] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.json", > "[2018-06-21 07:18:18,966] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/aodh)\\nchanged: [localhost] => (item=/var/log/containers/httpd/aodh-api)\\n\\nTASK [aodh logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"b6cf6dbe054f430c33d39c1a1a88593536d6e659\\\", \\\"msg\\\": \\\"Destination directory /var/log/aodh does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/cinder)\\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\\n\\nTASK [cinder logs readme] ******************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\\\", \\\"msg\\\": \\\"Destination directory /var/log/cinder does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/cinder)\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\nok: [localhost] => (item=/var/lib/cinder)\\n\\nTASK [cinder_enable_iscsi_backend fact] ****************************************\\nok: [localhost]\\n\\nTASK [cinder create LVM volume group dd] ***************************************\\nskipping: [localhost]\\n\\nTASK [cinder create LVM volume group] ******************************************\\nskipping: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/glance)\\n\\nTASK [glance logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"e368ae3272baeb19e1113009ea5dae00e797c919\\\", \\\"msg\\\": \\\"Destination directory /var/log/glance does not exist\\\"}\\n...ignoring\\n\\nTASK [set_fact] ****************************************************************\\nskipping: [localhost]\\n\\nTASK [file] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [stat] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [copy] ********************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \\n\\nTASK [mount] *******************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \\n\\nTASK [Mount Node Staging Location] *********************************************\\nskipping: [localhost]\\n\\nTASK [Mount NFS on host] *******************************************************\\nskipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) \\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\\n\\nTASK [gnocchi logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\\\", \\\"msg\\\": \\\"Destination directory /var/log/gnocchi does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [get parameters] **********************************************************\\nok: [localhost]\\n\\nTASK [get DeployedSSLCertificatePath attributes] *******************************\\nskipping: [localhost]\\n\\nTASK [Assign bootstrap node] ***************************************************\\nskipping: [localhost]\\n\\nTASK [set is_bootstrap_node fact] **********************************************\\nskipping: [localhost]\\n\\nTASK [get haproxy status] ******************************************************\\nskipping: [localhost]\\n\\nTASK [get pacemaker status] ****************************************************\\nskipping: [localhost]\\n\\nTASK [get docker status] *******************************************************\\nskipping: [localhost]\\n\\nTASK [get container_id] ********************************************************\\nskipping: [localhost]\\n\\nTASK [get pcs resource name for haproxy container] *****************************\\nskipping: [localhost]\\n\\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\\nskipping: [localhost]\\n\\nTASK [push certificate content] ************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate ownership] ***********************************************\\nskipping: [localhost]\\n\\nTASK [reload haproxy if enabled] ***********************************************\\nskipping: [localhost]\\n\\nTASK [restart pacemaker resource for haproxy] **********************************\\nskipping: [localhost]\\n\\nTASK [set kolla_dir fact] ******************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate group on host via container] 
*****************************\\nskipping: [localhost]\\n\\nTASK [copy certificate from kolla directory to final location] *****************\\nskipping: [localhost]\\n\\nTASK [send restart order to haproxy container] *********************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/haproxy)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\\n\\nTASK [heat logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"d30ca3bda176434d31659e7379616dd162ddb246\\\", \\\"msg\\\": \\\"Destination directory /var/log/heat does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/horizon)\\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\\n\\nTASK [horizon logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ac324739761cb36b925d6e309482e26f7fe49b91\\\", \\\"msg\\\": \\\"Destination directory /var/log/horizon does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/keystone)\\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\\n\\nTASK [keystone logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"910be882addb6df99267e9bd303f6d9bf658562e\\\", \\\"msg\\\": \\\"Destination directory /var/log/keystone does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [memcached logs readme] ***************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/log/containers/mysql)\\nok: [localhost] => (item=/var/lib/mysql)\\n\\nTASK [mysql logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [create /var/lib/neutron] *************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/panko)\\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\\n\\nTASK [panko logs readme] *******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"903397bbd82e9b1f53087e3d7e8975d851857ce2\\\", \\\"msg\\\": \\\"Destination directory /var/log/panko does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/rabbitmq)\\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\\n\\nTASK [rabbitmq logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ee241f2199f264c9d0f384cf389fe255e8bf8a77\\\", \\\"msg\\\": \\\"Destination directory /var/log/rabbitmq does not exist\\\"}\\n...ignoring\\n\\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/redis)\\nchanged: [localhost] => (item=/var/log/containers/redis)\\nok: [localhost] => (item=/var/run/redis)\\n\\nTASK [redis logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create /var/lib/sahara] **************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent sahara logs directory] *********************************\\nchanged: [localhost]\\n\\nTASK [sahara logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"b0212a1177fa4a88502d17a1cbc31198040cf047\\\", \\\"msg\\\": \\\"Destination directory /var/log/sahara does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/srv/node)\\nchanged: [localhost] => (item=/var/log/swift)\\n\\nTASK [Create swift logging symlink] ********************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/srv/node)\\nok: [localhost] => (item=/var/log/swift)\\nok: [localhost] => (item=/var/log/containers)\\n\\nTASK [Set swift_use_local_disks fact] ******************************************\\nok: [localhost]\\n\\nTASK [Create Swift d1 directory if needed] *************************************\\nchanged: [localhost]\\n\\nTASK [swift logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [Format SwiftRawDisks] ****************************************************\\n\\nTASK [Mount devices defined in SwiftRawDisks] **********************************\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=60 changed=33 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-21 07:18:18,966] (heat-config) [DEBUG] [2018-06-21 07:17:57,119] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c_variables.json", > "[2018-06-21 07:18:18,961] (heat-config) [INFO] Return 
code 0", > "[2018-06-21 07:18:18,961] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/aodh)", > "changed: [localhost] => (item=/var/log/containers/httpd/aodh-api)", > "", > "TASK [aodh logs readme] ********************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"b6cf6dbe054f430c33d39c1a1a88593536d6e659\", \"msg\": \"Destination directory /var/log/aodh does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost]", > "", > "TASK [ceilometer logs readme] **************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/cinder)", > "changed: [localhost] => (item=/var/log/containers/httpd/cinder-api)", > "", > "TASK [cinder logs readme] ******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\", \"msg\": \"Destination directory /var/log/cinder does not exist\"}", > "...ignoring", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/lib/cinder)", > "ok: [localhost] => (item=/var/log/containers/cinder)", > "", > "TASK [ensure ceph configurations exist] ****************************************", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/log/containers/cinder)", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/log/containers/cinder)", > "ok: [localhost] => (item=/var/lib/cinder)", > "", > "TASK [cinder_enable_iscsi_backend fact] ****************************************", > "ok: [localhost]", > "", > "TASK [cinder create LVM volume group dd] ***************************************", > "skipping: [localhost]", > "", > "TASK [cinder create LVM volume group] ******************************************", > "skipping: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/glance)", > "", > "TASK [glance logs readme] ******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"e368ae3272baeb19e1113009ea5dae00e797c919\", \"msg\": \"Destination directory /var/log/glance does not exist\"}", > "...ignoring", > "", > "TASK [set_fact] ****************************************************************", > "skipping: [localhost]", > "", > "TASK [file] ********************************************************************", > "skipping: [localhost]", > "", > "TASK [stat] ********************************************************************", > "skipping: [localhost]", > "", > "TASK [copy] ********************************************************************", > "skipping: [localhost] => (item={u'NETAPP_SHARE': u''}) ", > "", > "TASK [mount] *******************************************************************", > "skipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) ", > "", > "TASK [Mount Node Staging Location] *********************************************", > "skipping: [localhost]", > "", > "TASK [Mount NFS on host] *******************************************************", > "skipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) ", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/gnocchi)", > "changed: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)", > "", > "TASK [gnocchi logs readme] *****************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\", \"msg\": \"Destination directory /var/log/gnocchi does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [get parameters] **********************************************************", > "ok: [localhost]", > "", > "TASK [get DeployedSSLCertificatePath attributes] *******************************", > "skipping: [localhost]", > "", > "TASK [Assign bootstrap node] ***************************************************", > "skipping: [localhost]", > "", > "TASK [set is_bootstrap_node fact] **********************************************", > "skipping: [localhost]", > "", > "TASK [get haproxy status] ******************************************************", > "skipping: [localhost]", > "", > "TASK [get pacemaker status] ****************************************************", > "skipping: [localhost]", > "", > "TASK [get docker status] *******************************************************", > "skipping: [localhost]", > "", > "TASK [get container_id] ********************************************************", > "skipping: [localhost]", > "", > "TASK [get pcs resource name for haproxy container] *****************************", > "skipping: [localhost]", > "", > "TASK [remove DeployedSSLCertificatePath if is dir] *****************************", > "skipping: [localhost]", > "", > "TASK [push certificate content] ************************************************", > "skipping: [localhost]", > "", > "TASK [set certificate ownership] ***********************************************", > "skipping: [localhost]", > "", > "TASK [reload haproxy if enabled] ***********************************************", > "skipping: [localhost]", > "", > "TASK [restart pacemaker resource for haproxy] **********************************", > "skipping: [localhost]", > "", > "TASK [set kolla_dir fact] 
******************************************************", > "skipping: [localhost]", > "", > "TASK [set certificate group on host via container] *****************************", > "skipping: [localhost]", > "", > "TASK [copy certificate from kolla directory to final location] *****************", > "skipping: [localhost]", > "", > "TASK [send restart order to haproxy container] *********************************", > "skipping: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/lib/haproxy)", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/heat)", > "changed: [localhost] => (item=/var/log/containers/httpd/heat-api)", > "", > "TASK [heat logs readme] ********************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"d30ca3bda176434d31659e7379616dd162ddb246\", \"msg\": \"Destination directory /var/log/heat does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost] => (item=/var/log/containers/heat)", > "changed: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/horizon)", > "changed: [localhost] => (item=/var/log/containers/httpd/horizon)", > "", > "TASK [horizon logs readme] *****************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ac324739761cb36b925d6e309482e26f7fe49b91\", \"msg\": \"Destination directory /var/log/horizon does not exist\"}", > "...ignoring", > "", > "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", > "ok: [localhost]", > "", > "TASK [Stop and disable iscsid.socket service] **********************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/keystone)", > "changed: [localhost] => (item=/var/log/containers/httpd/keystone)", > "", > "TASK [keystone logs readme] ****************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"910be882addb6df99267e9bd303f6d9bf658562e\", \"msg\": \"Destination directory /var/log/keystone does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost]", > "", > "TASK [memcached logs readme] ***************************************************", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/log/containers/mysql)", > "ok: [localhost] => (item=/var/lib/mysql)", > "", > "TASK [mysql logs readme] *******************************************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/neutron)", > "changed: [localhost] => (item=/var/log/containers/httpd/neutron-api)", > "", > "TASK [neutron logs readme] *****************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost] => (item=/var/log/containers/neutron)", > "", > "TASK [create /var/lib/neutron] *************************************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/nova)", > "changed: [localhost] => (item=/var/log/containers/httpd/nova-api)", > "", > "TASK [nova logs readme] ********************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost] => (item=/var/log/containers/nova)", > "changed: [localhost] => (item=/var/log/containers/httpd/nova-placement)", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/panko)", > "changed: [localhost] => (item=/var/log/containers/httpd/panko-api)", > "", > "TASK [panko logs readme] *******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"903397bbd82e9b1f53087e3d7e8975d851857ce2\", \"msg\": \"Destination directory /var/log/panko does not exist\"}", > "...ignoring", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/lib/rabbitmq)", > "changed: [localhost] => (item=/var/log/containers/rabbitmq)", > "", > "TASK [rabbitmq logs readme] ****************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ee241f2199f264c9d0f384cf389fe255e8bf8a77\", \"msg\": \"Destination directory /var/log/rabbitmq does not exist\"}", > "...ignoring", > "", > "TASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/lib/redis)", > "changed: [localhost] => (item=/var/log/containers/redis)", > "ok: [localhost] => (item=/var/run/redis)", > "", > "TASK [redis logs readme] *******************************************************", > "changed: [localhost]", > "", > "TASK [create /var/lib/sahara] **************************************************", > "changed: [localhost]", > "", > "TASK [create persistent sahara logs directory] *********************************", > "changed: [localhost]", > "", > "TASK [sahara logs readme] ******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"b0212a1177fa4a88502d17a1cbc31198040cf047\", \"msg\": \"Destination directory /var/log/sahara does not exist\"}", > "...ignoring", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/srv/node)", > "changed: [localhost] => (item=/var/log/swift)", > "", > "TASK [Create swift logging symlink] ********************************************", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/srv/node)", > "ok: [localhost] => (item=/var/log/swift)", > "ok: [localhost] => (item=/var/log/containers)", > "", > "TASK [Set swift_use_local_disks fact] ******************************************", > "ok: [localhost]", > "", > "TASK [Create Swift d1 directory if needed] *************************************", > "changed: [localhost]", > "", > "TASK [swift logs readme] *******************************************************", > "changed: [localhost]", > "", > "TASK [Format SwiftRawDisks] ****************************************************", > "", > "TASK [Mount devices defined in SwiftRawDisks] **********************************", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=60 changed=33 unreachable=0 failed=0 ", > "", > "", > "[2018-06-21 07:18:18,962] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c_playbook.yaml", > "", > "[2018-06-21 07:18:18,966] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-06-21 07:18:18,967] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.json < /var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.notify.json", > "[2018-06-21 07:18:19,369] (heat-config) [INFO] ", > "[2018-06-21 07:18:19,369] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:18:19,155 p=23396 u=mistral | TASK [Check-mode for Run deployment ControllerHostPrepDeployment] ************** >2018-06-21 07:18:19,170 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:19,192 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 07:18:19,287 p=23396 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "e9b6658c-6510-4121-b194-f3e7cec48261"}, "changed": false} >2018-06-21 07:18:19,310 p=23396 u=mistral | TASK [Render deployment file for ControllerArtifactsDeploy] ******************** >2018-06-21 07:18:20,000 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "0072b646effb6bfca43b18c6b3dc2346d26370f9", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerArtifactsDeploy-e9b6658c-6510-4121-b194-f3e7cec48261", "gid": 0, "group": "root", "md5sum": "480f3b706e20a3e602a94b1e9bf87050", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2021, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579899.41-243830547547035/source", "state": "file", "uid": 0} >2018-06-21 07:18:20,025 p=23396 u=mistral | TASK [Check if deployed file exists for ControllerArtifactsDeploy] ************* >2018-06-21 07:18:20,413 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:18:20,438 p=23396 u=mistral | TASK [Check previous deployment rc for ControllerArtifactsDeploy] ************** >2018-06-21 07:18:20,457 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-06-21 07:18:20,480 p=23396 u=mistral | TASK [Remove deployed file for ControllerArtifactsDeploy when previous deployment failed] *** >2018-06-21 07:18:20,498 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:20,521 p=23396 u=mistral | TASK [Force remove deployed file for ControllerArtifactsDeploy] **************** >2018-06-21 07:18:20,538 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:20,561 p=23396 u=mistral | TASK [Run deployment ControllerArtifactsDeploy] ******************************** >2018-06-21 07:18:21,421 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.notify.json)", "delta": "0:00:00.469314", "end": "2018-06-21 07:18:21.828409", "rc": 0, "start": "2018-06-21 07:18:21.359095", "stderr": "[2018-06-21 07:18:21,384] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.json\n[2018-06-21 07:18:21,416] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-21 07:18:21,417] (heat-config) [DEBUG] [2018-06-21 07:18:21,407] (heat-config) [INFO] artifact_urls=\n[2018-06-21 07:18:21,407] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307\n[2018-06-21 07:18:21,407] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-21 07:18:21,407] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-haw7i3vfvlpg-ControllerArtifactsDeploy-q53opecfia5y-0-5xvdvzojviyc/25608cf8-1b6a-4839-9d2c-8516e628ad25\n[2018-06-21 07:18:21,407] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-21 07:18:21,408] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-21 07:18:21,408] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/e9b6658c-6510-4121-b194-f3e7cec48261\n[2018-06-21 07:18:21,413] (heat-config) [INFO] No artifact_urls was set. Skipping...\n\n[2018-06-21 07:18:21,413] (heat-config) [DEBUG] \n[2018-06-21 07:18:21,413] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/e9b6658c-6510-4121-b194-f3e7cec48261\n\n[2018-06-21 07:18:21,417] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-21 07:18:21,417] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.json < /var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.notify.json\n[2018-06-21 07:18:21,822] (heat-config) [INFO] \n[2018-06-21 07:18:21,822] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:18:21,384] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.json", "[2018-06-21 07:18:21,416] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-21 07:18:21,417] (heat-config) [DEBUG] [2018-06-21 07:18:21,407] (heat-config) [INFO] artifact_urls=", "[2018-06-21 07:18:21,407] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", "[2018-06-21 07:18:21,407] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-21 07:18:21,407] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-haw7i3vfvlpg-ControllerArtifactsDeploy-q53opecfia5y-0-5xvdvzojviyc/25608cf8-1b6a-4839-9d2c-8516e628ad25", "[2018-06-21 07:18:21,407] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-21 07:18:21,408] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-21 07:18:21,408] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/e9b6658c-6510-4121-b194-f3e7cec48261", "[2018-06-21 07:18:21,413] (heat-config) [INFO] No artifact_urls was set. Skipping...", "", "[2018-06-21 07:18:21,413] (heat-config) [DEBUG] ", "[2018-06-21 07:18:21,413] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/e9b6658c-6510-4121-b194-f3e7cec48261", "", "[2018-06-21 07:18:21,417] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-21 07:18:21,417] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.json < /var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.notify.json", "[2018-06-21 07:18:21,822] (heat-config) [INFO] ", "[2018-06-21 07:18:21,822] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:18:21,447 p=23396 u=mistral | TASK [Output for ControllerArtifactsDeploy] ************************************ >2018-06-21 07:18:21,539 p=23396 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:18:21,384] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < 
/var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.json", > "[2018-06-21 07:18:21,416] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-21 07:18:21,417] (heat-config) [DEBUG] [2018-06-21 07:18:21,407] (heat-config) [INFO] artifact_urls=", > "[2018-06-21 07:18:21,407] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", > "[2018-06-21 07:18:21,407] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-21 07:18:21,407] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-haw7i3vfvlpg-ControllerArtifactsDeploy-q53opecfia5y-0-5xvdvzojviyc/25608cf8-1b6a-4839-9d2c-8516e628ad25", > "[2018-06-21 07:18:21,407] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-21 07:18:21,408] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-21 07:18:21,408] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/e9b6658c-6510-4121-b194-f3e7cec48261", > "[2018-06-21 07:18:21,413] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", > "", > "[2018-06-21 07:18:21,413] (heat-config) [DEBUG] ", > "[2018-06-21 07:18:21,413] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/e9b6658c-6510-4121-b194-f3e7cec48261", > "", > "[2018-06-21 07:18:21,417] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-21 07:18:21,417] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.json < /var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.notify.json", > "[2018-06-21 07:18:21,822] (heat-config) [INFO] ", > "[2018-06-21 07:18:21,822] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:18:21,596 p=23396 u=mistral | TASK [Check-mode for Run deployment ControllerArtifactsDeploy] ***************** >2018-06-21 07:18:21,611 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:21,631 p=23396 u=mistral | TASK [include] ***************************************************************** >2018-06-21 07:18:21,835 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/Compute/deployments.yaml for compute-0 >2018-06-21 07:18:21,844 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/Compute/deployments.yaml for compute-0 >2018-06-21 07:18:21,852 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/Compute/deployments.yaml for compute-0 >2018-06-21 07:18:21,860 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/Compute/deployments.yaml for compute-0 >2018-06-21 07:18:21,868 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/Compute/deployments.yaml for compute-0 >2018-06-21 07:18:21,877 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/Compute/deployments.yaml for compute-0 >2018-06-21 
07:18:21,885 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/Compute/deployments.yaml for compute-0 >2018-06-21 07:18:21,893 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/Compute/deployments.yaml for compute-0 >2018-06-21 07:18:21,935 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 07:18:21,992 p=23396 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "47e3bb7e-dbd0-432c-b417-77caf844175a"}, "changed": false} >2018-06-21 07:18:22,011 p=23396 u=mistral | TASK [Render deployment file for NetworkDeployment] **************************** >2018-06-21 07:18:22,635 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "4fe761dd16a6c1744ee5885e1b08177e41e671e8", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-47e3bb7e-dbd0-432c-b417-77caf844175a", "gid": 0, "group": "root", "md5sum": "2524e9d107bcf31bbe44a7f9f33a92e9", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9259, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579902.07-135583020274469/source", "state": "file", "uid": 0} >2018-06-21 07:18:22,653 p=23396 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] ********************* >2018-06-21 07:18:22,977 p=23396 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:18:22,995 p=23396 u=mistral | TASK [Check previous deployment rc for NetworkDeployment] ********************** >2018-06-21 07:18:23,014 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:23,032 p=23396 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] *** >2018-06-21 07:18:23,049 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >2018-06-21 07:18:23,066 p=23396 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************ >2018-06-21 07:18:23,083 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:23,102 p=23396 u=mistral | TASK [Run deployment NetworkDeployment] **************************************** >2018-06-21 07:18:43,156 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.notify.json)", "delta": "0:00:19.709535", "end": "2018-06-21 07:18:43.552515", "rc": 0, "start": "2018-06-21 07:18:23.842980", "stderr": "[2018-06-21 07:18:23,869] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.json\n[2018-06-21 07:18:43,151] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.15/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", 
\\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.15/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/21 07:18:24 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/21 07:18:24 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/21 07:18:24 AM] [INFO] Not using any mapping file.\\n[2018/06/21 07:18:24 AM] [INFO] Finding active nics\\n[2018/06/21 07:18:24 AM] [INFO] eth1 is an embedded active 
nic\\n[2018/06/21 07:18:24 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/21 07:18:24 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/21 07:18:24 AM] [INFO] lo is not an active nic\\n[2018/06/21 07:18:24 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/21 07:18:24 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/21 07:18:24 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/21 07:18:24 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/21 07:18:24 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/21 07:18:24 AM] [INFO] adding interface: eth0\\n[2018/06/21 07:18:24 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/21 07:18:24 AM] [INFO] adding bridge: br-isolated\\n[2018/06/21 07:18:24 AM] [INFO] adding interface: eth1\\n[2018/06/21 07:18:24 AM] [INFO] adding vlan: vlan20\\n[2018/06/21 07:18:24 AM] [INFO] adding vlan: vlan30\\n[2018/06/21 07:18:24 AM] [INFO] adding vlan: vlan50\\n[2018/06/21 07:18:24 AM] [INFO] adding interface: eth2\\n[2018/06/21 07:18:24 AM] [INFO] applying network configs...\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/21 07:18:24 AM] [INFO] 
Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/21 07:18:24 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/21 07:18:25 AM] [INFO] running ifup on interface: eth2\\n[2018/06/21 07:18:25 AM] [INFO] running ifup on interface: eth1\\n[2018/06/21 07:18:25 AM] [INFO] running ifup on interface: 
eth0\\n[2018/06/21 07:18:29 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/21 07:18:33 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/21 07:18:38 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/21 07:18:42 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/21 07:18:42 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/21 07:18:42 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-06-21 07:18:43,151] (heat-config) [DEBUG] [2018-06-21 07:18:23,892] (heat-config) [INFO] interface_name=nic1\n[2018-06-21 07:18:23,892] (heat-config) [INFO] bridge_name=br-ex\n[2018-06-21 07:18:23,893] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019\n[2018-06-21 07:18:23,893] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-21 07:18:23,893] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-khdfkn36yqgs-0-dpxsps5qjksx-NetworkDeployment-yle4twzvdnzi-TripleOSoftwareDeployment-smqvuunztcz6/abe8a6bc-c9a0-4460-a3ad-bf6b049b1eb3\n[2018-06-21 07:18:23,893] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-21 07:18:23,893] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-21 07:18:23,893] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/47e3bb7e-dbd0-432c-b417-77caf844175a\n[2018-06-21 07:18:43,147] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS\n\n[2018-06-21 07:18:43,147] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.15/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": 
\"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.15/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/06/21 07:18:24 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/06/21 07:18:24 AM] [INFO] Ifcfg net config provider created.\n[2018/06/21 07:18:24 AM] [INFO] Not using any mapping file.\n[2018/06/21 07:18:24 AM] [INFO] Finding active nics\n[2018/06/21 
07:18:24 AM] [INFO] eth1 is an embedded active nic\n[2018/06/21 07:18:24 AM] [INFO] eth0 is an embedded active nic\n[2018/06/21 07:18:24 AM] [INFO] eth2 is an embedded active nic\n[2018/06/21 07:18:24 AM] [INFO] lo is not an active nic\n[2018/06/21 07:18:24 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/06/21 07:18:24 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/06/21 07:18:24 AM] [INFO] nic3 mapped to: eth2\n[2018/06/21 07:18:24 AM] [INFO] nic2 mapped to: eth1\n[2018/06/21 07:18:24 AM] [INFO] nic1 mapped to: eth0\n[2018/06/21 07:18:24 AM] [INFO] adding interface: eth0\n[2018/06/21 07:18:24 AM] [INFO] adding custom route for interface: eth0\n[2018/06/21 07:18:24 AM] [INFO] adding bridge: br-isolated\n[2018/06/21 07:18:24 AM] [INFO] adding interface: eth1\n[2018/06/21 07:18:24 AM] [INFO] adding vlan: vlan20\n[2018/06/21 07:18:24 AM] [INFO] adding vlan: vlan30\n[2018/06/21 07:18:24 AM] [INFO] adding vlan: vlan50\n[2018/06/21 07:18:24 AM] [INFO] adding interface: eth2\n[2018/06/21 07:18:24 AM] [INFO] applying network configs...\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan20\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan50\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: eth2\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: eth1\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: eth0\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan20\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan50\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\n[2018/06/21 07:18:24 
AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/06/21 07:18:24 AM] [INFO] running ifup on bridge: br-isolated\n[2018/06/21 07:18:25 AM] [INFO] running ifup on interface: eth2\n[2018/06/21 07:18:25 AM] [INFO] running ifup on interface: eth1\n[2018/06/21 07:18:25 AM] [INFO] running ifup on interface: 
eth0\n[2018/06/21 07:18:29 AM] [INFO] running ifup on interface: vlan20\n[2018/06/21 07:18:33 AM] [INFO] running ifup on interface: vlan30\n[2018/06/21 07:18:38 AM] [INFO] running ifup on interface: vlan50\n[2018/06/21 07:18:42 AM] [INFO] running ifup on interface: vlan20\n[2018/06/21 07:18:42 AM] [INFO] running ifup on interface: vlan30\n[2018/06/21 07:18:42 AM] [INFO] running ifup on interface: vlan50\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.3\n++ '[' -n 192.168.24.3 ']'\n++ break\n++ echo 192.168.24.3\n+ local METADATA_IP=192.168.24.3\n+ '[' -n 192.168.24.3 ']'\n+ is_local_ip 192.168.24.3\n+ local IP_TO_CHECK=192.168.24.3\n+ ip -o a\n+ grep 'inet6\\? 
192.168.24.3/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\n+ _ping=ping\n+ [[ 192.168.24.3 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.3\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-06-21 07:18:43,147] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/47e3bb7e-dbd0-432c-b417-77caf844175a\n\n[2018-06-21 07:18:43,151] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-21 07:18:43,152] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.json < /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.notify.json\n[2018-06-21 07:18:43,546] (heat-config) [INFO] \n[2018-06-21 07:18:43,546] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:18:23,869] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.json", "[2018-06-21 07:18:43,151] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.15/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": 
[{\\\"ip_netmask\\\": \\\"172.17.1.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.15/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/21 
07:18:24 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/21 07:18:24 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/21 07:18:24 AM] [INFO] Not using any mapping file.\\n[2018/06/21 07:18:24 AM] [INFO] Finding active nics\\n[2018/06/21 07:18:24 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/21 07:18:24 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/21 07:18:24 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/21 07:18:24 AM] [INFO] lo is not an active nic\\n[2018/06/21 07:18:24 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/21 07:18:24 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/21 07:18:24 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/21 07:18:24 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/21 07:18:24 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/21 07:18:24 AM] [INFO] adding interface: eth0\\n[2018/06/21 07:18:24 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/21 07:18:24 AM] [INFO] adding bridge: br-isolated\\n[2018/06/21 07:18:24 AM] [INFO] adding interface: eth1\\n[2018/06/21 07:18:24 AM] [INFO] adding vlan: vlan20\\n[2018/06/21 07:18:24 AM] [INFO] adding vlan: vlan30\\n[2018/06/21 07:18:24 AM] [INFO] adding vlan: vlan50\\n[2018/06/21 07:18:24 AM] [INFO] adding interface: eth2\\n[2018/06/21 07:18:24 AM] [INFO] applying network configs...\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on 
interface: vlan50\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/21 07:18:24 AM] [INFO] Writing 
config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/21 07:18:24 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/21 07:18:25 AM] [INFO] running ifup on interface: eth2\\n[2018/06/21 07:18:25 AM] [INFO] running ifup on interface: eth1\\n[2018/06/21 07:18:25 AM] [INFO] running ifup on interface: eth0\\n[2018/06/21 07:18:29 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/21 07:18:33 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/21 07:18:38 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/21 07:18:42 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/21 07:18:42 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/21 07:18:42 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 
']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-06-21 07:18:43,151] (heat-config) [DEBUG] [2018-06-21 07:18:23,892] (heat-config) [INFO] interface_name=nic1", "[2018-06-21 07:18:23,892] (heat-config) [INFO] bridge_name=br-ex", "[2018-06-21 07:18:23,893] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", "[2018-06-21 07:18:23,893] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-21 07:18:23,893] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-khdfkn36yqgs-0-dpxsps5qjksx-NetworkDeployment-yle4twzvdnzi-TripleOSoftwareDeployment-smqvuunztcz6/abe8a6bc-c9a0-4460-a3ad-bf6b049b1eb3", "[2018-06-21 07:18:23,893] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-21 07:18:23,893] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-21 07:18:23,893] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/47e3bb7e-dbd0-432c-b417-77caf844175a", "[2018-06-21 07:18:43,147] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", "", "[2018-06-21 07:18:43,147] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.15/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": 
\"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.15/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/06/21 07:18:24 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/06/21 07:18:24 AM] [INFO] Ifcfg net 
config provider created.", "[2018/06/21 07:18:24 AM] [INFO] Not using any mapping file.", "[2018/06/21 07:18:24 AM] [INFO] Finding active nics", "[2018/06/21 07:18:24 AM] [INFO] eth1 is an embedded active nic", "[2018/06/21 07:18:24 AM] [INFO] eth0 is an embedded active nic", "[2018/06/21 07:18:24 AM] [INFO] eth2 is an embedded active nic", "[2018/06/21 07:18:24 AM] [INFO] lo is not an active nic", "[2018/06/21 07:18:24 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/06/21 07:18:24 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/06/21 07:18:24 AM] [INFO] nic3 mapped to: eth2", "[2018/06/21 07:18:24 AM] [INFO] nic2 mapped to: eth1", "[2018/06/21 07:18:24 AM] [INFO] nic1 mapped to: eth0", "[2018/06/21 07:18:24 AM] [INFO] adding interface: eth0", "[2018/06/21 07:18:24 AM] [INFO] adding custom route for interface: eth0", "[2018/06/21 07:18:24 AM] [INFO] adding bridge: br-isolated", "[2018/06/21 07:18:24 AM] [INFO] adding interface: eth1", "[2018/06/21 07:18:24 AM] [INFO] adding vlan: vlan20", "[2018/06/21 07:18:24 AM] [INFO] adding vlan: vlan30", "[2018/06/21 07:18:24 AM] [INFO] adding vlan: vlan50", "[2018/06/21 07:18:24 AM] [INFO] adding interface: eth2", "[2018/06/21 07:18:24 AM] [INFO] applying network configs...", "[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan20", "[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan50", "[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: eth2", "[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: eth1", "[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: eth0", "[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan20", "[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan50", "[2018/06/21 07:18:24 AM] [INFO] running ifdown on bridge: 
br-isolated", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", 
"[2018/06/21 07:18:24 AM] [INFO] running ifup on bridge: br-isolated", "[2018/06/21 07:18:25 AM] [INFO] running ifup on interface: eth2", "[2018/06/21 07:18:25 AM] [INFO] running ifup on interface: eth1", "[2018/06/21 07:18:25 AM] [INFO] running ifup on interface: eth0", "[2018/06/21 07:18:29 AM] [INFO] running ifup on interface: vlan20", "[2018/06/21 07:18:33 AM] [INFO] running ifup on interface: vlan30", "[2018/06/21 07:18:38 AM] [INFO] running ifup on interface: vlan50", "[2018/06/21 07:18:42 AM] [INFO] running ifup on interface: vlan20", "[2018/06/21 07:18:42 AM] [INFO] running ifup on interface: vlan30", "[2018/06/21 07:18:42 AM] [INFO] running ifup on interface: vlan50", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.3", "++ '[' -n 192.168.24.3 ']'", "++ break", "++ echo 192.168.24.3", "+ local METADATA_IP=192.168.24.3", "+ '[' -n 192.168.24.3 ']'", "+ is_local_ip 192.168.24.3", "+ local 
IP_TO_CHECK=192.168.24.3", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.3/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", "+ _ping=ping", "+ [[ 192.168.24.3 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.3", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-06-21 07:18:43,147] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/47e3bb7e-dbd0-432c-b417-77caf844175a", "", "[2018-06-21 07:18:43,151] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-21 07:18:43,152] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.json < /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.notify.json", "[2018-06-21 07:18:43,546] (heat-config) [INFO] ", "[2018-06-21 07:18:43,546] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:18:43,176 p=23396 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-06-21 07:18:43,274 p=23396 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:18:23,869] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.json", > "[2018-06-21 07:18:43,151] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.15/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": 
[{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.15/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, 
{\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/21 07:18:24 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/21 07:18:24 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/21 07:18:24 AM] [INFO] Not using any mapping file.\\n[2018/06/21 07:18:24 AM] [INFO] Finding active nics\\n[2018/06/21 07:18:24 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/21 07:18:24 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/21 07:18:24 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/21 07:18:24 AM] [INFO] lo is not an active nic\\n[2018/06/21 07:18:24 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/21 07:18:24 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/21 07:18:24 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/21 07:18:24 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/21 07:18:24 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/21 07:18:24 AM] [INFO] adding interface: eth0\\n[2018/06/21 07:18:24 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/21 07:18:24 AM] [INFO] adding bridge: br-isolated\\n[2018/06/21 07:18:24 AM] [INFO] adding interface: eth1\\n[2018/06/21 07:18:24 AM] [INFO] adding vlan: vlan20\\n[2018/06/21 07:18:24 AM] [INFO] adding vlan: vlan30\\n[2018/06/21 07:18:24 AM] [INFO] adding vlan: vlan50\\n[2018/06/21 07:18:24 AM] [INFO] adding interface: eth2\\n[2018/06/21 07:18:24 AM] [INFO] applying network configs...\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/21 
07:18:24 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/21 07:18:24 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/21 07:18:24 AM] 
[INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/21 07:18:24 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/21 07:18:25 AM] [INFO] running ifup on interface: eth2\\n[2018/06/21 07:18:25 AM] [INFO] running ifup on interface: eth1\\n[2018/06/21 07:18:25 AM] [INFO] running ifup on interface: eth0\\n[2018/06/21 07:18:29 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/21 07:18:33 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/21 07:18:38 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/21 07:18:42 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/21 07:18:42 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/21 07:18:42 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-06-21 07:18:43,151] (heat-config) [DEBUG] [2018-06-21 07:18:23,892] (heat-config) [INFO] interface_name=nic1", > "[2018-06-21 07:18:23,892] (heat-config) [INFO] bridge_name=br-ex", > "[2018-06-21 07:18:23,893] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", > "[2018-06-21 07:18:23,893] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-21 07:18:23,893] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-khdfkn36yqgs-0-dpxsps5qjksx-NetworkDeployment-yle4twzvdnzi-TripleOSoftwareDeployment-smqvuunztcz6/abe8a6bc-c9a0-4460-a3ad-bf6b049b1eb3", > "[2018-06-21 07:18:23,893] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-21 07:18:23,893] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-21 07:18:23,893] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/47e3bb7e-dbd0-432c-b417-77caf844175a", > "[2018-06-21 
07:18:43,147] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", > "", > "[2018-06-21 07:18:43,147] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.15/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.15/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'", > "++ 
type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/06/21 07:18:24 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/06/21 07:18:24 AM] [INFO] Ifcfg net config provider created.", > "[2018/06/21 07:18:24 AM] [INFO] Not using any mapping file.", > "[2018/06/21 07:18:24 AM] [INFO] Finding active nics", > "[2018/06/21 07:18:24 AM] [INFO] eth1 is an embedded active nic", > "[2018/06/21 07:18:24 AM] [INFO] eth0 is an embedded active nic", > "[2018/06/21 07:18:24 AM] [INFO] eth2 is an embedded active nic", > "[2018/06/21 07:18:24 AM] [INFO] lo is not an active nic", > "[2018/06/21 07:18:24 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/06/21 07:18:24 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/06/21 07:18:24 AM] [INFO] nic3 mapped to: eth2", > "[2018/06/21 07:18:24 AM] [INFO] nic2 mapped to: eth1", > "[2018/06/21 07:18:24 AM] [INFO] nic1 mapped to: eth0", > "[2018/06/21 07:18:24 AM] [INFO] adding interface: eth0", > "[2018/06/21 07:18:24 AM] [INFO] adding custom route for interface: eth0", > "[2018/06/21 07:18:24 AM] [INFO] adding bridge: br-isolated", > "[2018/06/21 07:18:24 AM] [INFO] adding interface: eth1", > "[2018/06/21 07:18:24 AM] [INFO] adding vlan: vlan20", > "[2018/06/21 07:18:24 AM] [INFO] adding vlan: vlan30", > "[2018/06/21 07:18:24 AM] [INFO] adding vlan: vlan50", > "[2018/06/21 07:18:24 AM] [INFO] adding interface: eth2", > "[2018/06/21 07:18:24 AM] [INFO] applying network configs...", > "[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan20", > "[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan50", > "[2018/06/21 
07:18:24 AM] [INFO] running ifdown on interface: eth2", > "[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: eth1", > "[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: eth0", > "[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan20", > "[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/21 07:18:24 AM] [INFO] running ifdown on interface: vlan50", > "[2018/06/21 07:18:24 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/06/21 07:18:24 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-eth0", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/06/21 07:18:24 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/06/21 07:18:24 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/06/21 07:18:25 AM] [INFO] running ifup on interface: eth2", > "[2018/06/21 07:18:25 AM] [INFO] running ifup on interface: eth1", > "[2018/06/21 07:18:25 AM] [INFO] running ifup on interface: eth0", > "[2018/06/21 07:18:29 AM] [INFO] running ifup on interface: vlan20", > "[2018/06/21 07:18:33 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/21 07:18:38 AM] [INFO] running ifup on interface: vlan50", > "[2018/06/21 07:18:42 AM] [INFO] running ifup on interface: vlan20", > "[2018/06/21 07:18:42 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/21 07:18:42 AM] [INFO] running ifup on interface: vlan50", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 
's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.3", > "++ '[' -n 192.168.24.3 ']'", > "++ break", > "++ echo 192.168.24.3", > "+ local METADATA_IP=192.168.24.3", > "+ '[' -n 192.168.24.3 ']'", > "+ is_local_ip 192.168.24.3", > "+ local IP_TO_CHECK=192.168.24.3", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.3/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", > "+ _ping=ping", > "+ [[ 192.168.24.3 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.3", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-06-21 07:18:43,147] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/47e3bb7e-dbd0-432c-b417-77caf844175a", > "", > "[2018-06-21 07:18:43,151] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-21 07:18:43,152] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.json < /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.notify.json", > "[2018-06-21 07:18:43,546] (heat-config) [INFO] ", > "[2018-06-21 07:18:43,546] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:18:43,294 p=23396 u=mistral | TASK [Check-mode for Run deployment 
NetworkDeployment] ************************* >2018-06-21 07:18:43,309 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:43,326 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 07:18:43,420 p=23396 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "8b0f844a-269e-4ed6-ae78-6f38b03e2d2e"}, "changed": false} >2018-06-21 07:18:43,439 p=23396 u=mistral | TASK [Render deployment file for NovaComputeUpgradeInitDeployment] ************* >2018-06-21 07:18:44,091 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "73398fa87f0d6e881139ec9fead6d23e4358858a", "dest": "/var/lib/heat-config/tripleo-config-download/NovaComputeUpgradeInitDeployment-8b0f844a-269e-4ed6-ae78-6f38b03e2d2e", "gid": 0, "group": "root", "md5sum": "514d2940593b46d28dc263e5a108335d", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1182, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579923.53-224926361403446/source", "state": "file", "uid": 0} >2018-06-21 07:18:44,110 p=23396 u=mistral | TASK [Check if deployed file exists for NovaComputeUpgradeInitDeployment] ****** >2018-06-21 07:18:44,478 p=23396 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:18:44,500 p=23396 u=mistral | TASK [Check previous deployment rc for NovaComputeUpgradeInitDeployment] ******* >2018-06-21 07:18:44,516 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:44,534 p=23396 u=mistral | TASK [Remove deployed file for NovaComputeUpgradeInitDeployment when previous deployment failed] *** >2018-06-21 07:18:44,550 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:44,568 p=23396 u=mistral | TASK [Force remove 
deployed file for NovaComputeUpgradeInitDeployment] ********* >2018-06-21 07:18:44,587 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:44,641 p=23396 u=mistral | TASK [Run deployment NovaComputeUpgradeInitDeployment] ************************* >2018-06-21 07:18:45,429 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.notify.json)", "delta": "0:00:00.450769", "end": "2018-06-21 07:18:45.841254", "rc": 0, "start": "2018-06-21 07:18:45.390485", "stderr": "[2018-06-21 07:18:45,415] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.json\n[2018-06-21 07:18:45,444] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-21 07:18:45,444] (heat-config) [DEBUG] [2018-06-21 07:18:45,436] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019\n[2018-06-21 07:18:45,437] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-21 07:18:45,437] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-khdfkn36yqgs-0-dpxsps5qjksx-NovaComputeUpgradeInitDeployment-rmoezl2i7qyh/f4eb2c20-6c2c-40f3-8332-f45df76ed2ac\n[2018-06-21 07:18:45,437] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-21 07:18:45,437] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-21 07:18:45,437] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e\n[2018-06-21 07:18:45,440] (heat-config) [INFO] \n[2018-06-21 07:18:45,441] (heat-config) [DEBUG] \n[2018-06-21 07:18:45,441] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e\n\n[2018-06-21 07:18:45,444] 
(heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-21 07:18:45,444] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.json < /var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.notify.json\n[2018-06-21 07:18:45,835] (heat-config) [INFO] \n[2018-06-21 07:18:45,836] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:18:45,415] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.json", "[2018-06-21 07:18:45,444] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-21 07:18:45,444] (heat-config) [DEBUG] [2018-06-21 07:18:45,436] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", "[2018-06-21 07:18:45,437] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-21 07:18:45,437] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-khdfkn36yqgs-0-dpxsps5qjksx-NovaComputeUpgradeInitDeployment-rmoezl2i7qyh/f4eb2c20-6c2c-40f3-8332-f45df76ed2ac", "[2018-06-21 07:18:45,437] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-21 07:18:45,437] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-21 07:18:45,437] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e", "[2018-06-21 07:18:45,440] (heat-config) [INFO] ", "[2018-06-21 07:18:45,441] (heat-config) [DEBUG] ", "[2018-06-21 07:18:45,441] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e", "", "[2018-06-21 07:18:45,444] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-21 07:18:45,444] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.json < 
/var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.notify.json", "[2018-06-21 07:18:45,835] (heat-config) [INFO] ", "[2018-06-21 07:18:45,836] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:18:45,450 p=23396 u=mistral | TASK [Output for NovaComputeUpgradeInitDeployment] ***************************** >2018-06-21 07:18:45,499 p=23396 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:18:45,415] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.json", > "[2018-06-21 07:18:45,444] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-21 07:18:45,444] (heat-config) [DEBUG] [2018-06-21 07:18:45,436] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", > "[2018-06-21 07:18:45,437] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-21 07:18:45,437] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-khdfkn36yqgs-0-dpxsps5qjksx-NovaComputeUpgradeInitDeployment-rmoezl2i7qyh/f4eb2c20-6c2c-40f3-8332-f45df76ed2ac", > "[2018-06-21 07:18:45,437] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-21 07:18:45,437] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-21 07:18:45,437] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e", > "[2018-06-21 07:18:45,440] (heat-config) [INFO] ", > "[2018-06-21 07:18:45,441] (heat-config) [DEBUG] ", > "[2018-06-21 07:18:45,441] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e", > "", > "[2018-06-21 07:18:45,444] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-21 07:18:45,444] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.json < /var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.notify.json", > "[2018-06-21 07:18:45,835] (heat-config) [INFO] ", > "[2018-06-21 07:18:45,836] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:18:45,521 p=23396 u=mistral | TASK [Check-mode for Run deployment NovaComputeUpgradeInitDeployment] ********** >2018-06-21 07:18:45,536 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:45,555 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 07:18:45,689 p=23396 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "e55e7117-4504-4c31-a067-4168ecbaba26"}, "changed": false} >2018-06-21 07:18:45,708 p=23396 u=mistral | TASK [Render deployment file for NovaComputeDeployment] ************************ >2018-06-21 07:18:46,396 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "5d8d2f3cef8fa81a7aca49e891008e35faa2aa4e", "dest": "/var/lib/heat-config/tripleo-config-download/NovaComputeDeployment-e55e7117-4504-4c31-a067-4168ecbaba26", "gid": 0, "group": "root", "md5sum": "4ddcfbec45f0aafc2dd949b7c6c131c1", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 21872, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579925.84-106755109482045/source", "state": "file", "uid": 0} >2018-06-21 07:18:46,415 p=23396 u=mistral | TASK [Check if deployed file exists for NovaComputeDeployment] ***************** >2018-06-21 07:18:46,726 p=23396 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:18:46,745 p=23396 u=mistral | TASK [Check previous deployment rc for NovaComputeDeployment] ****************** >2018-06-21 07:18:46,763 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-06-21 07:18:46,782 p=23396 u=mistral | TASK [Remove deployed file for NovaComputeDeployment when previous deployment failed] *** >2018-06-21 07:18:46,799 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:46,819 p=23396 u=mistral | TASK [Force remove deployed file for NovaComputeDeployment] ******************** >2018-06-21 07:18:46,835 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:46,854 p=23396 u=mistral | TASK [Run deployment NovaComputeDeployment] ************************************ >2018-06-21 07:18:47,728 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.notify.json)", "delta": "0:00:00.545428", "end": "2018-06-21 07:18:48.137535", "rc": 0, "start": "2018-06-21 07:18:47.592107", "stderr": "[2018-06-21 07:18:47,617] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.json\n[2018-06-21 07:18:47,733] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-21 07:18:47,733] (heat-config) [DEBUG] \n[2018-06-21 07:18:47,733] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-21 07:18:47,734] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.json < /var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.notify.json\n[2018-06-21 07:18:48,131] (heat-config) [INFO] \n[2018-06-21 07:18:48,131] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:18:47,617] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < 
/var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.json", "[2018-06-21 07:18:47,733] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-21 07:18:47,733] (heat-config) [DEBUG] ", "[2018-06-21 07:18:47,733] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-21 07:18:47,734] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.json < /var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.notify.json", "[2018-06-21 07:18:48,131] (heat-config) [INFO] ", "[2018-06-21 07:18:48,131] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:18:47,746 p=23396 u=mistral | TASK [Output for NovaComputeDeployment] **************************************** >2018-06-21 07:18:47,792 p=23396 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:18:47,617] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.json", > "[2018-06-21 07:18:47,733] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-21 07:18:47,733] (heat-config) [DEBUG] ", > "[2018-06-21 07:18:47,733] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-21 07:18:47,734] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.json < /var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.notify.json", > "[2018-06-21 07:18:48,131] (heat-config) [INFO] ", > "[2018-06-21 07:18:48,131] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:18:47,812 p=23396 u=mistral | TASK [Check-mode for Run deployment NovaComputeDeployment] ********************* >2018-06-21 07:18:47,826 p=23396 u=mistral | skipping: 
[compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:47,843 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 07:18:47,894 p=23396 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "0e127163-28f0-47d0-bb3d-c04dba33c833"}, "changed": false} >2018-06-21 07:18:47,913 p=23396 u=mistral | TASK [Render deployment file for ComputeHostsDeployment] *********************** >2018-06-21 07:18:48,515 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "a107fff6a6358d616f2c52ef6d72b4cd6c18dc93", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeHostsDeployment-0e127163-28f0-47d0-bb3d-c04dba33c833", "gid": 0, "group": "root", "md5sum": "5ec166e072b087088cd4ccebfef196ac", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4079, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579927.96-10674499636182/source", "state": "file", "uid": 0} >2018-06-21 07:18:48,533 p=23396 u=mistral | TASK [Check if deployed file exists for ComputeHostsDeployment] **************** >2018-06-21 07:18:48,861 p=23396 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:18:48,879 p=23396 u=mistral | TASK [Check previous deployment rc for ComputeHostsDeployment] ***************** >2018-06-21 07:18:48,897 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:48,916 p=23396 u=mistral | TASK [Remove deployed file for ComputeHostsDeployment when previous deployment failed] *** >2018-06-21 07:18:48,934 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:48,953 p=23396 u=mistral | TASK [Force remove deployed file for ComputeHostsDeployment] ******************* >2018-06-21 07:18:48,969 p=23396 u=mistral | skipping: 
[compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:48,987 p=23396 u=mistral | TASK [Run deployment ComputeHostsDeployment] *********************************** >2018-06-21 07:18:49,806 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.notify.json)", "delta": "0:00:00.463659", "end": "2018-06-21 07:18:50.187076", "rc": 0, "start": "2018-06-21 07:18:49.723417", "stderr": "[2018-06-21 07:18:49,747] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.json\n[2018-06-21 07:18:49,784] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 
compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain 
compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}\n[2018-06-21 07:18:49,784] (heat-config) [DEBUG] [2018-06-21 07:18:49,768] (heat-config) [INFO] hosts=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-06-21 07:18:49,768] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019\n[2018-06-21 07:18:49,768] (heat-config) [INFO] 
deploy_action=CREATE\n[2018-06-21 07:18:49,768] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-64c5vxqf332r-0-qa6lkmhpyfxq/2d69a75c-910f-4816-89e8-e10149463aa7\n[2018-06-21 07:18:49,768] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-21 07:18:49,768] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-21 07:18:49,768] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0e127163-28f0-47d0-bb3d-c04dba33c833\n[2018-06-21 07:18:49,780] (heat-config) [INFO] \n[2018-06-21 07:18:49,781] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain 
ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain 
ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 
'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 
overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 
overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries 
/etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 
overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-06-21 07:18:49,781] (heat-config) [INFO] Completed 
/var/lib/heat-config/heat-config-script/0e127163-28f0-47d0-bb3d-c04dba33c833\n\n[2018-06-21 07:18:49,784] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-21 07:18:49,785] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.json < /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.notify.json\n[2018-06-21 07:18:50,180] (heat-config) [INFO] \n[2018-06-21 07:18:50,180] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:18:49,747] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.json", "[2018-06-21 07:18:49,784] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 
compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", "[2018-06-21 07:18:49,784] (heat-config) [DEBUG] [2018-06-21 07:18:49,768] (heat-config) [INFO] hosts=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-06-21 07:18:49,768] (heat-config) [INFO] 
deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", "[2018-06-21 07:18:49,768] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-21 07:18:49,768] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-64c5vxqf332r-0-qa6lkmhpyfxq/2d69a75c-910f-4816-89e8-e10149463aa7", "[2018-06-21 07:18:49,768] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-21 07:18:49,768] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-21 07:18:49,768] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0e127163-28f0-47d0-bb3d-c04dba33c833", "[2018-06-21 07:18:49,780] (heat-config) [INFO] ", "[2018-06-21 07:18:49,781] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", 
"192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 
ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 
ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ 
'[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 
overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", 
"", "[2018-06-21 07:18:49,781] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/0e127163-28f0-47d0-bb3d-c04dba33c833", "", "[2018-06-21 07:18:49,784] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-21 07:18:49,785] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.json < /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.notify.json", "[2018-06-21 07:18:50,180] (heat-config) [INFO] ", "[2018-06-21 07:18:50,180] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:18:49,833 p=23396 u=mistral | TASK [Output for ComputeHostsDeployment] *************************************** >2018-06-21 07:18:49,944 p=23396 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:18:49,747] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.json", > "[2018-06-21 07:18:49,784] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain 
controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", > "[2018-06-21 07:18:49,784] (heat-config) [DEBUG] [2018-06-21 07:18:49,768] (heat-config) [INFO] hosts=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-06-21 07:18:49,768] 
(heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", > "[2018-06-21 07:18:49,768] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-21 07:18:49,768] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-64c5vxqf332r-0-qa6lkmhpyfxq/2d69a75c-910f-4816-89e8-e10149463aa7", > "[2018-06-21 07:18:49,768] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-21 07:18:49,768] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-21 07:18:49,768] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0e127163-28f0-47d0-bb3d-c04dba33c833", > "[2018-06-21 07:18:49,780] (heat-config) [INFO] ", > "[2018-06-21 07:18:49,781] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > 
"192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 
compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > 
"192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 
compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 
compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain 
compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > 
"172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain 
compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > 
"172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain 
compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-06-21 07:18:49,781] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/0e127163-28f0-47d0-bb3d-c04dba33c833", > "", > "[2018-06-21 07:18:49,784] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-21 07:18:49,785] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.json < /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.notify.json", > "[2018-06-21 07:18:50,180] (heat-config) [INFO] ", > "[2018-06-21 07:18:50,180] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:18:49,972 p=23396 u=mistral | TASK [Check-mode for Run deployment ComputeHostsDeployment] ******************** >2018-06-21 07:18:49,987 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:50,006 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 
07:18:50,140 p=23396 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "a972add2-c0e3-41a0-bc13-94b119bc443f"}, "changed": false} >2018-06-21 07:18:50,160 p=23396 u=mistral | TASK [Render deployment file for ComputeAllNodesDeployment] ******************** >2018-06-21 07:18:50,864 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "ccce2301e50a61197875061f2028c9b1b7ee3fea", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeAllNodesDeployment-a972add2-c0e3-41a0-bc13-94b119bc443f", "gid": 0, "group": "root", "md5sum": "ed8774e5721439ea289047202091bb5c", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19022, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579930.3-42035885992747/source", "state": "file", "uid": 0} >2018-06-21 07:18:50,882 p=23396 u=mistral | TASK [Check if deployed file exists for ComputeAllNodesDeployment] ************* >2018-06-21 07:18:51,203 p=23396 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:18:51,223 p=23396 u=mistral | TASK [Check previous deployment rc for ComputeAllNodesDeployment] ************** >2018-06-21 07:18:51,240 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:51,260 p=23396 u=mistral | TASK [Remove deployed file for ComputeAllNodesDeployment when previous deployment failed] *** >2018-06-21 07:18:51,278 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:51,297 p=23396 u=mistral | TASK [Force remove deployed file for ComputeAllNodesDeployment] **************** >2018-06-21 07:18:51,313 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:51,331 p=23396 u=mistral | TASK [Run deployment ComputeAllNodesDeployment] ******************************** >2018-06-21 
07:18:52,203 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/a972add2-c0e3-41a0-bc13-94b119bc443f.notify.json)", "delta": "0:00:00.546301", "end": "2018-06-21 07:18:52.613036", "rc": 0, "start": "2018-06-21 07:18:52.066735", "stderr": "[2018-06-21 07:18:52,090] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/a972add2-c0e3-41a0-bc13-94b119bc443f.json\n[2018-06-21 07:18:52,206] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-21 07:18:52,206] (heat-config) [DEBUG] \n[2018-06-21 07:18:52,206] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-21 07:18:52,207] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a972add2-c0e3-41a0-bc13-94b119bc443f.json < /var/lib/heat-config/deployed/a972add2-c0e3-41a0-bc13-94b119bc443f.notify.json\n[2018-06-21 07:18:52,607] (heat-config) [INFO] \n[2018-06-21 07:18:52,607] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:18:52,090] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/a972add2-c0e3-41a0-bc13-94b119bc443f.json", "[2018-06-21 07:18:52,206] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-21 07:18:52,206] (heat-config) [DEBUG] ", "[2018-06-21 07:18:52,206] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-21 07:18:52,207] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a972add2-c0e3-41a0-bc13-94b119bc443f.json < /var/lib/heat-config/deployed/a972add2-c0e3-41a0-bc13-94b119bc443f.notify.json", "[2018-06-21 07:18:52,607] (heat-config) [INFO] ", "[2018-06-21 07:18:52,607] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 
07:18:52,223 p=23396 u=mistral | TASK [Output for ComputeAllNodesDeployment] ************************************ >2018-06-21 07:18:52,267 p=23396 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:18:52,090] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/a972add2-c0e3-41a0-bc13-94b119bc443f.json", > "[2018-06-21 07:18:52,206] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-21 07:18:52,206] (heat-config) [DEBUG] ", > "[2018-06-21 07:18:52,206] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-21 07:18:52,207] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a972add2-c0e3-41a0-bc13-94b119bc443f.json < /var/lib/heat-config/deployed/a972add2-c0e3-41a0-bc13-94b119bc443f.notify.json", > "[2018-06-21 07:18:52,607] (heat-config) [INFO] ", > "[2018-06-21 07:18:52,607] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:18:52,287 p=23396 u=mistral | TASK [Check-mode for Run deployment ComputeAllNodesDeployment] ***************** >2018-06-21 07:18:52,301 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:52,320 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 07:18:52,372 p=23396 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "f2540af8-b807-43d6-8d1d-79e70a51b657"}, "changed": false} >2018-06-21 07:18:52,392 p=23396 u=mistral | TASK [Render deployment file for ComputeAllNodesValidationDeployment] ********** >2018-06-21 07:18:52,969 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "f27e5da7a63770b4c2d9d9e30098384ec28a9a3e", "dest": 
"/var/lib/heat-config/tripleo-config-download/ComputeAllNodesValidationDeployment-f2540af8-b807-43d6-8d1d-79e70a51b657", "gid": 0, "group": "root", "md5sum": "5ffca5901239dc0d3fbdcf735f86ecda", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4934, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579932.44-61543612724248/source", "state": "file", "uid": 0} >2018-06-21 07:18:52,990 p=23396 u=mistral | TASK [Check if deployed file exists for ComputeAllNodesValidationDeployment] *** >2018-06-21 07:18:53,304 p=23396 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:18:53,323 p=23396 u=mistral | TASK [Check previous deployment rc for ComputeAllNodesValidationDeployment] **** >2018-06-21 07:18:53,341 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:53,361 p=23396 u=mistral | TASK [Remove deployed file for ComputeAllNodesValidationDeployment when previous deployment failed] *** >2018-06-21 07:18:53,380 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:53,398 p=23396 u=mistral | TASK [Force remove deployed file for ComputeAllNodesValidationDeployment] ****** >2018-06-21 07:18:53,415 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:53,433 p=23396 u=mistral | TASK [Run deployment ComputeAllNodesValidationDeployment] ********************** >2018-06-21 07:18:54,717 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.notify.json)", "delta": "0:00:00.947152", "end": "2018-06-21 07:18:55.123041", "rc": 0, "start": "2018-06-21 07:18:54.175889", "stderr": "[2018-06-21 07:18:54,202] 
(heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.json\n[2018-06-21 07:18:54,725] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.16 for local network 172.17.1.0/24.\\nPing to 172.17.1.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\\nPing to 172.17.2.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\\nPing to 172.17.3.18 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\\nPing to 192.168.24.8 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-21 07:18:54,725] (heat-config) [DEBUG] [2018-06-21 07:18:54,222] (heat-config) [INFO] ping_test_ips=172.17.3.18 172.17.4.17 172.17.1.16 172.17.2.15 10.0.0.104 192.168.24.8\n[2018-06-21 07:18:54,222] (heat-config) [INFO] validate_fqdn=False\n[2018-06-21 07:18:54,222] (heat-config) [INFO] validate_ntp=True\n[2018-06-21 07:18:54,222] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019\n[2018-06-21 07:18:54,223] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-21 07:18:54,223] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-ckei37fomwyn-0-yfwz3feik3kf/1585afad-b876-4786-a86a-85246cedf14e\n[2018-06-21 07:18:54,223] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-21 07:18:54,223] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-21 07:18:54,223] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f2540af8-b807-43d6-8d1d-79e70a51b657\n[2018-06-21 07:18:54,721] (heat-config) [INFO] Trying to ping 172.17.1.16 for local network 172.17.1.0/24.\nPing to 172.17.1.16 succeeded.\nSUCCESS\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\nPing to 172.17.2.15 
succeeded.\nSUCCESS\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\nPing to 172.17.3.18 succeeded.\nSUCCESS\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\nPing to 192.168.24.8 succeeded.\nSUCCESS\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\nSUCCESS\n\n[2018-06-21 07:18:54,721] (heat-config) [DEBUG] \n[2018-06-21 07:18:54,721] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f2540af8-b807-43d6-8d1d-79e70a51b657\n\n[2018-06-21 07:18:54,725] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-21 07:18:54,726] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.json < /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.notify.json\n[2018-06-21 07:18:55,117] (heat-config) [INFO] \n[2018-06-21 07:18:55,117] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:18:54,202] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.json", "[2018-06-21 07:18:54,725] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.16 for local network 172.17.1.0/24.\\nPing to 172.17.1.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\\nPing to 172.17.2.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\\nPing to 172.17.3.18 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\\nPing to 192.168.24.8 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-21 07:18:54,725] (heat-config) [DEBUG] [2018-06-21 07:18:54,222] (heat-config) [INFO] ping_test_ips=172.17.3.18 172.17.4.17 172.17.1.16 172.17.2.15 10.0.0.104 192.168.24.8", "[2018-06-21 07:18:54,222] (heat-config) [INFO] 
validate_fqdn=False", "[2018-06-21 07:18:54,222] (heat-config) [INFO] validate_ntp=True", "[2018-06-21 07:18:54,222] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", "[2018-06-21 07:18:54,223] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-21 07:18:54,223] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-ckei37fomwyn-0-yfwz3feik3kf/1585afad-b876-4786-a86a-85246cedf14e", "[2018-06-21 07:18:54,223] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-21 07:18:54,223] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-21 07:18:54,223] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f2540af8-b807-43d6-8d1d-79e70a51b657", "[2018-06-21 07:18:54,721] (heat-config) [INFO] Trying to ping 172.17.1.16 for local network 172.17.1.0/24.", "Ping to 172.17.1.16 succeeded.", "SUCCESS", "Trying to ping 172.17.2.15 for local network 172.17.2.0/24.", "Ping to 172.17.2.15 succeeded.", "SUCCESS", "Trying to ping 172.17.3.18 for local network 172.17.3.0/24.", "Ping to 172.17.3.18 succeeded.", "SUCCESS", "Trying to ping 192.168.24.8 for local network 192.168.24.0/24.", "Ping to 192.168.24.8 succeeded.", "SUCCESS", "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", "SUCCESS", "", "[2018-06-21 07:18:54,721] (heat-config) [DEBUG] ", "[2018-06-21 07:18:54,721] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f2540af8-b807-43d6-8d1d-79e70a51b657", "", "[2018-06-21 07:18:54,725] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-21 07:18:54,726] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.json < /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.notify.json", "[2018-06-21 07:18:55,117] (heat-config) [INFO] ", "[2018-06-21 07:18:55,117] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} 
>2018-06-21 07:18:54,741 p=23396 u=mistral | TASK [Output for ComputeAllNodesValidationDeployment] ************************** >2018-06-21 07:18:54,792 p=23396 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:18:54,202] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.json", > "[2018-06-21 07:18:54,725] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.16 for local network 172.17.1.0/24.\\nPing to 172.17.1.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\\nPing to 172.17.2.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\\nPing to 172.17.3.18 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\\nPing to 192.168.24.8 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-21 07:18:54,725] (heat-config) [DEBUG] [2018-06-21 07:18:54,222] (heat-config) [INFO] ping_test_ips=172.17.3.18 172.17.4.17 172.17.1.16 172.17.2.15 10.0.0.104 192.168.24.8", > "[2018-06-21 07:18:54,222] (heat-config) [INFO] validate_fqdn=False", > "[2018-06-21 07:18:54,222] (heat-config) [INFO] validate_ntp=True", > "[2018-06-21 07:18:54,222] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", > "[2018-06-21 07:18:54,223] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-21 07:18:54,223] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-ckei37fomwyn-0-yfwz3feik3kf/1585afad-b876-4786-a86a-85246cedf14e", > "[2018-06-21 07:18:54,223] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-21 07:18:54,223] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-21 07:18:54,223] (heat-config) [DEBUG] Running 
/var/lib/heat-config/heat-config-script/f2540af8-b807-43d6-8d1d-79e70a51b657", > "[2018-06-21 07:18:54,721] (heat-config) [INFO] Trying to ping 172.17.1.16 for local network 172.17.1.0/24.", > "Ping to 172.17.1.16 succeeded.", > "SUCCESS", > "Trying to ping 172.17.2.15 for local network 172.17.2.0/24.", > "Ping to 172.17.2.15 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.18 for local network 172.17.3.0/24.", > "Ping to 172.17.3.18 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.8 for local network 192.168.24.0/24.", > "Ping to 192.168.24.8 succeeded.", > "SUCCESS", > "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", > "SUCCESS", > "", > "[2018-06-21 07:18:54,721] (heat-config) [DEBUG] ", > "[2018-06-21 07:18:54,721] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f2540af8-b807-43d6-8d1d-79e70a51b657", > "", > "[2018-06-21 07:18:54,725] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-21 07:18:54,726] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.json < /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.notify.json", > "[2018-06-21 07:18:55,117] (heat-config) [INFO] ", > "[2018-06-21 07:18:55,117] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:18:54,813 p=23396 u=mistral | TASK [Check-mode for Run deployment ComputeAllNodesValidationDeployment] ******* >2018-06-21 07:18:54,830 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:54,848 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 07:18:54,931 p=23396 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "a65e71e4-ed81-41a4-8893-3eac5cffc60b"}, "changed": false} >2018-06-21 07:18:54,951 p=23396 u=mistral | TASK [Render deployment file for 
ComputeHostPrepDeployment] ******************** >2018-06-21 07:18:55,627 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "9b8f298ec7fcc76910ae1c371282e1b4fb7e6fb8", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeHostPrepDeployment-a65e71e4-ed81-41a4-8893-3eac5cffc60b", "gid": 0, "group": "root", "md5sum": "e3b0f4b160ba4fc2257d7ca9f20c237f", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 33672, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579935.03-33655291692631/source", "state": "file", "uid": 0} >2018-06-21 07:18:55,647 p=23396 u=mistral | TASK [Check if deployed file exists for ComputeHostPrepDeployment] ************* >2018-06-21 07:18:55,987 p=23396 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:18:56,007 p=23396 u=mistral | TASK [Check previous deployment rc for ComputeHostPrepDeployment] ************** >2018-06-21 07:18:56,029 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:56,049 p=23396 u=mistral | TASK [Remove deployed file for ComputeHostPrepDeployment when previous deployment failed] *** >2018-06-21 07:18:56,068 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:56,087 p=23396 u=mistral | TASK [Force remove deployed file for ComputeHostPrepDeployment] **************** >2018-06-21 07:18:56,105 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:18:56,124 p=23396 u=mistral | TASK [Run deployment ComputeHostPrepDeployment] ******************************** >2018-06-21 07:19:06,406 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code 
/var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.notify.json)", "delta": "0:00:09.925980", "end": "2018-06-21 07:19:06.803180", "rc": 0, "start": "2018-06-21 07:18:56.877200", "stderr": "[2018-06-21 07:18:56,899] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.json\n[2018-06-21 07:19:06,400] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [Mount Nova NFS Share] ****************************************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/nova)\\nok: [localhost] => (item=/var/lib/libvirt)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [is Instance HA enabled] **************************************************\\nok: [localhost]\\n\\nTASK [prepare Instance HA script directory] ************************************\\nskipping: [localhost]\\n\\nTASK [install Instance HA script that runs nova-compute] ***********************\\nskipping: [localhost]\\n\\nTASK [Get list of instance HA compute nodes] ***********************************\\nskipping: [localhost]\\n\\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\\nskipping: [localhost]\\n\\nTASK [create libvirt persistent data directories] ******************************\\nok: [localhost] => (item=/etc/libvirt)\\nok: [localhost] => (item=/etc/libvirt/secrets)\\nok: [localhost] => (item=/etc/libvirt/qemu)\\nok: [localhost] => (item=/var/lib/libvirt)\\nchanged: [localhost] => (item=/var/log/containers/libvirt)\\n\\nTASK [ensure qemu group is present on the host] ********************************\\nok: [localhost]\\n\\nTASK [ensure qemu user is present on the host] *********************************\\nok: [localhost]\\n\\nTASK [create directory for vhost-user sockets with qemu ownership] *************\\nchanged: [localhost]\\n\\nTASK [check if libvirt is installed] *******************************************\\nchanged: [localhost]\\n\\nTASK [make sure libvirt services are disabled] *********************************\\nchanged: [localhost] => 
(item=libvirtd.service)\\nchanged: [localhost] => (item=virtlogd.socket)\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=20 changed=12 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \" [WARNING]: Consider using the yum, dnf or zypper module rather than running\\nrpm. If you need to use command because yum, dnf or zypper is insufficient you\\ncan add warn=False to this command task or set command_warnings=False in\\nansible.cfg to get rid of this message.\\n\", \"deploy_status_code\": 0}\n[2018-06-21 07:19:06,400] (heat-config) [DEBUG] [2018-06-21 07:18:56,923] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/a65e71e4-ed81-41a4-8893-3eac5cffc60b_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/a65e71e4-ed81-41a4-8893-3eac5cffc60b_variables.json\n[2018-06-21 07:19:06,396] (heat-config) [INFO] Return code 0\n[2018-06-21 07:19:06,396] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost]\n\nTASK [ceilometer logs readme] **************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/neutron)\n\nTASK [neutron logs readme] *****************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}\n...ignoring\n\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\nok: [localhost]\n\nTASK [Stop and disable iscsid.socket service] **********************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost]\n\nTASK [nova logs readme] ********************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}\n...ignoring\n\nTASK [Mount Nova NFS Share] ****************************************************\nskipping: [localhost]\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/lib/nova)\nok: [localhost] => (item=/var/lib/libvirt)\n\nTASK [ensure ceph configurations exist] ****************************************\nchanged: [localhost]\n\nTASK [is Instance HA enabled] **************************************************\nok: [localhost]\n\nTASK [prepare Instance HA script directory] ************************************\nskipping: [localhost]\n\nTASK [install Instance HA script that runs nova-compute] ***********************\nskipping: [localhost]\n\nTASK [Get list of instance HA compute nodes] ***********************************\nskipping: [localhost]\n\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\nskipping: [localhost]\n\nTASK [create libvirt persistent data directories] ******************************\nok: [localhost] => (item=/etc/libvirt)\nok: [localhost] => (item=/etc/libvirt/secrets)\nok: [localhost] => (item=/etc/libvirt/qemu)\nok: [localhost] => (item=/var/lib/libvirt)\nchanged: [localhost] => (item=/var/log/containers/libvirt)\n\nTASK [ensure qemu group is present on the host] ********************************\nok: [localhost]\n\nTASK [ensure qemu user is present on the host] *********************************\nok: [localhost]\n\nTASK [create directory for vhost-user sockets with qemu ownership] *************\nchanged: [localhost]\n\nTASK [check if libvirt is installed] *******************************************\nchanged: [localhost]\n\nTASK [make sure libvirt services are disabled] *********************************\nchanged: [localhost] => (item=libvirtd.service)\nchanged: [localhost] => 
(item=virtlogd.socket)\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=20 changed=12 unreachable=0 failed=0 \n\n\n[2018-06-21 07:19:06,396] (heat-config) [INFO] [WARNING]: Consider using the yum, dnf or zypper module rather than running\nrpm. If you need to use command because yum, dnf or zypper is insufficient you\ncan add warn=False to this command task or set command_warnings=False in\nansible.cfg to get rid of this message.\n\n[2018-06-21 07:19:06,396] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/a65e71e4-ed81-41a4-8893-3eac5cffc60b_playbook.yaml\n\n[2018-06-21 07:19:06,400] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-06-21 07:19:06,401] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.json < /var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.notify.json\n[2018-06-21 07:19:06,797] (heat-config) [INFO] \n[2018-06-21 07:19:06,797] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:18:56,899] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.json", "[2018-06-21 07:19:06,400] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [Mount Nova NFS Share] ****************************************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/nova)\\nok: [localhost] => (item=/var/lib/libvirt)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [is Instance HA enabled] **************************************************\\nok: [localhost]\\n\\nTASK [prepare Instance HA script directory] ************************************\\nskipping: [localhost]\\n\\nTASK [install Instance HA script that runs nova-compute] ***********************\\nskipping: [localhost]\\n\\nTASK [Get list of instance HA compute nodes] ***********************************\\nskipping: [localhost]\\n\\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\\nskipping: [localhost]\\n\\nTASK [create libvirt persistent data directories] ******************************\\nok: [localhost] => (item=/etc/libvirt)\\nok: [localhost] => (item=/etc/libvirt/secrets)\\nok: [localhost] => (item=/etc/libvirt/qemu)\\nok: [localhost] => (item=/var/lib/libvirt)\\nchanged: [localhost] => (item=/var/log/containers/libvirt)\\n\\nTASK [ensure qemu group is present on the host] ********************************\\nok: [localhost]\\n\\nTASK [ensure qemu user is present on the host] *********************************\\nok: [localhost]\\n\\nTASK [create directory for vhost-user sockets with qemu ownership] *************\\nchanged: [localhost]\\n\\nTASK [check if libvirt is installed] *******************************************\\nchanged: [localhost]\\n\\nTASK [make sure libvirt services are disabled] *********************************\\nchanged: [localhost] => 
(item=libvirtd.service)\\nchanged: [localhost] => (item=virtlogd.socket)\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=20 changed=12 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \" [WARNING]: Consider using the yum, dnf or zypper module rather than running\\nrpm. If you need to use command because yum, dnf or zypper is insufficient you\\ncan add warn=False to this command task or set command_warnings=False in\\nansible.cfg to get rid of this message.\\n\", \"deploy_status_code\": 0}", "[2018-06-21 07:19:06,400] (heat-config) [DEBUG] [2018-06-21 07:18:56,923] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/a65e71e4-ed81-41a4-8893-3eac5cffc60b_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/a65e71e4-ed81-41a4-8893-3eac5cffc60b_variables.json", "[2018-06-21 07:19:06,396] (heat-config) [INFO] Return code 0", "[2018-06-21 07:19:06,396] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost]", "", "TASK [ceilometer logs readme] **************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/neutron)", "", "TASK [neutron logs readme] *****************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", "...ignoring", "", "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", "ok: [localhost]", "", "TASK [Stop and disable iscsid.socket service] **********************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost]", "", "TASK [nova logs readme] ********************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", "...ignoring", "", "TASK [Mount Nova NFS Share] ****************************************************", "skipping: [localhost]", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/lib/nova)", "ok: [localhost] => (item=/var/lib/libvirt)", "", "TASK [ensure ceph configurations exist] ****************************************", "changed: [localhost]", "", "TASK [is Instance HA enabled] **************************************************", "ok: [localhost]", "", "TASK [prepare Instance HA script directory] ************************************", "skipping: [localhost]", "", "TASK [install Instance HA script that runs nova-compute] ***********************", "skipping: [localhost]", "", "TASK [Get list of instance HA compute nodes] ***********************************", "skipping: [localhost]", "", "TASK [If instance HA is enabled on the node activate the evacuation completed check] ***", "skipping: [localhost]", "", "TASK [create libvirt persistent data directories] ******************************", "ok: [localhost] => (item=/etc/libvirt)", "ok: [localhost] => (item=/etc/libvirt/secrets)", "ok: [localhost] => (item=/etc/libvirt/qemu)", "ok: [localhost] => (item=/var/lib/libvirt)", "changed: [localhost] => (item=/var/log/containers/libvirt)", "", "TASK [ensure qemu group is present on the host] ********************************", "ok: [localhost]", "", "TASK [ensure qemu user is present on the host] *********************************", "ok: [localhost]", "", "TASK [create directory for vhost-user sockets with qemu ownership] *************", "changed: [localhost]", "", "TASK [check if libvirt is installed] *******************************************", "changed: [localhost]", "", "TASK [make sure libvirt services are disabled] 
*********************************", "changed: [localhost] => (item=libvirtd.service)", "changed: [localhost] => (item=virtlogd.socket)", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=20 changed=12 unreachable=0 failed=0 ", "", "", "[2018-06-21 07:19:06,396] (heat-config) [INFO] [WARNING]: Consider using the yum, dnf or zypper module rather than running", "rpm. If you need to use command because yum, dnf or zypper is insufficient you", "can add warn=False to this command task or set command_warnings=False in", "ansible.cfg to get rid of this message.", "", "[2018-06-21 07:19:06,396] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/a65e71e4-ed81-41a4-8893-3eac5cffc60b_playbook.yaml", "", "[2018-06-21 07:19:06,400] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-06-21 07:19:06,401] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.json < /var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.notify.json", "[2018-06-21 07:19:06,797] (heat-config) [INFO] ", "[2018-06-21 07:19:06,797] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:19:06,424 p=23396 u=mistral | TASK [Output for ComputeHostPrepDeployment] ************************************ >2018-06-21 07:19:06,473 p=23396 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:18:56,899] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.json", > "[2018-06-21 07:19:06,400] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] 
***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [Mount Nova NFS Share] ****************************************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/nova)\\nok: [localhost] => (item=/var/lib/libvirt)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [is Instance HA enabled] **************************************************\\nok: [localhost]\\n\\nTASK [prepare Instance HA script directory] ************************************\\nskipping: [localhost]\\n\\nTASK [install Instance HA script that runs nova-compute] ***********************\\nskipping: [localhost]\\n\\nTASK [Get list of instance HA compute nodes] ***********************************\\nskipping: [localhost]\\n\\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\\nskipping: [localhost]\\n\\nTASK [create libvirt persistent data directories] ******************************\\nok: [localhost] => (item=/etc/libvirt)\\nok: [localhost] => (item=/etc/libvirt/secrets)\\nok: [localhost] => (item=/etc/libvirt/qemu)\\nok: [localhost] => (item=/var/lib/libvirt)\\nchanged: [localhost] => (item=/var/log/containers/libvirt)\\n\\nTASK [ensure qemu group is present on the host] ********************************\\nok: [localhost]\\n\\nTASK [ensure qemu user is present on the host] *********************************\\nok: [localhost]\\n\\nTASK [create directory for vhost-user sockets with qemu ownership] *************\\nchanged: [localhost]\\n\\nTASK [check if libvirt is installed] *******************************************\\nchanged: [localhost]\\n\\nTASK [make sure libvirt services are disabled] *********************************\\nchanged: [localhost] => 
(item=libvirtd.service)\\nchanged: [localhost] => (item=virtlogd.socket)\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=20 changed=12 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \" [WARNING]: Consider using the yum, dnf or zypper module rather than running\\nrpm. If you need to use command because yum, dnf or zypper is insufficient you\\ncan add warn=False to this command task or set command_warnings=False in\\nansible.cfg to get rid of this message.\\n\", \"deploy_status_code\": 0}", > "[2018-06-21 07:19:06,400] (heat-config) [DEBUG] [2018-06-21 07:18:56,923] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/a65e71e4-ed81-41a4-8893-3eac5cffc60b_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/a65e71e4-ed81-41a4-8893-3eac5cffc60b_variables.json", > "[2018-06-21 07:19:06,396] (heat-config) [INFO] Return code 0", > "[2018-06-21 07:19:06,396] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost]", > "", > "TASK [ceilometer logs readme] **************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/neutron)", > "", > "TASK [neutron logs readme] *****************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", > "...ignoring", > "", > "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", > "ok: [localhost]", > "", > "TASK [Stop and disable iscsid.socket service] **********************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost]", > "", > "TASK [nova logs readme] ********************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", > "...ignoring", > "", > "TASK [Mount Nova NFS Share] ****************************************************", > "skipping: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/lib/nova)", > "ok: [localhost] => (item=/var/lib/libvirt)", > "", > "TASK [ensure ceph configurations exist] ****************************************", > "changed: [localhost]", > "", > "TASK [is Instance HA enabled] **************************************************", > "ok: [localhost]", > "", > "TASK [prepare Instance HA script directory] ************************************", > "skipping: [localhost]", > "", > "TASK [install Instance HA script that runs nova-compute] ***********************", > "skipping: [localhost]", > "", > "TASK [Get list of instance HA compute nodes] ***********************************", > "skipping: [localhost]", > "", > "TASK [If instance HA is enabled on the node activate the evacuation completed check] ***", > "skipping: [localhost]", > "", > "TASK [create libvirt persistent data directories] ******************************", > "ok: [localhost] => (item=/etc/libvirt)", > "ok: [localhost] => (item=/etc/libvirt/secrets)", > "ok: [localhost] => (item=/etc/libvirt/qemu)", > "ok: [localhost] => (item=/var/lib/libvirt)", > "changed: [localhost] => (item=/var/log/containers/libvirt)", > "", > "TASK [ensure qemu group is present on the host] ********************************", > "ok: [localhost]", > "", > "TASK [ensure qemu user is present on the host] *********************************", > "ok: [localhost]", > "", > "TASK [create directory for vhost-user sockets with qemu ownership] *************", > "changed: [localhost]", > "", > "TASK [check if libvirt is installed] *******************************************", > "changed: 
[localhost]", > "", > "TASK [make sure libvirt services are disabled] *********************************", > "changed: [localhost] => (item=libvirtd.service)", > "changed: [localhost] => (item=virtlogd.socket)", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=20 changed=12 unreachable=0 failed=0 ", > "", > "", > "[2018-06-21 07:19:06,396] (heat-config) [INFO] [WARNING]: Consider using the yum, dnf or zypper module rather than running", > "rpm. If you need to use command because yum, dnf or zypper is insufficient you", > "can add warn=False to this command task or set command_warnings=False in", > "ansible.cfg to get rid of this message.", > "", > "[2018-06-21 07:19:06,396] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/a65e71e4-ed81-41a4-8893-3eac5cffc60b_playbook.yaml", > "", > "[2018-06-21 07:19:06,400] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-06-21 07:19:06,401] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.json < /var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.notify.json", > "[2018-06-21 07:19:06,797] (heat-config) [INFO] ", > "[2018-06-21 07:19:06,797] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:19:06,493 p=23396 u=mistral | TASK [Check-mode for Run deployment ComputeHostPrepDeployment] ***************** >2018-06-21 07:19:06,505 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:06,523 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 
07:19:06,570 p=23396 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "d5e3aec3-d014-48ff-82ff-fd73b9664e9f"}, "changed": false} >2018-06-21 07:19:06,588 p=23396 u=mistral | TASK [Render deployment file for ComputeArtifactsDeploy] *********************** >2018-06-21 07:19:07,186 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "5d38c68b0f75dce30ee514bc20f687e8722a78ed", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeArtifactsDeploy-d5e3aec3-d014-48ff-82ff-fd73b9664e9f", "gid": 0, "group": "root", "md5sum": "97052ac15c16557a8a42e062b22280d0", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2015, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579946.64-223791668092961/source", "state": "file", "uid": 0} >2018-06-21 07:19:07,206 p=23396 u=mistral | TASK [Check if deployed file exists for ComputeArtifactsDeploy] **************** >2018-06-21 07:19:07,522 p=23396 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:19:07,543 p=23396 u=mistral | TASK [Check previous deployment rc for ComputeArtifactsDeploy] ***************** >2018-06-21 07:19:07,559 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:07,579 p=23396 u=mistral | TASK [Remove deployed file for ComputeArtifactsDeploy when previous deployment failed] *** >2018-06-21 07:19:07,596 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:07,614 p=23396 u=mistral | TASK [Force remove deployed file for ComputeArtifactsDeploy] ******************* >2018-06-21 07:19:07,629 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:07,648 p=23396 u=mistral | TASK [Run deployment ComputeArtifactsDeploy] *********************************** >2018-06-21 
07:19:08,430 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.notify.json)", "delta": "0:00:00.458293", "end": "2018-06-21 07:19:08.839416", "rc": 0, "start": "2018-06-21 07:19:08.381123", "stderr": "[2018-06-21 07:19:08,404] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.json\n[2018-06-21 07:19:08,433] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-21 07:19:08,434] (heat-config) [DEBUG] [2018-06-21 07:19:08,425] (heat-config) [INFO] artifact_urls=\n[2018-06-21 07:19:08,425] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019\n[2018-06-21 07:19:08,425] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-21 07:19:08,425] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-haw7i3vfvlpg-ComputeArtifactsDeploy-bvgvx6drqjy3-0-7nr5oh24jef2/9e2ce371-b47b-4776-b3c2-0ca2a62385c7\n[2018-06-21 07:19:08,425] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-21 07:19:08,425] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-21 07:19:08,425] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/d5e3aec3-d014-48ff-82ff-fd73b9664e9f\n[2018-06-21 07:19:08,430] (heat-config) [INFO] No artifact_urls was set. 
Skipping...\n\n[2018-06-21 07:19:08,430] (heat-config) [DEBUG] \n[2018-06-21 07:19:08,431] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/d5e3aec3-d014-48ff-82ff-fd73b9664e9f\n\n[2018-06-21 07:19:08,434] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-21 07:19:08,434] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.json < /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.notify.json\n[2018-06-21 07:19:08,833] (heat-config) [INFO] \n[2018-06-21 07:19:08,833] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:19:08,404] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.json", "[2018-06-21 07:19:08,433] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-21 07:19:08,434] (heat-config) [DEBUG] [2018-06-21 07:19:08,425] (heat-config) [INFO] artifact_urls=", "[2018-06-21 07:19:08,425] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", "[2018-06-21 07:19:08,425] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-21 07:19:08,425] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-haw7i3vfvlpg-ComputeArtifactsDeploy-bvgvx6drqjy3-0-7nr5oh24jef2/9e2ce371-b47b-4776-b3c2-0ca2a62385c7", "[2018-06-21 07:19:08,425] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-21 07:19:08,425] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-21 07:19:08,425] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/d5e3aec3-d014-48ff-82ff-fd73b9664e9f", "[2018-06-21 07:19:08,430] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", "", "[2018-06-21 07:19:08,430] (heat-config) [DEBUG] ", "[2018-06-21 07:19:08,431] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/d5e3aec3-d014-48ff-82ff-fd73b9664e9f", "", "[2018-06-21 07:19:08,434] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-21 07:19:08,434] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.json < /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.notify.json", "[2018-06-21 07:19:08,833] (heat-config) [INFO] ", "[2018-06-21 07:19:08,833] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:19:08,448 p=23396 u=mistral | TASK [Output for ComputeArtifactsDeploy] *************************************** >2018-06-21 07:19:08,493 p=23396 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:19:08,404] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.json", > "[2018-06-21 07:19:08,433] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-21 07:19:08,434] (heat-config) [DEBUG] [2018-06-21 07:19:08,425] (heat-config) [INFO] artifact_urls=", > "[2018-06-21 07:19:08,425] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", > "[2018-06-21 07:19:08,425] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-21 07:19:08,425] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-haw7i3vfvlpg-ComputeArtifactsDeploy-bvgvx6drqjy3-0-7nr5oh24jef2/9e2ce371-b47b-4776-b3c2-0ca2a62385c7", > "[2018-06-21 07:19:08,425] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-21 07:19:08,425] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-21 07:19:08,425] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/d5e3aec3-d014-48ff-82ff-fd73b9664e9f", > "[2018-06-21 07:19:08,430] (heat-config) [INFO] No artifact_urls was set. Skipping...", > "", > "[2018-06-21 07:19:08,430] (heat-config) [DEBUG] ", > "[2018-06-21 07:19:08,431] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/d5e3aec3-d014-48ff-82ff-fd73b9664e9f", > "", > "[2018-06-21 07:19:08,434] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-21 07:19:08,434] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.json < /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.notify.json", > "[2018-06-21 07:19:08,833] (heat-config) [INFO] ", > "[2018-06-21 07:19:08,833] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:19:08,511 p=23396 u=mistral | TASK [Check-mode for Run deployment ComputeArtifactsDeploy] ******************** >2018-06-21 07:19:08,524 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:08,545 p=23396 u=mistral | TASK [include] 
***************************************************************** >2018-06-21 07:19:08,626 p=23396 u=mistral | TASK [include] ***************************************************************** >2018-06-21 07:19:08,714 p=23396 u=mistral | TASK [include] ***************************************************************** >2018-06-21 07:19:08,933 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/CephStorage/deployments.yaml for ceph-0 >2018-06-21 07:19:08,941 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/CephStorage/deployments.yaml for ceph-0 >2018-06-21 07:19:08,948 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/CephStorage/deployments.yaml for ceph-0 >2018-06-21 07:19:08,956 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/CephStorage/deployments.yaml for ceph-0 >2018-06-21 07:19:08,963 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/CephStorage/deployments.yaml for ceph-0 >2018-06-21 07:19:08,971 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/CephStorage/deployments.yaml for ceph-0 >2018-06-21 07:19:08,978 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/CephStorage/deployments.yaml for ceph-0 >2018-06-21 07:19:08,986 p=23396 u=mistral | included: /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/CephStorage/deployments.yaml for ceph-0 >2018-06-21 07:19:09,053 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 07:19:09,108 p=23396 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "3f5d31fd-1ed4-43e5-9d1a-3866348fbafa"}, "changed": false} >2018-06-21 07:19:09,127 p=23396 u=mistral | TASK [Render deployment file for NetworkDeployment] **************************** >2018-06-21 07:19:09,719 p=23396 u=mistral | changed: [ceph-0] => {"changed": 
true, "checksum": "4bed262f05bf9fd9720074015cf870f2b376bdc5", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-3f5d31fd-1ed4-43e5-9d1a-3866348fbafa", "gid": 0, "group": "root", "md5sum": "7249706de9bca0a2769e4983649b08a4", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 8777, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579949.18-51953864714629/source", "state": "file", "uid": 0} >2018-06-21 07:19:09,738 p=23396 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] ********************* >2018-06-21 07:19:10,048 p=23396 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:19:10,067 p=23396 u=mistral | TASK [Check previous deployment rc for NetworkDeployment] ********************** >2018-06-21 07:19:10,083 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:10,102 p=23396 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] *** >2018-06-21 07:19:10,118 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:10,137 p=23396 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************ >2018-06-21 07:19:10,153 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:10,171 p=23396 u=mistral | TASK [Run deployment NetworkDeployment] **************************************** >2018-06-21 07:19:25,371 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.notify.json)", "delta": "0:00:14.870621", "end": "2018-06-21 07:19:25.767394", "rc": 0, "start": "2018-06-21 07:19:10.896773", "stderr": "[2018-06-21 
07:19:10,921] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.json\n[2018-06-21 07:19:25,362] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": 
\\\"172.17.4.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/21 07:19:11 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/21 07:19:11 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/21 07:19:11 AM] [INFO] Not using any mapping file.\\n[2018/06/21 07:19:11 AM] [INFO] Finding active nics\\n[2018/06/21 07:19:11 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/21 07:19:11 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/21 07:19:11 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/21 07:19:11 AM] [INFO] lo is not an active nic\\n[2018/06/21 07:19:11 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/21 07:19:11 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/21 07:19:11 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/21 07:19:11 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/21 07:19:11 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/21 07:19:11 AM] [INFO] adding interface: eth0\\n[2018/06/21 07:19:11 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/21 07:19:11 AM] [INFO] adding bridge: br-isolated\\n[2018/06/21 07:19:11 AM] [INFO] adding interface: eth1\\n[2018/06/21 07:19:11 AM] [INFO] adding vlan: vlan30\\n[2018/06/21 07:19:11 AM] [INFO] adding vlan: vlan40\\n[2018/06/21 07:19:11 AM] [INFO] applying network configs...\\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/21 07:19:11 AM] [INFO] running 
ifdown on interface: eth0\\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/21 07:19:11 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/21 07:19:12 AM] [INFO] running ifup on interface: eth1\\n[2018/06/21 07:19:12 AM] [INFO] running ifup on interface: eth0\\n[2018/06/21 07:19:16 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/21 07:19:20 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/21 
07:19:24 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/21 07:19:24 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-06-21 07:19:25,362] (heat-config) [DEBUG] [2018-06-21 07:19:10,941] (heat-config) [INFO] interface_name=nic1\n[2018-06-21 07:19:10,942] (heat-config) [INFO] bridge_name=br-ex\n[2018-06-21 07:19:10,942] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96\n[2018-06-21 07:19:10,942] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-21 07:19:10,942] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-dcrpu75ghvmg-0-jybo3u4pnq7o-NetworkDeployment-kdxxldjgvahy-TripleOSoftwareDeployment-owarhab7awno/526eb1d9-c967-46a7-9d09-85871ebc086e\n[2018-06-21 07:19:10,942] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-21 07:19:10,942] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-21 07:19:10,942] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa\n[2018-06-21 07:19:25,358] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS\n\n[2018-06-21 07:19:25,358] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": 
\"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/06/21 07:19:11 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/06/21 07:19:11 AM] [INFO] Ifcfg net config provider created.\n[2018/06/21 07:19:11 AM] [INFO] Not using any mapping file.\n[2018/06/21 07:19:11 AM] [INFO] Finding active nics\n[2018/06/21 07:19:11 AM] [INFO] eth0 is an embedded active nic\n[2018/06/21 07:19:11 AM] [INFO] eth1 is an embedded active nic\n[2018/06/21 07:19:11 AM] [INFO] eth2 is an embedded active nic\n[2018/06/21 07:19:11 AM] [INFO] lo is not an active nic\n[2018/06/21 07:19:11 AM] [INFO] No DPDK mapping available in path 
(/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/06/21 07:19:11 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/06/21 07:19:11 AM] [INFO] nic3 mapped to: eth2\n[2018/06/21 07:19:11 AM] [INFO] nic2 mapped to: eth1\n[2018/06/21 07:19:11 AM] [INFO] nic1 mapped to: eth0\n[2018/06/21 07:19:11 AM] [INFO] adding interface: eth0\n[2018/06/21 07:19:11 AM] [INFO] adding custom route for interface: eth0\n[2018/06/21 07:19:11 AM] [INFO] adding bridge: br-isolated\n[2018/06/21 07:19:11 AM] [INFO] adding interface: eth1\n[2018/06/21 07:19:11 AM] [INFO] adding vlan: vlan30\n[2018/06/21 07:19:11 AM] [INFO] adding vlan: vlan40\n[2018/06/21 07:19:11 AM] [INFO] applying network configs...\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan40\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: eth1\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: eth0\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan40\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/06/21 07:19:11 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-eth1\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/06/21 07:19:11 AM] [INFO] running ifup on bridge: br-isolated\n[2018/06/21 07:19:12 AM] [INFO] running ifup on interface: eth1\n[2018/06/21 07:19:12 AM] [INFO] running ifup on interface: eth0\n[2018/06/21 07:19:16 AM] [INFO] running ifup on interface: vlan30\n[2018/06/21 07:19:20 AM] [INFO] running ifup on interface: vlan40\n[2018/06/21 07:19:24 AM] [INFO] running ifup on interface: vlan30\n[2018/06/21 07:19:24 AM] [INFO] running ifup on interface: vlan40\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config 
--key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.3\n++ '[' -n 192.168.24.3 ']'\n++ break\n++ echo 192.168.24.3\n+ local METADATA_IP=192.168.24.3\n+ '[' -n 192.168.24.3 ']'\n+ is_local_ip 192.168.24.3\n+ local IP_TO_CHECK=192.168.24.3\n+ ip -o a\n+ grep 'inet6\\? 192.168.24.3/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\n+ _ping=ping\n+ [[ 192.168.24.3 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.3\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-06-21 07:19:25,358] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa\n\n[2018-06-21 07:19:25,362] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-21 07:19:25,363] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.json < /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.notify.json\n[2018-06-21 07:19:25,761] (heat-config) [INFO] \n[2018-06-21 07:19:25,761] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:19:10,921] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.json", "[2018-06-21 07:19:25,362] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": 
[{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/21 07:19:11 AM] [INFO] Using config file 
at: /etc/os-net-config/config.json\\n[2018/06/21 07:19:11 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/21 07:19:11 AM] [INFO] Not using any mapping file.\\n[2018/06/21 07:19:11 AM] [INFO] Finding active nics\\n[2018/06/21 07:19:11 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/21 07:19:11 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/21 07:19:11 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/21 07:19:11 AM] [INFO] lo is not an active nic\\n[2018/06/21 07:19:11 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/21 07:19:11 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/21 07:19:11 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/21 07:19:11 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/21 07:19:11 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/21 07:19:11 AM] [INFO] adding interface: eth0\\n[2018/06/21 07:19:11 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/21 07:19:11 AM] [INFO] adding bridge: br-isolated\\n[2018/06/21 07:19:11 AM] [INFO] adding interface: eth1\\n[2018/06/21 07:19:11 AM] [INFO] adding vlan: vlan30\\n[2018/06/21 07:19:11 AM] [INFO] adding vlan: vlan40\\n[2018/06/21 07:19:11 AM] [INFO] applying network configs...\\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/21 07:19:11 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/21 07:19:11 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/21 07:19:12 AM] [INFO] running ifup on interface: eth1\\n[2018/06/21 07:19:12 AM] [INFO] running ifup on interface: eth0\\n[2018/06/21 07:19:16 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/21 07:19:20 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/21 07:19:24 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/21 07:19:24 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed 
-e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-06-21 07:19:25,362] (heat-config) [DEBUG] [2018-06-21 07:19:10,941] (heat-config) [INFO] interface_name=nic1", "[2018-06-21 07:19:10,942] (heat-config) [INFO] bridge_name=br-ex", "[2018-06-21 07:19:10,942] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", "[2018-06-21 07:19:10,942] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-21 07:19:10,942] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-dcrpu75ghvmg-0-jybo3u4pnq7o-NetworkDeployment-kdxxldjgvahy-TripleOSoftwareDeployment-owarhab7awno/526eb1d9-c967-46a7-9d09-85871ebc086e", "[2018-06-21 07:19:10,942] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-21 07:19:10,942] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-21 07:19:10,942] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa", "[2018-06-21 07:19:25,358] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", "", "[2018-06-21 07:19:25,358] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, 
{\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/06/21 07:19:11 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/06/21 07:19:11 AM] [INFO] Ifcfg net config provider created.", "[2018/06/21 07:19:11 AM] [INFO] Not using any mapping file.", "[2018/06/21 07:19:11 AM] [INFO] Finding active nics", "[2018/06/21 07:19:11 AM] [INFO] eth0 is an embedded active nic", "[2018/06/21 07:19:11 AM] [INFO] eth1 is an embedded active nic", "[2018/06/21 07:19:11 AM] [INFO] eth2 is an embedded active nic", "[2018/06/21 07:19:11 AM] [INFO] lo is not an active nic", "[2018/06/21 07:19:11 AM] 
[INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/06/21 07:19:11 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/06/21 07:19:11 AM] [INFO] nic3 mapped to: eth2", "[2018/06/21 07:19:11 AM] [INFO] nic2 mapped to: eth1", "[2018/06/21 07:19:11 AM] [INFO] nic1 mapped to: eth0", "[2018/06/21 07:19:11 AM] [INFO] adding interface: eth0", "[2018/06/21 07:19:11 AM] [INFO] adding custom route for interface: eth0", "[2018/06/21 07:19:11 AM] [INFO] adding bridge: br-isolated", "[2018/06/21 07:19:11 AM] [INFO] adding interface: eth1", "[2018/06/21 07:19:11 AM] [INFO] adding vlan: vlan30", "[2018/06/21 07:19:11 AM] [INFO] adding vlan: vlan40", "[2018/06/21 07:19:11 AM] [INFO] applying network configs...", "[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan40", "[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: eth1", "[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: eth0", "[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan40", "[2018/06/21 07:19:11 AM] [INFO] running ifdown on bridge: br-isolated", "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/06/21 07:19:11 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/06/21 07:19:11 AM] [INFO] running ifup on bridge: br-isolated", "[2018/06/21 07:19:12 AM] [INFO] running ifup on interface: eth1", "[2018/06/21 07:19:12 AM] [INFO] running ifup on interface: eth0", "[2018/06/21 07:19:16 AM] [INFO] running ifup on interface: vlan30", "[2018/06/21 07:19:20 AM] [INFO] running ifup on interface: vlan40", "[2018/06/21 07:19:24 AM] [INFO] running ifup on interface: vlan30", "[2018/06/21 07:19:24 AM] [INFO] running ifup on interface: vlan40", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in 
os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.3", "++ '[' -n 192.168.24.3 ']'", "++ break", "++ echo 192.168.24.3", "+ local METADATA_IP=192.168.24.3", "+ '[' -n 192.168.24.3 ']'", "+ is_local_ip 192.168.24.3", "+ local IP_TO_CHECK=192.168.24.3", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.3/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", "+ _ping=ping", "+ [[ 192.168.24.3 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.3", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-06-21 07:19:25,358] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa", "", "[2018-06-21 07:19:25,362] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-21 07:19:25,363] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.json < /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.notify.json", "[2018-06-21 07:19:25,761] (heat-config) [INFO] ", "[2018-06-21 07:19:25,761] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:19:25,390 p=23396 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-06-21 07:19:25,445 p=23396 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:19:10,921] 
(heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.json", > "[2018-06-21 07:19:25,362] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": 
\\\"172.17.4.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/21 07:19:11 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/21 07:19:11 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/21 07:19:11 AM] [INFO] Not using any mapping file.\\n[2018/06/21 07:19:11 AM] [INFO] Finding active nics\\n[2018/06/21 07:19:11 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/21 07:19:11 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/21 07:19:11 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/21 07:19:11 AM] [INFO] lo is not an active nic\\n[2018/06/21 07:19:11 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/21 07:19:11 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/21 07:19:11 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/21 07:19:11 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/21 07:19:11 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/21 07:19:11 AM] [INFO] adding interface: eth0\\n[2018/06/21 07:19:11 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/21 07:19:11 AM] [INFO] adding bridge: br-isolated\\n[2018/06/21 07:19:11 AM] [INFO] adding interface: eth1\\n[2018/06/21 07:19:11 AM] [INFO] adding vlan: vlan30\\n[2018/06/21 07:19:11 AM] [INFO] adding vlan: vlan40\\n[2018/06/21 07:19:11 AM] [INFO] applying network configs...\\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/21 07:19:11 AM] [INFO] running 
ifdown on interface: eth0\\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/21 07:19:11 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/21 07:19:11 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/21 07:19:12 AM] [INFO] running ifup on interface: eth1\\n[2018/06/21 07:19:12 AM] [INFO] running ifup on interface: eth0\\n[2018/06/21 07:19:16 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/21 07:19:20 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/21 
07:19:24 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/21 07:19:24 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-06-21 07:19:25,362] (heat-config) [DEBUG] [2018-06-21 07:19:10,941] (heat-config) [INFO] interface_name=nic1", > "[2018-06-21 07:19:10,942] (heat-config) [INFO] bridge_name=br-ex", > "[2018-06-21 07:19:10,942] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", > "[2018-06-21 07:19:10,942] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-21 07:19:10,942] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-dcrpu75ghvmg-0-jybo3u4pnq7o-NetworkDeployment-kdxxldjgvahy-TripleOSoftwareDeployment-owarhab7awno/526eb1d9-c967-46a7-9d09-85871ebc086e", > "[2018-06-21 07:19:10,942] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-21 07:19:10,942] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-21 07:19:10,942] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa", > "[2018-06-21 07:19:25,358] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", > "", > "[2018-06-21 07:19:25,358] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", 
\"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/06/21 07:19:11 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/06/21 07:19:11 AM] [INFO] Ifcfg net config provider created.", > "[2018/06/21 07:19:11 AM] [INFO] Not using any mapping file.", > "[2018/06/21 07:19:11 AM] [INFO] Finding active nics", > "[2018/06/21 07:19:11 AM] [INFO] eth0 is an embedded active nic", > "[2018/06/21 07:19:11 AM] [INFO] eth1 is an embedded active nic", > "[2018/06/21 07:19:11 AM] [INFO] eth2 is an embedded active nic", > "[2018/06/21 07:19:11 AM] [INFO] 
lo is not an active nic", > "[2018/06/21 07:19:11 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/06/21 07:19:11 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/06/21 07:19:11 AM] [INFO] nic3 mapped to: eth2", > "[2018/06/21 07:19:11 AM] [INFO] nic2 mapped to: eth1", > "[2018/06/21 07:19:11 AM] [INFO] nic1 mapped to: eth0", > "[2018/06/21 07:19:11 AM] [INFO] adding interface: eth0", > "[2018/06/21 07:19:11 AM] [INFO] adding custom route for interface: eth0", > "[2018/06/21 07:19:11 AM] [INFO] adding bridge: br-isolated", > "[2018/06/21 07:19:11 AM] [INFO] adding interface: eth1", > "[2018/06/21 07:19:11 AM] [INFO] adding vlan: vlan30", > "[2018/06/21 07:19:11 AM] [INFO] adding vlan: vlan40", > "[2018/06/21 07:19:11 AM] [INFO] applying network configs...", > "[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan40", > "[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: eth1", > "[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: eth0", > "[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/21 07:19:11 AM] [INFO] running ifdown on interface: vlan40", > "[2018/06/21 07:19:11 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", > "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/06/21 07:19:11 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", > "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", > "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/06/21 07:19:11 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/06/21 07:19:11 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/06/21 07:19:12 AM] [INFO] running ifup on interface: eth1", > "[2018/06/21 07:19:12 AM] [INFO] running ifup on interface: eth0", > "[2018/06/21 07:19:16 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/21 07:19:20 AM] [INFO] running ifup on interface: vlan40", > "[2018/06/21 07:19:24 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/21 07:19:24 AM] [INFO] running ifup on interface: vlan40", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key 
os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.3", > "++ '[' -n 192.168.24.3 ']'", > "++ break", > "++ echo 192.168.24.3", > "+ local METADATA_IP=192.168.24.3", > "+ '[' -n 192.168.24.3 ']'", > "+ is_local_ip 192.168.24.3", > "+ local IP_TO_CHECK=192.168.24.3", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.3/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", > "+ _ping=ping", > "+ [[ 192.168.24.3 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.3", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-06-21 07:19:25,358] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa", > "", > "[2018-06-21 07:19:25,362] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-21 07:19:25,363] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.json < /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.notify.json", > "[2018-06-21 07:19:25,761] (heat-config) [INFO] ", > "[2018-06-21 07:19:25,761] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 
07:19:25,465 p=23396 u=mistral | TASK [Check-mode for Run deployment NetworkDeployment] ************************* >2018-06-21 07:19:25,481 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:25,499 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 07:19:25,549 p=23396 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "a9360f96-7faf-4ae7-aa0f-2872378a2e1d"}, "changed": false} >2018-06-21 07:19:25,568 p=23396 u=mistral | TASK [Render deployment file for CephStorageUpgradeInitDeployment] ************* >2018-06-21 07:19:26,148 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "a3bc96cd0c639fa628eb12d5d1fa19becd1c23a1", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageUpgradeInitDeployment-a9360f96-7faf-4ae7-aa0f-2872378a2e1d", "gid": 0, "group": "root", "md5sum": "ca7b93ad6f5f332d8a96e1ed40edeed9", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1186, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579965.61-271424669137146/source", "state": "file", "uid": 0} >2018-06-21 07:19:26,169 p=23396 u=mistral | TASK [Check if deployed file exists for CephStorageUpgradeInitDeployment] ****** >2018-06-21 07:19:26,482 p=23396 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:19:26,500 p=23396 u=mistral | TASK [Check previous deployment rc for CephStorageUpgradeInitDeployment] ******* >2018-06-21 07:19:26,516 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:26,534 p=23396 u=mistral | TASK [Remove deployed file for CephStorageUpgradeInitDeployment when previous deployment failed] *** >2018-06-21 07:19:26,549 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 
07:19:26,566 p=23396 u=mistral | TASK [Force remove deployed file for CephStorageUpgradeInitDeployment] ********* >2018-06-21 07:19:26,586 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:26,608 p=23396 u=mistral | TASK [Run deployment CephStorageUpgradeInitDeployment] ************************* >2018-06-21 07:19:27,342 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.notify.json)", "delta": "0:00:00.427303", "end": "2018-06-21 07:19:27.753545", "rc": 0, "start": "2018-06-21 07:19:27.326242", "stderr": "[2018-06-21 07:19:27,349] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.json\n[2018-06-21 07:19:27,374] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-21 07:19:27,374] (heat-config) [DEBUG] [2018-06-21 07:19:27,369] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96\n[2018-06-21 07:19:27,369] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-21 07:19:27,369] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-dcrpu75ghvmg-0-jybo3u4pnq7o-CephStorageUpgradeInitDeployment-z5642y5lqq33/780ea694-dbee-4330-a99e-f1a6d9a4d1d9\n[2018-06-21 07:19:27,369] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-21 07:19:27,369] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-21 07:19:27,369] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/a9360f96-7faf-4ae7-aa0f-2872378a2e1d\n[2018-06-21 07:19:27,371] (heat-config) [INFO] \n[2018-06-21 07:19:27,372] (heat-config) [DEBUG] \n[2018-06-21 07:19:27,372] (heat-config) [INFO] Completed 
/var/lib/heat-config/heat-config-script/a9360f96-7faf-4ae7-aa0f-2872378a2e1d\n\n[2018-06-21 07:19:27,375] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-21 07:19:27,375] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.json < /var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.notify.json\n[2018-06-21 07:19:27,747] (heat-config) [INFO] \n[2018-06-21 07:19:27,748] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:19:27,349] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.json", "[2018-06-21 07:19:27,374] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-21 07:19:27,374] (heat-config) [DEBUG] [2018-06-21 07:19:27,369] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", "[2018-06-21 07:19:27,369] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-21 07:19:27,369] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-dcrpu75ghvmg-0-jybo3u4pnq7o-CephStorageUpgradeInitDeployment-z5642y5lqq33/780ea694-dbee-4330-a99e-f1a6d9a4d1d9", "[2018-06-21 07:19:27,369] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-21 07:19:27,369] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-21 07:19:27,369] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/a9360f96-7faf-4ae7-aa0f-2872378a2e1d", "[2018-06-21 07:19:27,371] (heat-config) [INFO] ", "[2018-06-21 07:19:27,372] (heat-config) [DEBUG] ", "[2018-06-21 07:19:27,372] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/a9360f96-7faf-4ae7-aa0f-2872378a2e1d", "", "[2018-06-21 07:19:27,375] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-21 07:19:27,375] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.json < /var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.notify.json", "[2018-06-21 07:19:27,747] (heat-config) [INFO] ", "[2018-06-21 07:19:27,748] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:19:27,362 p=23396 u=mistral | TASK [Output for CephStorageUpgradeInitDeployment] ***************************** >2018-06-21 07:19:27,409 p=23396 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:19:27,349] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.json", > "[2018-06-21 07:19:27,374] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-21 07:19:27,374] (heat-config) [DEBUG] [2018-06-21 07:19:27,369] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", > "[2018-06-21 07:19:27,369] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-21 07:19:27,369] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-dcrpu75ghvmg-0-jybo3u4pnq7o-CephStorageUpgradeInitDeployment-z5642y5lqq33/780ea694-dbee-4330-a99e-f1a6d9a4d1d9", > "[2018-06-21 07:19:27,369] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-21 07:19:27,369] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-21 07:19:27,369] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/a9360f96-7faf-4ae7-aa0f-2872378a2e1d", > "[2018-06-21 07:19:27,371] (heat-config) [INFO] ", > "[2018-06-21 07:19:27,372] (heat-config) [DEBUG] ", > "[2018-06-21 07:19:27,372] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/a9360f96-7faf-4ae7-aa0f-2872378a2e1d", > "", > "[2018-06-21 07:19:27,375] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-21 07:19:27,375] (heat-config) [DEBUG] 
Running heat-config-notify /var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.json < /var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.notify.json", > "[2018-06-21 07:19:27,747] (heat-config) [INFO] ", > "[2018-06-21 07:19:27,748] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:19:27,428 p=23396 u=mistral | TASK [Check-mode for Run deployment CephStorageUpgradeInitDeployment] ********** >2018-06-21 07:19:27,443 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:27,461 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 07:19:27,548 p=23396 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "ef029c42-2d2f-415f-9bcb-619d07293bc4"}, "changed": false} >2018-06-21 07:19:27,566 p=23396 u=mistral | TASK [Render deployment file for CephStorageDeployment] ************************ >2018-06-21 07:19:28,145 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "ca58089851564f39ffcfafbd040ab9c688eb21ab", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageDeployment-ef029c42-2d2f-415f-9bcb-619d07293bc4", "gid": 0, "group": "root", "md5sum": "14a40e768d13597d21eeb34cb51f5f0a", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9062, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579967.66-62834293635966/source", "state": "file", "uid": 0} >2018-06-21 07:19:28,163 p=23396 u=mistral | TASK [Check if deployed file exists for CephStorageDeployment] ***************** >2018-06-21 07:19:28,460 p=23396 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:19:28,480 p=23396 u=mistral | TASK [Check previous deployment rc for CephStorageDeployment] ****************** >2018-06-21 07:19:28,498 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-06-21 07:19:28,516 p=23396 u=mistral | TASK [Remove deployed file for CephStorageDeployment when previous deployment failed] *** >2018-06-21 07:19:28,534 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:28,553 p=23396 u=mistral | TASK [Force remove deployed file for CephStorageDeployment] ******************** >2018-06-21 07:19:28,569 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:28,588 p=23396 u=mistral | TASK [Run deployment CephStorageDeployment] ************************************ >2018-06-21 07:19:29,416 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.notify.json)", "delta": "0:00:00.514869", "end": "2018-06-21 07:19:29.826788", "rc": 0, "start": "2018-06-21 07:19:29.311919", "stderr": "[2018-06-21 07:19:29,334] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.json\n[2018-06-21 07:19:29,442] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-21 07:19:29,442] (heat-config) [DEBUG] \n[2018-06-21 07:19:29,442] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-21 07:19:29,443] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.json < /var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.notify.json\n[2018-06-21 07:19:29,821] (heat-config) [INFO] \n[2018-06-21 07:19:29,821] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:19:29,334] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < 
/var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.json", "[2018-06-21 07:19:29,442] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-21 07:19:29,442] (heat-config) [DEBUG] ", "[2018-06-21 07:19:29,442] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-21 07:19:29,443] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.json < /var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.notify.json", "[2018-06-21 07:19:29,821] (heat-config) [INFO] ", "[2018-06-21 07:19:29,821] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:19:29,435 p=23396 u=mistral | TASK [Output for CephStorageDeployment] **************************************** >2018-06-21 07:19:29,479 p=23396 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:19:29,334] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.json", > "[2018-06-21 07:19:29,442] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-21 07:19:29,442] (heat-config) [DEBUG] ", > "[2018-06-21 07:19:29,442] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-21 07:19:29,443] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.json < /var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.notify.json", > "[2018-06-21 07:19:29,821] (heat-config) [INFO] ", > "[2018-06-21 07:19:29,821] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:19:29,496 p=23396 u=mistral | TASK [Check-mode for Run deployment CephStorageDeployment] ********************* >2018-06-21 07:19:29,509 p=23396 u=mistral | skipping: [ceph-0] 
=> {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:29,525 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 07:19:29,576 p=23396 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9"}, "changed": false} >2018-06-21 07:19:29,594 p=23396 u=mistral | TASK [Render deployment file for CephStorageHostsDeployment] ******************* >2018-06-21 07:19:30,167 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "776cda523bf9267d9e0bff262f11545b2d9ff122", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageHostsDeployment-20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9", "gid": 0, "group": "root", "md5sum": "a0c7eb8cb4afd8ccf003e2d65228f716", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4087, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579969.65-120862348038929/source", "state": "file", "uid": 0} >2018-06-21 07:19:30,187 p=23396 u=mistral | TASK [Check if deployed file exists for CephStorageHostsDeployment] ************ >2018-06-21 07:19:30,503 p=23396 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:19:30,524 p=23396 u=mistral | TASK [Check previous deployment rc for CephStorageHostsDeployment] ************* >2018-06-21 07:19:30,542 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:30,561 p=23396 u=mistral | TASK [Remove deployed file for CephStorageHostsDeployment when previous deployment failed] *** >2018-06-21 07:19:30,579 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:30,598 p=23396 u=mistral | TASK [Force remove deployed file for CephStorageHostsDeployment] *************** >2018-06-21 07:19:30,616 p=23396 u=mistral | skipping: [ceph-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:30,635 p=23396 u=mistral | TASK [Run deployment CephStorageHostsDeployment] ******************************* >2018-06-21 07:19:31,436 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.notify.json)", "delta": "0:00:00.451264", "end": "2018-06-21 07:19:31.818777", "rc": 0, "start": "2018-06-21 07:19:31.367513", "stderr": "[2018-06-21 07:19:31,390] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.json\n[2018-06-21 07:19:31,424] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain 
compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 
compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 
ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}\n[2018-06-21 07:19:31,424] (heat-config) [DEBUG] [2018-06-21 07:19:31,410] (heat-config) [INFO] hosts=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-06-21 07:19:31,410] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96\n[2018-06-21 07:19:31,411] (heat-config) [INFO] 
deploy_action=CREATE\n[2018-06-21 07:19:31,411] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-2ltrnux7xsrp-0-mxavcgxnktsu/561330f0-d056-44bf-beb3-da80a7f0871d\n[2018-06-21 07:19:31,411] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-21 07:19:31,411] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-21 07:19:31,411] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9\n[2018-06-21 07:19:31,421] (heat-config) [INFO] \n[2018-06-21 07:19:31,421] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain 
ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain 
ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 
'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 
overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 
overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries 
/etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 
overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-06-21 07:19:31,421] (heat-config) [INFO] Completed 
/var/lib/heat-config/heat-config-script/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9\n\n[2018-06-21 07:19:31,425] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-21 07:19:31,425] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.json < /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.notify.json\n[2018-06-21 07:19:31,813] (heat-config) [INFO] \n[2018-06-21 07:19:31,813] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:19:31,390] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.json", "[2018-06-21 07:19:31,424] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 
compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", "[2018-06-21 07:19:31,424] (heat-config) [DEBUG] [2018-06-21 07:19:31,410] (heat-config) [INFO] hosts=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-06-21 07:19:31,410] (heat-config) [INFO] 
deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", "[2018-06-21 07:19:31,411] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-21 07:19:31,411] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-2ltrnux7xsrp-0-mxavcgxnktsu/561330f0-d056-44bf-beb3-da80a7f0871d", "[2018-06-21 07:19:31,411] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-21 07:19:31,411] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-21 07:19:31,411] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9", "[2018-06-21 07:19:31,421] (heat-config) [INFO] ", "[2018-06-21 07:19:31,421] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", 
"192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 
ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 
ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ 
'[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 
overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", 
"", "[2018-06-21 07:19:31,421] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9", "", "[2018-06-21 07:19:31,425] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-21 07:19:31,425] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.json < /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.notify.json", "[2018-06-21 07:19:31,813] (heat-config) [INFO] ", "[2018-06-21 07:19:31,813] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:19:31,462 p=23396 u=mistral | TASK [Output for CephStorageHostsDeployment] *********************************** >2018-06-21 07:19:31,538 p=23396 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:19:31,390] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.json", > "[2018-06-21 07:19:31,424] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain 
controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", > "[2018-06-21 07:19:31,424] (heat-config) [DEBUG] [2018-06-21 07:19:31,410] (heat-config) [INFO] hosts=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-06-21 07:19:31,410] 
(heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", > "[2018-06-21 07:19:31,411] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-21 07:19:31,411] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-2ltrnux7xsrp-0-mxavcgxnktsu/561330f0-d056-44bf-beb3-da80a7f0871d", > "[2018-06-21 07:19:31,411] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-21 07:19:31,411] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-21 07:19:31,411] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9", > "[2018-06-21 07:19:31,421] (heat-config) [INFO] ", > "[2018-06-21 07:19:31,421] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > 
"192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 
compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > 
"192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 
compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 
compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain 
compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > 
"172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain 
compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > 
"172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain 
compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-06-21 07:19:31,421] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9", > "", > "[2018-06-21 07:19:31,425] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-21 07:19:31,425] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.json < /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.notify.json", > "[2018-06-21 07:19:31,813] (heat-config) [INFO] ", > "[2018-06-21 07:19:31,813] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:19:31,565 p=23396 u=mistral | TASK [Check-mode for Run deployment CephStorageHostsDeployment] **************** >2018-06-21 07:19:31,580 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:31,598 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 07:19:31,798 
p=23396 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "abe93dab-3152-47ad-999a-02a8d5dbe6ef"}, "changed": false} >2018-06-21 07:19:31,817 p=23396 u=mistral | TASK [Render deployment file for CephStorageAllNodesDeployment] **************** >2018-06-21 07:19:32,560 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "e77ac6bcedadd3cf8f4493c2d300f7870c2d8f17", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageAllNodesDeployment-abe93dab-3152-47ad-999a-02a8d5dbe6ef", "gid": 0, "group": "root", "md5sum": "0524fa98a4dd7d09836d2fdc560ac008", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19024, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579972.01-267096619027721/source", "state": "file", "uid": 0} >2018-06-21 07:19:32,578 p=23396 u=mistral | TASK [Check if deployed file exists for CephStorageAllNodesDeployment] ********* >2018-06-21 07:19:32,927 p=23396 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:19:32,947 p=23396 u=mistral | TASK [Check previous deployment rc for CephStorageAllNodesDeployment] ********** >2018-06-21 07:19:32,965 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:32,985 p=23396 u=mistral | TASK [Remove deployed file for CephStorageAllNodesDeployment when previous deployment failed] *** >2018-06-21 07:19:33,002 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:33,021 p=23396 u=mistral | TASK [Force remove deployed file for CephStorageAllNodesDeployment] ************ >2018-06-21 07:19:33,036 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:33,055 p=23396 u=mistral | TASK [Run deployment CephStorageAllNodesDeployment] **************************** >2018-06-21 07:19:33,938 p=23396 
u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/abe93dab-3152-47ad-999a-02a8d5dbe6ef.notify.json)", "delta": "0:00:00.525336", "end": "2018-06-21 07:19:34.349197", "rc": 0, "start": "2018-06-21 07:19:33.823861", "stderr": "[2018-06-21 07:19:33,848] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/abe93dab-3152-47ad-999a-02a8d5dbe6ef.json\n[2018-06-21 07:19:33,958] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-21 07:19:33,958] (heat-config) [DEBUG] \n[2018-06-21 07:19:33,958] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-21 07:19:33,958] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/abe93dab-3152-47ad-999a-02a8d5dbe6ef.json < /var/lib/heat-config/deployed/abe93dab-3152-47ad-999a-02a8d5dbe6ef.notify.json\n[2018-06-21 07:19:34,343] (heat-config) [INFO] \n[2018-06-21 07:19:34,343] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:19:33,848] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/abe93dab-3152-47ad-999a-02a8d5dbe6ef.json", "[2018-06-21 07:19:33,958] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-21 07:19:33,958] (heat-config) [DEBUG] ", "[2018-06-21 07:19:33,958] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-21 07:19:33,958] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/abe93dab-3152-47ad-999a-02a8d5dbe6ef.json < /var/lib/heat-config/deployed/abe93dab-3152-47ad-999a-02a8d5dbe6ef.notify.json", "[2018-06-21 07:19:34,343] (heat-config) [INFO] ", "[2018-06-21 07:19:34,343] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:19:33,956 p=23396 u=mistral | 
TASK [Output for CephStorageAllNodesDeployment] ******************************** >2018-06-21 07:19:34,003 p=23396 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:19:33,848] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/abe93dab-3152-47ad-999a-02a8d5dbe6ef.json", > "[2018-06-21 07:19:33,958] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-21 07:19:33,958] (heat-config) [DEBUG] ", > "[2018-06-21 07:19:33,958] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-21 07:19:33,958] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/abe93dab-3152-47ad-999a-02a8d5dbe6ef.json < /var/lib/heat-config/deployed/abe93dab-3152-47ad-999a-02a8d5dbe6ef.notify.json", > "[2018-06-21 07:19:34,343] (heat-config) [INFO] ", > "[2018-06-21 07:19:34,343] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:19:34,022 p=23396 u=mistral | TASK [Check-mode for Run deployment CephStorageAllNodesDeployment] ************* >2018-06-21 07:19:34,035 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:34,051 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 07:19:34,104 p=23396 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "56a88551-9d40-42c1-b9c3-82e6b1c065ac"}, "changed": false} >2018-06-21 07:19:34,122 p=23396 u=mistral | TASK [Render deployment file for CephStorageAllNodesValidationDeployment] ****** >2018-06-21 07:19:34,684 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "5d2cc31e9941f5a265d39a4201f859e00bda2848", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageAllNodesValidationDeployment-56a88551-9d40-42c1-b9c3-82e6b1c065ac", "gid": 0, 
"group": "root", "md5sum": "410f6c2ae27e03ae95d9fb6d21a7cfbb", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4942, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579974.18-244513759374584/source", "state": "file", "uid": 0} >2018-06-21 07:19:34,703 p=23396 u=mistral | TASK [Check if deployed file exists for CephStorageAllNodesValidationDeployment] *** >2018-06-21 07:19:35,001 p=23396 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:19:35,020 p=23396 u=mistral | TASK [Check previous deployment rc for CephStorageAllNodesValidationDeployment] *** >2018-06-21 07:19:35,037 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:35,054 p=23396 u=mistral | TASK [Remove deployed file for CephStorageAllNodesValidationDeployment when previous deployment failed] *** >2018-06-21 07:19:35,072 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:35,089 p=23396 u=mistral | TASK [Force remove deployed file for CephStorageAllNodesValidationDeployment] *** >2018-06-21 07:19:35,104 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:35,123 p=23396 u=mistral | TASK [Run deployment CephStorageAllNodesValidationDeployment] ****************** >2018-06-21 07:19:36,323 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.notify.json)", "delta": "0:00:00.887295", "end": "2018-06-21 07:19:36.730354", "rc": 0, "start": "2018-06-21 07:19:35.843059", "stderr": "[2018-06-21 07:19:35,865] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < 
/var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.json\n[2018-06-21 07:19:36,356] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\\nPing to 10.0.0.104 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\\nPing to 172.17.3.18 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.17 for local network 172.17.4.0/24.\\nPing to 172.17.4.17 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\\nPing to 192.168.24.8 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-21 07:19:36,356] (heat-config) [DEBUG] [2018-06-21 07:19:35,885] (heat-config) [INFO] ping_test_ips=172.17.3.18 172.17.4.17 172.17.1.16 172.17.2.15 10.0.0.104 192.168.24.8\n[2018-06-21 07:19:35,885] (heat-config) [INFO] validate_fqdn=False\n[2018-06-21 07:19:35,885] (heat-config) [INFO] validate_ntp=True\n[2018-06-21 07:19:35,885] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96\n[2018-06-21 07:19:35,885] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-21 07:19:35,885] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-hkjyyirum7ne-0-t433fatyktkn/ab0eaf14-3185-4d7e-835d-9f30093889bb\n[2018-06-21 07:19:35,885] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-21 07:19:35,885] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-21 07:19:35,885] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/56a88551-9d40-42c1-b9c3-82e6b1c065ac\n[2018-06-21 07:19:36,352] (heat-config) [INFO] Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\nPing to 10.0.0.104 succeeded.\nSUCCESS\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\nPing to 172.17.3.18 
succeeded.\nSUCCESS\nTrying to ping 172.17.4.17 for local network 172.17.4.0/24.\nPing to 172.17.4.17 succeeded.\nSUCCESS\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\nPing to 192.168.24.8 succeeded.\nSUCCESS\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\nSUCCESS\n\n[2018-06-21 07:19:36,352] (heat-config) [DEBUG] \n[2018-06-21 07:19:36,352] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/56a88551-9d40-42c1-b9c3-82e6b1c065ac\n\n[2018-06-21 07:19:36,356] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-21 07:19:36,356] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.json < /var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.notify.json\n[2018-06-21 07:19:36,725] (heat-config) [INFO] \n[2018-06-21 07:19:36,725] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:19:35,865] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.json", "[2018-06-21 07:19:36,356] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\\nPing to 10.0.0.104 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\\nPing to 172.17.3.18 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.17 for local network 172.17.4.0/24.\\nPing to 172.17.4.17 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\\nPing to 192.168.24.8 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-21 07:19:36,356] (heat-config) [DEBUG] [2018-06-21 07:19:35,885] (heat-config) [INFO] 
ping_test_ips=172.17.3.18 172.17.4.17 172.17.1.16 172.17.2.15 10.0.0.104 192.168.24.8", "[2018-06-21 07:19:35,885] (heat-config) [INFO] validate_fqdn=False", "[2018-06-21 07:19:35,885] (heat-config) [INFO] validate_ntp=True", "[2018-06-21 07:19:35,885] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", "[2018-06-21 07:19:35,885] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-21 07:19:35,885] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-hkjyyirum7ne-0-t433fatyktkn/ab0eaf14-3185-4d7e-835d-9f30093889bb", "[2018-06-21 07:19:35,885] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-21 07:19:35,885] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-21 07:19:35,885] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/56a88551-9d40-42c1-b9c3-82e6b1c065ac", "[2018-06-21 07:19:36,352] (heat-config) [INFO] Trying to ping 10.0.0.104 for local network 10.0.0.0/24.", "Ping to 10.0.0.104 succeeded.", "SUCCESS", "Trying to ping 172.17.3.18 for local network 172.17.3.0/24.", "Ping to 172.17.3.18 succeeded.", "SUCCESS", "Trying to ping 172.17.4.17 for local network 172.17.4.0/24.", "Ping to 172.17.4.17 succeeded.", "SUCCESS", "Trying to ping 192.168.24.8 for local network 192.168.24.0/24.", "Ping to 192.168.24.8 succeeded.", "SUCCESS", "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", "SUCCESS", "", "[2018-06-21 07:19:36,352] (heat-config) [DEBUG] ", "[2018-06-21 07:19:36,352] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/56a88551-9d40-42c1-b9c3-82e6b1c065ac", "", "[2018-06-21 07:19:36,356] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-21 07:19:36,356] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.json < 
/var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.notify.json", "[2018-06-21 07:19:36,725] (heat-config) [INFO] ", "[2018-06-21 07:19:36,725] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:19:36,343 p=23396 u=mistral | TASK [Output for CephStorageAllNodesValidationDeployment] ********************** >2018-06-21 07:19:36,395 p=23396 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:19:35,865] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.json", > "[2018-06-21 07:19:36,356] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\\nPing to 10.0.0.104 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\\nPing to 172.17.3.18 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.17 for local network 172.17.4.0/24.\\nPing to 172.17.4.17 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\\nPing to 192.168.24.8 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-21 07:19:36,356] (heat-config) [DEBUG] [2018-06-21 07:19:35,885] (heat-config) [INFO] ping_test_ips=172.17.3.18 172.17.4.17 172.17.1.16 172.17.2.15 10.0.0.104 192.168.24.8", > "[2018-06-21 07:19:35,885] (heat-config) [INFO] validate_fqdn=False", > "[2018-06-21 07:19:35,885] (heat-config) [INFO] validate_ntp=True", > "[2018-06-21 07:19:35,885] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", > "[2018-06-21 07:19:35,885] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-21 07:19:35,885] (heat-config) [INFO] 
deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-hkjyyirum7ne-0-t433fatyktkn/ab0eaf14-3185-4d7e-835d-9f30093889bb", > "[2018-06-21 07:19:35,885] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-21 07:19:35,885] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-21 07:19:35,885] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/56a88551-9d40-42c1-b9c3-82e6b1c065ac", > "[2018-06-21 07:19:36,352] (heat-config) [INFO] Trying to ping 10.0.0.104 for local network 10.0.0.0/24.", > "Ping to 10.0.0.104 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.18 for local network 172.17.3.0/24.", > "Ping to 172.17.3.18 succeeded.", > "SUCCESS", > "Trying to ping 172.17.4.17 for local network 172.17.4.0/24.", > "Ping to 172.17.4.17 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.8 for local network 192.168.24.0/24.", > "Ping to 192.168.24.8 succeeded.", > "SUCCESS", > "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", > "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", > "SUCCESS", > "", > "[2018-06-21 07:19:36,352] (heat-config) [DEBUG] ", > "[2018-06-21 07:19:36,352] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/56a88551-9d40-42c1-b9c3-82e6b1c065ac", > "", > "[2018-06-21 07:19:36,356] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-21 07:19:36,356] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.json < /var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.notify.json", > "[2018-06-21 07:19:36,725] (heat-config) [INFO] ", > "[2018-06-21 07:19:36,725] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:19:36,416 p=23396 u=mistral | TASK [Check-mode for Run deployment CephStorageAllNodesValidationDeployment] *** >2018-06-21 07:19:36,431 p=23396 u=mistral | skipping: 
[ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:36,451 p=23396 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-21 07:19:36,505 p=23396 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "137bc291-65a7-434a-8973-d5bc9ed3db0b"}, "changed": false} >2018-06-21 07:19:36,525 p=23396 u=mistral | TASK [Render deployment file for CephStorageArtifactsDeploy] ******************* >2018-06-21 07:19:37,098 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "f45c0846939b94eb8c667836bed68361dbb5d65c", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageArtifactsDeploy-137bc291-65a7-434a-8973-d5bc9ed3db0b", "gid": 0, "group": "root", "md5sum": "f3593a409ddcc0d1373765e331e25c01", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2023, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579976.58-161953695471234/source", "state": "file", "uid": 0} >2018-06-21 07:19:37,117 p=23396 u=mistral | TASK [Check if deployed file exists for CephStorageArtifactsDeploy] ************ >2018-06-21 07:19:37,423 p=23396 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:19:37,443 p=23396 u=mistral | TASK [Check previous deployment rc for CephStorageArtifactsDeploy] ************* >2018-06-21 07:19:37,461 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:37,479 p=23396 u=mistral | TASK [Remove deployed file for CephStorageArtifactsDeploy when previous deployment failed] *** >2018-06-21 07:19:37,497 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:37,515 p=23396 u=mistral | TASK [Force remove deployed file for CephStorageArtifactsDeploy] *************** >2018-06-21 07:19:37,532 p=23396 u=mistral | skipping: [ceph-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:37,550 p=23396 u=mistral | TASK [Run deployment CephStorageArtifactsDeploy] ******************************* >2018-06-21 07:19:38,311 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.notify.json)", "delta": "0:00:00.446294", "end": "2018-06-21 07:19:38.720038", "rc": 0, "start": "2018-06-21 07:19:38.273744", "stderr": "[2018-06-21 07:19:38,296] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.json\n[2018-06-21 07:19:38,323] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-21 07:19:38,323] (heat-config) [DEBUG] [2018-06-21 07:19:38,315] (heat-config) [INFO] artifact_urls=\n[2018-06-21 07:19:38,315] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96\n[2018-06-21 07:19:38,315] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-21 07:19:38,315] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-haw7i3vfvlpg-CephStorageArtifactsDeploy-2vfao6bm2v6m-0-m2us6qg4usxt/67ae6c07-53cf-4a05-91c8-d35bce337aaa\n[2018-06-21 07:19:38,316] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-21 07:19:38,316] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-21 07:19:38,316] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/137bc291-65a7-434a-8973-d5bc9ed3db0b\n[2018-06-21 07:19:38,320] (heat-config) [INFO] No artifact_urls was set. 
Skipping...\n\n[2018-06-21 07:19:38,320] (heat-config) [DEBUG] \n[2018-06-21 07:19:38,320] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/137bc291-65a7-434a-8973-d5bc9ed3db0b\n\n[2018-06-21 07:19:38,323] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-21 07:19:38,323] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.json < /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.notify.json\n[2018-06-21 07:19:38,714] (heat-config) [INFO] \n[2018-06-21 07:19:38,714] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:19:38,296] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.json", "[2018-06-21 07:19:38,323] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-21 07:19:38,323] (heat-config) [DEBUG] [2018-06-21 07:19:38,315] (heat-config) [INFO] artifact_urls=", "[2018-06-21 07:19:38,315] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", "[2018-06-21 07:19:38,315] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-21 07:19:38,315] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-haw7i3vfvlpg-CephStorageArtifactsDeploy-2vfao6bm2v6m-0-m2us6qg4usxt/67ae6c07-53cf-4a05-91c8-d35bce337aaa", "[2018-06-21 07:19:38,316] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-21 07:19:38,316] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-21 07:19:38,316] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/137bc291-65a7-434a-8973-d5bc9ed3db0b", "[2018-06-21 07:19:38,320] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", "", "[2018-06-21 07:19:38,320] (heat-config) [DEBUG] ", "[2018-06-21 07:19:38,320] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/137bc291-65a7-434a-8973-d5bc9ed3db0b", "", "[2018-06-21 07:19:38,323] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-21 07:19:38,323] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.json < /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.notify.json", "[2018-06-21 07:19:38,714] (heat-config) [INFO] ", "[2018-06-21 07:19:38,714] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:19:38,331 p=23396 u=mistral | TASK [Output for CephStorageArtifactsDeploy] *********************************** >2018-06-21 07:19:38,381 p=23396 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:19:38,296] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.json", > "[2018-06-21 07:19:38,323] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-21 07:19:38,323] (heat-config) [DEBUG] [2018-06-21 07:19:38,315] (heat-config) [INFO] artifact_urls=", > "[2018-06-21 07:19:38,315] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", > "[2018-06-21 07:19:38,315] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-21 07:19:38,315] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-haw7i3vfvlpg-CephStorageArtifactsDeploy-2vfao6bm2v6m-0-m2us6qg4usxt/67ae6c07-53cf-4a05-91c8-d35bce337aaa", > "[2018-06-21 07:19:38,316] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-21 07:19:38,316] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-21 07:19:38,316] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/137bc291-65a7-434a-8973-d5bc9ed3db0b", > "[2018-06-21 07:19:38,320] (heat-config) [INFO] No artifact_urls was set. Skipping...", > "", > "[2018-06-21 07:19:38,320] (heat-config) [DEBUG] ", > "[2018-06-21 07:19:38,320] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/137bc291-65a7-434a-8973-d5bc9ed3db0b", > "", > "[2018-06-21 07:19:38,323] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-21 07:19:38,323] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.json < /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.notify.json", > "[2018-06-21 07:19:38,714] (heat-config) [INFO] ", > "[2018-06-21 07:19:38,714] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:19:38,399 p=23396 u=mistral | TASK [Check-mode for Run deployment CephStorageArtifactsDeploy] **************** >2018-06-21 07:19:38,417 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:38,437 p=23396 u=mistral | TASK [Lookup deployment 
UUID] ************************************************** >2018-06-21 07:19:38,502 p=23396 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "fea7c44d-af59-48c5-a656-2d6660e43194"}, "changed": false} >2018-06-21 07:19:38,520 p=23396 u=mistral | TASK [Render deployment file for CephStorageHostPrepDeployment] **************** >2018-06-21 07:19:39,107 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "503e19d18dcb56bb669bfa55fcb11151a99ffcfd", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageHostPrepDeployment-fea7c44d-af59-48c5-a656-2d6660e43194", "gid": 0, "group": "root", "md5sum": "f0461953e64ef44ab7462881115e9c7e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19872, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529579978.59-8416118442919/source", "state": "file", "uid": 0} >2018-06-21 07:19:39,126 p=23396 u=mistral | TASK [Check if deployed file exists for CephStorageHostPrepDeployment] ********* >2018-06-21 07:19:39,431 p=23396 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:19:39,450 p=23396 u=mistral | TASK [Check previous deployment rc for CephStorageHostPrepDeployment] ********** >2018-06-21 07:19:39,468 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:39,487 p=23396 u=mistral | TASK [Remove deployed file for CephStorageHostPrepDeployment when previous deployment failed] *** >2018-06-21 07:19:39,504 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:39,522 p=23396 u=mistral | TASK [Force remove deployed file for CephStorageHostPrepDeployment] ************ >2018-06-21 07:19:39,542 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:39,564 p=23396 u=mistral | TASK [Run deployment 
CephStorageHostPrepDeployment] **************************** >2018-06-21 07:19:44,523 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.notify.json)", "delta": "0:00:04.646809", "end": "2018-06-21 07:19:44.928139", "rc": 0, "start": "2018-06-21 07:19:40.281330", "stderr": "[2018-06-21 07:19:40,303] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.json\n[2018-06-21 07:19:44,526] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-21 07:19:44,526] (heat-config) [DEBUG] [2018-06-21 07:19:40,324] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/fea7c44d-af59-48c5-a656-2d6660e43194_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/fea7c44d-af59-48c5-a656-2d6660e43194_variables.json\n[2018-06-21 07:19:44,522] (heat-config) [INFO] Return code 0\n[2018-06-21 07:19:44,522] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [Create /var/lib/docker-puppet] 
*******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \n\n\n[2018-06-21 07:19:44,522] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/fea7c44d-af59-48c5-a656-2d6660e43194_playbook.yaml\n\n[2018-06-21 07:19:44,526] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-06-21 07:19:44,527] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.json < /var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.notify.json\n[2018-06-21 07:19:44,922] (heat-config) [INFO] \n[2018-06-21 07:19:44,922] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-21 07:19:40,303] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.json", "[2018-06-21 07:19:44,526] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-21 07:19:44,526] (heat-config) [DEBUG] [2018-06-21 07:19:40,324] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/fea7c44d-af59-48c5-a656-2d6660e43194_playbook.yaml --extra-vars 
@/var/lib/heat-config/heat-config-ansible/fea7c44d-af59-48c5-a656-2d6660e43194_variables.json", "[2018-06-21 07:19:44,522] (heat-config) [INFO] Return code 0", "[2018-06-21 07:19:44,522] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=3 changed=2 unreachable=0 failed=0 ", "", "", "[2018-06-21 07:19:44,522] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/fea7c44d-af59-48c5-a656-2d6660e43194_playbook.yaml", "", "[2018-06-21 07:19:44,526] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-06-21 07:19:44,527] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.json < /var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.notify.json", "[2018-06-21 07:19:44,922] (heat-config) [INFO] ", "[2018-06-21 07:19:44,922] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-21 07:19:44,541 p=23396 u=mistral | TASK [Output for CephStorageHostPrepDeployment] ******************************** >2018-06-21 07:19:44,589 p=23396 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-21 07:19:40,303] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.json", > "[2018-06-21 07:19:44,526] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering 
Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-21 07:19:44,526] (heat-config) [DEBUG] [2018-06-21 07:19:40,324] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/fea7c44d-af59-48c5-a656-2d6660e43194_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/fea7c44d-af59-48c5-a656-2d6660e43194_variables.json", > "[2018-06-21 07:19:44,522] (heat-config) [INFO] Return code 0", > "[2018-06-21 07:19:44,522] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=3 changed=2 unreachable=0 failed=0 ", > "", > "", > "[2018-06-21 07:19:44,522] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/fea7c44d-af59-48c5-a656-2d6660e43194_playbook.yaml", > "", > "[2018-06-21 07:19:44,526] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-06-21 07:19:44,527] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.json < 
/var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.notify.json", > "[2018-06-21 07:19:44,922] (heat-config) [INFO] ", > "[2018-06-21 07:19:44,922] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-21 07:19:44,609 p=23396 u=mistral | TASK [Check-mode for Run deployment CephStorageHostPrepDeployment] ************* >2018-06-21 07:19:44,624 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:44,630 p=23396 u=mistral | PLAY [Host prep steps] ********************************************************* >2018-06-21 07:19:44,664 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:19:44,718 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/aodh) => {"changed": false, "item": "/var/log/containers/aodh", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:44,719 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "item": "/var/log/containers/httpd/aodh-api", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:44,735 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/aodh) => {"changed": false, "item": "/var/log/containers/aodh", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:44,741 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "item": "/var/log/containers/httpd/aodh-api", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:45,028 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/aodh) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/aodh", "mode": "0755", "owner": "root", "path": "/var/log/containers/aodh", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:45,348 p=23396 u=mistral | 
ok: [controller-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/aodh-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/aodh-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:45,371 p=23396 u=mistral | TASK [aodh logs readme] ******************************************************** >2018-06-21 07:19:45,430 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:45,442 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:45,993 p=23396 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "b6cf6dbe054f430c33d39c1a1a88593536d6e659", "msg": "Destination directory /var/log/aodh does not exist"} >2018-06-21 07:19:45,993 p=23396 u=mistral | ...ignoring >2018-06-21 07:19:46,016 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:19:46,075 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:46,092 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:46,361 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/aodh", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:46,382 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:19:46,435 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:46,448 p=23396 u=mistral | skipping: [ceph-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:46,722 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/ceilometer", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:46,744 p=23396 u=mistral | TASK [ceilometer logs readme] ************************************************** >2018-06-21 07:19:46,793 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:46,806 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:47,321 p=23396 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3", "msg": "Destination directory /var/log/ceilometer does not exist"} >2018-06-21 07:19:47,321 p=23396 u=mistral | ...ignoring >2018-06-21 07:19:47,342 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:19:47,400 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:47,401 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "item": "/var/log/containers/httpd/cinder-api", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:47,416 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:47,418 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "item": "/var/log/containers/httpd/cinder-api", "skip_reason": 
"Conditional result was False"} >2018-06-21 07:19:47,730 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:48,026 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/cinder-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/cinder-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:48,050 p=23396 u=mistral | TASK [cinder logs readme] ****************************************************** >2018-06-21 07:19:48,104 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:48,115 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:48,716 p=23396 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292", "msg": "Destination directory /var/log/cinder does not exist"} >2018-06-21 07:19:48,716 p=23396 u=mistral | ...ignoring >2018-06-21 07:19:48,736 p=23396 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-21 07:19:48,797 p=23396 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:48,798 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:48,815 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:48,817 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:49,078 p=23396 u=mistral | ok: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/cinder", "mode": "0755", "owner": "root", "path": "/var/lib/cinder", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:49,365 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:49,388 p=23396 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-06-21 07:19:49,442 p=23396 u=mistral | skipping: [compute-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:49,455 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:49,710 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:49,734 p=23396 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-21 07:19:49,790 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:49,806 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:50,061 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:50,084 p=23396 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-21 07:19:50,136 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:50,137 p=23396 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:50,152 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": 
"/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:50,157 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:50,417 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:50,702 p=23396 u=mistral | ok: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/cinder", "mode": "0755", "owner": "root", "path": "/var/lib/cinder", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:50,726 p=23396 u=mistral | TASK [cinder_enable_iscsi_backend fact] **************************************** >2018-06-21 07:19:50,783 p=23396 u=mistral | ok: [controller-0] => {"ansible_facts": {"cinder_enable_iscsi_backend": false}, "changed": false} >2018-06-21 07:19:50,784 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:50,793 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:50,814 p=23396 u=mistral | TASK [cinder create LVM volume group dd] *************************************** >2018-06-21 07:19:50,842 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:50,865 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:50,878 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >2018-06-21 07:19:50,898 p=23396 u=mistral | TASK [cinder create LVM volume group] ****************************************** >2018-06-21 07:19:50,925 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:50,949 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:50,960 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:50,980 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:19:51,031 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/glance) => {"changed": false, "item": "/var/log/containers/glance", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:51,048 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/glance) => {"changed": false, "item": "/var/log/containers/glance", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:51,319 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/glance) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/glance", "mode": "0755", "owner": "root", "path": "/var/log/containers/glance", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:51,342 p=23396 u=mistral | TASK [glance logs readme] ****************************************************** >2018-06-21 07:19:51,400 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:51,412 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:51,935 p=23396 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "e368ae3272baeb19e1113009ea5dae00e797c919", "msg": "Destination directory /var/log/glance does not exist"} >2018-06-21 07:19:51,935 p=23396 u=mistral | ...ignoring >2018-06-21 07:19:51,957 p=23396 u=mistral | TASK [set_fact] **************************************************************** >2018-06-21 07:19:51,986 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,010 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,022 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,044 p=23396 u=mistral | TASK [file] ******************************************************************** >2018-06-21 07:19:52,072 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,096 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,108 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,130 p=23396 u=mistral | TASK [stat] ******************************************************************** >2018-06-21 07:19:52,159 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,185 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,196 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,217 p=23396 u=mistral | TASK [copy] ******************************************************************** >2018-06-21 07:19:52,246 p=23396 u=mistral | skipping: [controller-0] => 
(item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,270 p=23396 u=mistral | skipping: [compute-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,285 p=23396 u=mistral | skipping: [ceph-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,305 p=23396 u=mistral | TASK [mount] ******************************************************************* >2018-06-21 07:19:52,336 p=23396 u=mistral | skipping: [controller-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,363 p=23396 u=mistral | skipping: [compute-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,377 p=23396 u=mistral | skipping: [ceph-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,398 p=23396 u=mistral | TASK [Mount Node Staging Location] ********************************************* >2018-06-21 07:19:52,426 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-06-21 07:19:52,448 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,459 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,479 p=23396 u=mistral | TASK [Mount NFS on host] ******************************************************* >2018-06-21 07:19:52,506 p=23396 u=mistral | skipping: [controller-0] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) => {"changed": false, "item": {"NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0", "NFS_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,531 p=23396 u=mistral | skipping: [compute-0] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) => {"changed": false, "item": {"NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0", "NFS_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,547 p=23396 u=mistral | skipping: [ceph-0] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) => {"changed": false, "item": {"NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0", "NFS_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,569 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:19:52,627 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "item": "/var/log/containers/gnocchi", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,632 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "item": "/var/log/containers/httpd/gnocchi-api", "skip_reason": 
"Conditional result was False"} >2018-06-21 07:19:52,643 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "item": "/var/log/containers/gnocchi", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,648 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "item": "/var/log/containers/httpd/gnocchi-api", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:52,914 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/gnocchi", "mode": "0755", "owner": "root", "path": "/var/log/containers/gnocchi", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:53,221 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/gnocchi-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/gnocchi-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:53,244 p=23396 u=mistral | TASK [gnocchi logs readme] ***************************************************** >2018-06-21 07:19:53,297 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:53,312 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:53,849 p=23396 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "2f6114e0f135d7222e70a07579ab0b2b6f967ff8", "msg": "Destination directory /var/log/gnocchi does not exist"} >2018-06-21 07:19:53,849 p=23396 u=mistral | ...ignoring >2018-06-21 07:19:53,872 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:19:53,929 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:53,943 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,219 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/gnocchi", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:54,241 p=23396 u=mistral | TASK [get parameters] ********************************************************** >2018-06-21 07:19:54,292 p=23396 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:19:54,293 p=23396 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:19:54,304 p=23396 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:19:54,325 p=23396 u=mistral | TASK [get DeployedSSLCertificatePath attributes] ******************************* >2018-06-21 07:19:54,353 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,380 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 
07:19:54,390 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,411 p=23396 u=mistral | TASK [Assign bootstrap node] *************************************************** >2018-06-21 07:19:54,438 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,463 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,474 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,494 p=23396 u=mistral | TASK [set is_bootstrap_node fact] ********************************************** >2018-06-21 07:19:54,523 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,547 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,559 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,581 p=23396 u=mistral | TASK [get haproxy status] ****************************************************** >2018-06-21 07:19:54,608 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,632 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,643 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,663 p=23396 u=mistral | TASK [get pacemaker status] **************************************************** >2018-06-21 07:19:54,689 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 
07:19:54,712 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,728 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,750 p=23396 u=mistral | TASK [get docker status] ******************************************************* >2018-06-21 07:19:54,778 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,802 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,814 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,835 p=23396 u=mistral | TASK [get container_id] ******************************************************** >2018-06-21 07:19:54,862 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,885 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,897 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,918 p=23396 u=mistral | TASK [get pcs resource name for haproxy container] ***************************** >2018-06-21 07:19:54,945 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,968 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:54,980 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,000 p=23396 u=mistral | TASK [remove DeployedSSLCertificatePath if is dir] ***************************** >2018-06-21 
07:19:55,030 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,055 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,066 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,086 p=23396 u=mistral | TASK [push certificate content] ************************************************ >2018-06-21 07:19:55,114 p=23396 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:19:55,137 p=23396 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:19:55,150 p=23396 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:19:55,171 p=23396 u=mistral | TASK [set certificate ownership] *********************************************** >2018-06-21 07:19:55,198 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,221 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,232 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,255 p=23396 u=mistral | TASK [reload haproxy if enabled] *********************************************** >2018-06-21 07:19:55,284 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,311 p=23396 u=mistral | skipping: [compute-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,323 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,376 p=23396 u=mistral | TASK [restart pacemaker resource for haproxy] ********************************** >2018-06-21 07:19:55,407 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,431 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,442 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,463 p=23396 u=mistral | TASK [set kolla_dir fact] ****************************************************** >2018-06-21 07:19:55,490 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,513 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,526 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,548 p=23396 u=mistral | TASK [set certificate group on host via container] ***************************** >2018-06-21 07:19:55,576 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,602 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,615 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,638 p=23396 u=mistral | TASK [copy certificate from kolla directory to final location] ***************** >2018-06-21 07:19:55,671 p=23396 u=mistral | skipping: [controller-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,699 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,710 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,732 p=23396 u=mistral | TASK [send restart order to haproxy container] ********************************* >2018-06-21 07:19:55,763 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,787 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,799 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,820 p=23396 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-21 07:19:55,870 p=23396 u=mistral | skipping: [compute-0] => (item=/var/lib/haproxy) => {"changed": false, "item": "/var/lib/haproxy", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:55,886 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/lib/haproxy) => {"changed": false, "item": "/var/lib/haproxy", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:56,173 p=23396 u=mistral | ok: [controller-0] => (item=/var/lib/haproxy) => {"changed": false, "gid": 188, "group": "haproxy", "item": "/var/lib/haproxy", "mode": "0755", "owner": "haproxy", "path": "/var/lib/haproxy", "secontext": "system_u:object_r:haproxy_var_lib_t:s0", "size": 6, "state": "directory", "uid": 188} >2018-06-21 07:19:56,195 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:19:56,248 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": 
"Conditional result was False"} >2018-06-21 07:19:56,249 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "item": "/var/log/containers/httpd/heat-api", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:56,266 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:56,271 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "item": "/var/log/containers/httpd/heat-api", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:56,540 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/heat) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/heat", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:56,840 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/heat-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/heat-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:56,863 p=23396 u=mistral | TASK [heat logs readme] ******************************************************** >2018-06-21 07:19:56,917 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:56,930 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:57,465 p=23396 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "d30ca3bda176434d31659e7379616dd162ddb246", "msg": "Destination directory /var/log/heat does not exist"} >2018-06-21 07:19:57,465 p=23396 u=mistral | ...ignoring >2018-06-21 07:19:57,488 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:19:57,550 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:57,551 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "item": "/var/log/containers/httpd/heat-api-cfn", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:57,568 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:57,572 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "item": "/var/log/containers/httpd/heat-api-cfn", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:57,840 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/heat) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/heat", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:58,141 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/heat-api-cfn", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/heat-api-cfn", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:58,165 p=23396 u=mistral | TASK [create 
persistent logs directory] **************************************** >2018-06-21 07:19:58,217 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:58,233 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:58,498 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:58,521 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:19:58,578 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/horizon) => {"changed": false, "item": "/var/log/containers/horizon", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:58,579 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "item": "/var/log/containers/httpd/horizon", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:58,595 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/horizon) => {"changed": false, "item": "/var/log/containers/horizon", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:58,602 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "item": "/var/log/containers/httpd/horizon", "skip_reason": "Conditional result was False"} >2018-06-21 07:19:58,862 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/horizon) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/horizon", "mode": "0755", "owner": "root", "path": "/var/log/containers/horizon", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:59,175 p=23396 
u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/horizon", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/horizon", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:19:59,198 p=23396 u=mistral | TASK [horizon logs readme] ***************************************************** >2018-06-21 07:19:59,250 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:59,265 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:59,821 p=23396 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "ac324739761cb36b925d6e309482e26f7fe49b91", "msg": "Destination directory /var/log/horizon does not exist"} >2018-06-21 07:19:59,821 p=23396 u=mistral | ...ignoring >2018-06-21 07:19:59,845 p=23396 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-06-21 07:19:59,903 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:19:59,917 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:00,213 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"atime": 1529579888.8714068, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "424de87cd6ae66547b285288742255731a46ab83", "ctime": 1529433183.0936344, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 5335882, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", 
"mode": "0644", "mtime": 1513292517.0, "nlink": 1, "path": "/lib/systemd/system/iscsid.socket", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 175, "uid": 0, "version": "18446744072695807771", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >2018-06-21 07:20:00,237 p=23396 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-06-21 07:20:00,291 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:00,303 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:00,697 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "enabled": false, "name": "iscsid.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "-.slice sysinit.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Backlog": "128", "Before": "shutdown.target iscsid.service sockets.target", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Open-iSCSI iscsid Socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "Documentation": "man:iscsid(8) man:iscsiadm(8)", "FragmentPath": 
"/usr/lib/systemd/system/iscsid.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "iscsid.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "127793", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127793", "LimitSTACK": "18446744073709551615", "ListenStream": "@ISCSIADM_ABSTRACT_NAMESPACE", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "iscsid.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "Requires": "sysinit.target", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", 
"StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "iscsid.service", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "-.slice"}} >2018-06-21 07:20:00,720 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:20:00,772 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/keystone) => {"changed": false, "item": "/var/log/containers/keystone", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:00,773 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "item": "/var/log/containers/httpd/keystone", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:00,788 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/keystone) => {"changed": false, "item": "/var/log/containers/keystone", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:00,792 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "item": "/var/log/containers/httpd/keystone", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:01,077 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/keystone) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/keystone", "mode": "0755", "owner": "root", "path": "/var/log/containers/keystone", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": 
"directory", "uid": 0} >2018-06-21 07:20:01,388 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/keystone", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/keystone", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:01,411 p=23396 u=mistral | TASK [keystone logs readme] **************************************************** >2018-06-21 07:20:01,469 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:01,484 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:02,031 p=23396 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "910be882addb6df99267e9bd303f6d9bf658562e", "msg": "Destination directory /var/log/keystone does not exist"} >2018-06-21 07:20:02,031 p=23396 u=mistral | ...ignoring >2018-06-21 07:20:02,053 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:20:02,107 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:02,122 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:02,395 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/memcached", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:02,416 p=23396 u=mistral | TASK [memcached logs readme] *************************************************** >2018-06-21 07:20:02,471 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result 
was False"} >2018-06-21 07:20:02,486 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:02,973 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "f72ee86fbe604c83734785fe970323e58e3fad9e", "dest": "/var/log/memcached-readme.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/var/log/memcached-readme.txt", "secontext": "system_u:object_r:var_log_t:s0", "size": 86, "state": "file", "uid": 0} >2018-06-21 07:20:02,995 p=23396 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-21 07:20:03,049 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/mysql) => {"changed": false, "item": "/var/log/containers/mysql", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:03,050 p=23396 u=mistral | skipping: [compute-0] => (item=/var/lib/mysql) => {"changed": false, "item": "/var/lib/mysql", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:03,066 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/mysql) => {"changed": false, "item": "/var/log/containers/mysql", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:03,072 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/lib/mysql) => {"changed": false, "item": "/var/lib/mysql", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:03,340 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/mysql) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/mysql", "mode": "0755", "owner": "root", "path": "/var/log/containers/mysql", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:03,653 p=23396 u=mistral | ok: [controller-0] => (item=/var/lib/mysql) => {"changed": false, "gid": 27, "group": "mysql", "item": "/var/lib/mysql", "mode": "0755", "owner": "mysql", "path": 
"/var/lib/mysql", "secontext": "system_u:object_r:mysqld_db_t:s0", "size": 6, "state": "directory", "uid": 27} >2018-06-21 07:20:03,675 p=23396 u=mistral | TASK [mysql logs readme] ******************************************************* >2018-06-21 07:20:03,728 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:03,741 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:04,230 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "de8fb5fe96200ab286121f8a09419702bd693743", "dest": "/var/log/mariadb/readme.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/var/log/mariadb/readme.txt", "secontext": "system_u:object_r:mysqld_log_t:s0", "size": 78, "state": "file", "uid": 0} >2018-06-21 07:20:04,251 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:20:04,302 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:04,303 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "item": "/var/log/containers/httpd/neutron-api", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:04,325 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:04,326 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "item": "/var/log/containers/httpd/neutron-api", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:04,596 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/neutron) => 
{"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:04,902 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/neutron-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/neutron-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:04,923 p=23396 u=mistral | TASK [neutron logs readme] ***************************************************** >2018-06-21 07:20:04,982 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:04,994 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:05,531 p=23396 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "f5a95f434a4aad25a9a81a045dec39159a6e8864", "msg": "Destination directory /var/log/neutron does not exist"} >2018-06-21 07:20:05,532 p=23396 u=mistral | ...ignoring >2018-06-21 07:20:05,553 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:20:05,605 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:05,621 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:05,897 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/neutron) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:05,924 p=23396 u=mistral | TASK [create /var/lib/neutron] ************************************************* >2018-06-21 07:20:05,976 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:05,988 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:06,305 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/neutron", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:06,326 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:20:06,381 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/nova) => 
{"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:06,382 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "item": "/var/log/containers/httpd/nova-api", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:06,397 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:06,398 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "item": "/var/log/containers/httpd/nova-api", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:06,717 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/nova) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/nova", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:07,023 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/nova-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/nova-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:07,048 p=23396 u=mistral | TASK [nova logs readme] ******************************************************** >2018-06-21 07:20:07,139 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:07,150 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:07,680 p=23396 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "c2216cc4edf5d3ce90f10748c3243db4e1842a85", "msg": "Destination directory /var/log/nova does not exist"} >2018-06-21 07:20:07,680 p=23396 u=mistral | ...ignoring >2018-06-21 07:20:07,701 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:20:07,751 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:07,764 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:08,037 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:08,059 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:20:08,112 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:08,113 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "item": "/var/log/containers/httpd/nova-placement", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:08,127 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:08,133 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "item": "/var/log/containers/httpd/nova-placement", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:08,403 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/nova) => {"changed": false, 
"gid": 0, "group": "root", "item": "/var/log/containers/nova", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:08,711 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/nova-placement", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/nova-placement", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:08,735 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:20:08,791 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/panko) => {"changed": false, "item": "/var/log/containers/panko", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:08,792 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "item": "/var/log/containers/httpd/panko-api", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:08,807 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/panko) => {"changed": false, "item": "/var/log/containers/panko", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:08,811 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "item": "/var/log/containers/httpd/panko-api", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:09,082 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/panko) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/panko", "mode": "0755", "owner": "root", "path": "/var/log/containers/panko", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:09,388 p=23396 
u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/panko-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/panko-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:09,410 p=23396 u=mistral | TASK [panko logs readme] ******************************************************* >2018-06-21 07:20:09,465 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:09,478 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:10,013 p=23396 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "903397bbd82e9b1f53087e3d7e8975d851857ce2", "msg": "Destination directory /var/log/panko does not exist"} >2018-06-21 07:20:10,013 p=23396 u=mistral | ...ignoring >2018-06-21 07:20:10,035 p=23396 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-21 07:20:10,087 p=23396 u=mistral | skipping: [compute-0] => (item=/var/lib/rabbitmq) => {"changed": false, "item": "/var/lib/rabbitmq", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:10,088 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "item": "/var/log/containers/rabbitmq", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:10,103 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/lib/rabbitmq) => {"changed": false, "item": "/var/lib/rabbitmq", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:10,108 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "item": "/var/log/containers/rabbitmq", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:10,377 p=23396 u=mistral | ok: 
[controller-0] => (item=/var/lib/rabbitmq) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/rabbitmq", "mode": "0755", "owner": "root", "path": "/var/lib/rabbitmq", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:10,684 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/rabbitmq", "mode": "0755", "owner": "root", "path": "/var/log/containers/rabbitmq", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:10,708 p=23396 u=mistral | TASK [rabbitmq logs readme] **************************************************** >2018-06-21 07:20:10,762 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:10,775 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:11,304 p=23396 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "ee241f2199f264c9d0f384cf389fe255e8bf8a77", "msg": "Destination directory /var/log/rabbitmq does not exist"} >2018-06-21 07:20:11,305 p=23396 u=mistral | ...ignoring >2018-06-21 07:20:11,326 p=23396 u=mistral | TASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] *** >2018-06-21 07:20:11,377 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:11,391 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:11,687 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "echo 'export ERL_EPMD_ADDRESS=127.0.0.1' > /etc/rabbitmq/rabbitmq-env.conf\n echo 'export ERL_EPMD_PORT=4370' >> /etc/rabbitmq/rabbitmq-env.conf\n for pid in $(pgrep epmd --ns 1 --nslist pid); do kill $pid; done", "delta": "0:00:00.022963", "end": "2018-06-21 07:20:12.102029", "rc": 0, "start": "2018-06-21 07:20:12.079066", "stderr": "/bin/sh: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory\n/bin/sh: line 1: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory", "stderr_lines": ["/bin/sh: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory", "/bin/sh: line 1: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory"], "stdout": "", "stdout_lines": []} >2018-06-21 07:20:11,709 p=23396 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-21 07:20:11,767 p=23396 u=mistral | skipping: [compute-0] => (item=/var/lib/redis) => {"changed": false, "item": "/var/lib/redis", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:11,768 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/redis) => {"changed": false, "item": "/var/log/containers/redis", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:11,769 p=23396 u=mistral | skipping: [compute-0] => 
(item=/var/run/redis) => {"changed": false, "item": "/var/run/redis", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:11,784 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/lib/redis) => {"changed": false, "item": "/var/lib/redis", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:11,792 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/redis) => {"changed": false, "item": "/var/log/containers/redis", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:11,793 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/run/redis) => {"changed": false, "item": "/var/run/redis", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:12,051 p=23396 u=mistral | ok: [controller-0] => (item=/var/lib/redis) => {"changed": false, "gid": 988, "group": "redis", "item": "/var/lib/redis", "mode": "0750", "owner": "redis", "path": "/var/lib/redis", "secontext": "system_u:object_r:redis_var_lib_t:s0", "size": 6, "state": "directory", "uid": 992} >2018-06-21 07:20:12,353 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers/redis) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/redis", "mode": "0755", "owner": "root", "path": "/var/log/containers/redis", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:12,664 p=23396 u=mistral | ok: [controller-0] => (item=/var/run/redis) => {"changed": false, "gid": 988, "group": "redis", "item": "/var/run/redis", "mode": "0755", "owner": "redis", "path": "/var/run/redis", "secontext": "system_u:object_r:redis_var_run_t:s0", "size": 40, "state": "directory", "uid": 992} >2018-06-21 07:20:12,688 p=23396 u=mistral | TASK [redis logs readme] ******************************************************* >2018-06-21 07:20:12,744 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:12,757 p=23396 u=mistral | 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:13,238 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "42d03af8abf93e87fdb3fc69702638fc81d943fb", "dest": "/var/log/redis/readme.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/var/log/redis/readme.txt", "secontext": "system_u:object_r:redis_log_t:s0", "size": 78, "state": "file", "uid": 0} >2018-06-21 07:20:13,262 p=23396 u=mistral | TASK [create /var/lib/sahara] ************************************************** >2018-06-21 07:20:13,316 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:13,328 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:13,619 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/sahara", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:13,643 p=23396 u=mistral | TASK [create persistent sahara logs directory] ********************************* >2018-06-21 07:20:13,700 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:13,713 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:13,987 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/sahara", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:14,009 p=23396 u=mistral | TASK [sahara logs readme] ****************************************************** >2018-06-21 07:20:14,064 p=23396 u=mistral | skipping: [compute-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:14,081 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:14,610 p=23396 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "b0212a1177fa4a88502d17a1cbc31198040cf047", "msg": "Destination directory /var/log/sahara does not exist"} >2018-06-21 07:20:14,610 p=23396 u=mistral | ...ignoring >2018-06-21 07:20:14,632 p=23396 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-21 07:20:14,690 p=23396 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:14,691 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:14,706 p=23396 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:14,716 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:14,982 p=23396 u=mistral | ok: [controller-0] => (item=/srv/node) => {"changed": false, "gid": 0, "group": "root", "item": "/srv/node", "mode": "0755", "owner": "root", "path": "/srv/node", "secontext": "unconfined_u:object_r:var_t:s0", "size": 16, "state": "directory", "uid": 0} >2018-06-21 07:20:15,294 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/swift) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/swift", "mode": "0755", "owner": "root", "path": "/var/log/swift", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 24, "state": "directory", "uid": 0} >2018-06-21 07:20:15,317 p=23396 u=mistral | TASK [Create swift logging 
symlink] ******************************************** >2018-06-21 07:20:15,370 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:15,392 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:15,657 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "dest": "/var/log/containers/swift", "gid": 0, "group": "root", "mode": "0777", "owner": "root", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 14, "src": "/var/log/swift", "state": "link", "uid": 0} >2018-06-21 07:20:15,679 p=23396 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-21 07:20:15,738 p=23396 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:15,742 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:15,743 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:15,751 p=23396 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:15,756 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:15,761 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:16,034 p=23396 u=mistral | ok: [controller-0] => (item=/srv/node) => {"changed": false, "gid": 0, "group": "root", 
"item": "/srv/node", "mode": "0755", "owner": "root", "path": "/srv/node", "secontext": "unconfined_u:object_r:var_t:s0", "size": 16, "state": "directory", "uid": 0} >2018-06-21 07:20:16,335 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/swift) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/swift", "mode": "0755", "owner": "root", "path": "/var/log/swift", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 24, "state": "directory", "uid": 0} >2018-06-21 07:20:16,649 p=23396 u=mistral | ok: [controller-0] => (item=/var/log/containers) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers", "mode": "0755", "owner": "root", "path": "/var/log/containers", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 261, "state": "directory", "uid": 0} >2018-06-21 07:20:16,674 p=23396 u=mistral | TASK [Set swift_use_local_disks fact] ****************************************** >2018-06-21 07:20:16,729 p=23396 u=mistral | ok: [controller-0] => {"ansible_facts": {"swift_use_local_disks": true}, "changed": false} >2018-06-21 07:20:16,730 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:16,740 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:16,762 p=23396 u=mistral | TASK [Create Swift d1 directory if needed] ************************************* >2018-06-21 07:20:16,817 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:16,830 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:17,112 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/srv/node/d1", "secontext": "unconfined_u:object_r:var_t:s0", "size": 6, "state": "directory", 
"uid": 0} >2018-06-21 07:20:17,134 p=23396 u=mistral | TASK [swift logs readme] ******************************************************* >2018-06-21 07:20:17,187 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:17,202 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:17,692 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "42510a6de124722d6efbc2b1bb038bfe97e5b6d3", "dest": "/var/log/swift/readme.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/var/log/swift/readme.txt", "secontext": "system_u:object_r:var_log_t:s0", "size": 116, "state": "file", "uid": 0} >2018-06-21 07:20:17,715 p=23396 u=mistral | TASK [Format SwiftRawDisks] **************************************************** >2018-06-21 07:20:17,794 p=23396 u=mistral | TASK [Mount devices defined in SwiftRawDisks] ********************************** >2018-06-21 07:20:17,878 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:20:17,903 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:17,940 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:18,234 p=23396 u=mistral | ok: [compute-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/ceilometer", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:18,260 p=23396 u=mistral | TASK [ceilometer logs readme] ************************************************** >2018-06-21 07:20:18,288 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:18,327 p=23396 
u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:18,922 p=23396 u=mistral | fatal: [compute-0]: FAILED! => {"changed": false, "checksum": "ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3", "msg": "Destination directory /var/log/ceilometer does not exist"} >2018-06-21 07:20:18,922 p=23396 u=mistral | ...ignoring >2018-06-21 07:20:18,945 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:20:18,975 p=23396 u=mistral | skipping: [controller-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:19,015 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:19,350 p=23396 u=mistral | ok: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:19,371 p=23396 u=mistral | TASK [neutron logs readme] ***************************************************** >2018-06-21 07:20:19,403 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:19,485 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:20,082 p=23396 u=mistral | fatal: [compute-0]: FAILED! 
=> {"changed": false, "checksum": "f5a95f434a4aad25a9a81a045dec39159a6e8864", "msg": "Destination directory /var/log/neutron does not exist"} >2018-06-21 07:20:20,083 p=23396 u=mistral | ...ignoring >2018-06-21 07:20:20,105 p=23396 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-06-21 07:20:20,134 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:20,171 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:20,487 p=23396 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"atime": 1529579941.736352, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "424de87cd6ae66547b285288742255731a46ab83", "ctime": 1529433183.0936344, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 5335882, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1513292517.0, "nlink": 1, "path": "/lib/systemd/system/iscsid.socket", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 175, "uid": 0, "version": "18446744072695807771", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >2018-06-21 07:20:20,509 p=23396 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-06-21 07:20:20,538 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:20,581 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:20,885 p=23396 u=mistral | ok: [compute-0] => {"changed": 
false, "enabled": false, "name": "iscsid.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "-.slice sysinit.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Backlog": "128", "Before": "sockets.target shutdown.target iscsid.service", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Open-iSCSI iscsid Socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "Documentation": "man:iscsid(8) man:iscsiadm(8)", "FragmentPath": "/usr/lib/systemd/system/iscsid.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "iscsid.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": 
"22967", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "ListenStream": "@ISCSIADM_ABSTRACT_NAMESPACE", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "iscsid.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "Requires": "sysinit.target", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "iscsid.service", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "-.slice"}} >2018-06-21 07:20:20,907 p=23396 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-21 07:20:20,936 p=23396 
u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:20,972 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:21,281 p=23396 u=mistral | ok: [compute-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:21,302 p=23396 u=mistral | TASK [nova logs readme] ******************************************************** >2018-06-21 07:20:21,331 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:21,369 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:21,957 p=23396 u=mistral | fatal: [compute-0]: FAILED! => {"changed": false, "checksum": "c2216cc4edf5d3ce90f10748c3243db4e1842a85", "msg": "Destination directory /var/log/nova does not exist"} >2018-06-21 07:20:21,958 p=23396 u=mistral | ...ignoring >2018-06-21 07:20:21,981 p=23396 u=mistral | TASK [Mount Nova NFS Share] **************************************************** >2018-06-21 07:20:22,011 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:22,037 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:22,050 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:22,073 p=23396 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-21 07:20:22,106 p=23396 u=mistral | skipping: [controller-0] => (item=/var/lib/nova) => {"changed": false, "item": "/var/lib/nova", "skip_reason": 
"Conditional result was False"} >2018-06-21 07:20:22,107 p=23396 u=mistral | skipping: [controller-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:22,153 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/lib/nova) => {"changed": false, "item": "/var/lib/nova", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:22,156 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:22,452 p=23396 u=mistral | ok: [compute-0] => (item=/var/lib/nova) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/nova", "mode": "0755", "owner": "root", "path": "/var/lib/nova", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:22,758 p=23396 u=mistral | ok: [compute-0] => (item=/var/lib/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/libvirt", "mode": "0755", "owner": "root", "path": "/var/lib/libvirt", "secontext": "system_u:object_r:virt_var_lib_t:s0", "size": 104, "state": "directory", "uid": 0} >2018-06-21 07:20:22,781 p=23396 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-06-21 07:20:22,813 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:22,861 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,159 p=23396 u=mistral | ok: [compute-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:23,183 p=23396 u=mistral | TASK [is Instance HA enabled] 
************************************************** >2018-06-21 07:20:23,212 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,249 p=23396 u=mistral | ok: [compute-0] => {"ansible_facts": {"instance_ha_enabled": false}, "changed": false} >2018-06-21 07:20:23,253 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,276 p=23396 u=mistral | TASK [prepare Instance HA script directory] ************************************ >2018-06-21 07:20:23,306 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,331 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,343 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,365 p=23396 u=mistral | TASK [install Instance HA script that runs nova-compute] *********************** >2018-06-21 07:20:23,398 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,423 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,434 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,456 p=23396 u=mistral | TASK [Get list of instance HA compute nodes] *********************************** >2018-06-21 07:20:23,512 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,513 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,525 p=23396 u=mistral | skipping: [ceph-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,546 p=23396 u=mistral | TASK [If instance HA is enabled on the node activate the evacuation completed check] *** >2018-06-21 07:20:23,576 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,601 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,611 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,631 p=23396 u=mistral | TASK [create libvirt persistent data directories] ****************************** >2018-06-21 07:20:23,689 p=23396 u=mistral | skipping: [controller-0] => (item=/etc/libvirt) => {"changed": false, "item": "/etc/libvirt", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,690 p=23396 u=mistral | skipping: [controller-0] => (item=/etc/libvirt/secrets) => {"changed": false, "item": "/etc/libvirt/secrets", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,691 p=23396 u=mistral | skipping: [controller-0] => (item=/etc/libvirt/qemu) => {"changed": false, "item": "/etc/libvirt/qemu", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,692 p=23396 u=mistral | skipping: [controller-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,692 p=23396 u=mistral | skipping: [controller-0] => (item=/var/log/containers/libvirt) => {"changed": false, "item": "/var/log/containers/libvirt", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,705 p=23396 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt) => {"changed": false, "item": "/etc/libvirt", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,710 p=23396 u=mistral | skipping: [ceph-0] => 
(item=/etc/libvirt/secrets) => {"changed": false, "item": "/etc/libvirt/secrets", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,713 p=23396 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt/qemu) => {"changed": false, "item": "/etc/libvirt/qemu", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,719 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:23,724 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/libvirt) => {"changed": false, "item": "/var/log/containers/libvirt", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:24,031 p=23396 u=mistral | ok: [compute-0] => (item=/etc/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt", "mode": "0700", "owner": "root", "path": "/etc/libvirt", "secontext": "system_u:object_r:virt_etc_t:s0", "size": 215, "state": "directory", "uid": 0} >2018-06-21 07:20:24,327 p=23396 u=mistral | ok: [compute-0] => (item=/etc/libvirt/secrets) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt/secrets", "mode": "0700", "owner": "root", "path": "/etc/libvirt/secrets", "secontext": "system_u:object_r:virt_etc_rw_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:24,627 p=23396 u=mistral | ok: [compute-0] => (item=/etc/libvirt/qemu) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt/qemu", "mode": "0700", "owner": "root", "path": "/etc/libvirt/qemu", "secontext": "system_u:object_r:virt_etc_rw_t:s0", "size": 22, "state": "directory", "uid": 0} >2018-06-21 07:20:24,922 p=23396 u=mistral | ok: [compute-0] => (item=/var/lib/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/libvirt", "mode": "0755", "owner": "root", "path": "/var/lib/libvirt", "secontext": "system_u:object_r:virt_var_lib_t:s0", "size": 104, "state": "directory", "uid": 
0} >2018-06-21 07:20:25,231 p=23396 u=mistral | ok: [compute-0] => (item=/var/log/containers/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/libvirt", "mode": "0755", "owner": "root", "path": "/var/log/containers/libvirt", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:25,255 p=23396 u=mistral | TASK [ensure qemu group is present on the host] ******************************** >2018-06-21 07:20:25,283 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:25,319 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:25,754 p=23396 u=mistral | ok: [compute-0] => {"changed": false, "gid": 107, "name": "qemu", "state": "present", "system": false} >2018-06-21 07:20:25,775 p=23396 u=mistral | TASK [ensure qemu user is present on the host] ********************************* >2018-06-21 07:20:25,806 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:25,844 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:26,340 p=23396 u=mistral | ok: [compute-0] => {"append": false, "changed": false, "comment": "qemu user", "group": 107, "home": "/", "move_home": false, "name": "qemu", "shell": "/sbin/nologin", "state": "present", "uid": 107} >2018-06-21 07:20:26,362 p=23396 u=mistral | TASK [create directory for vhost-user sockets with qemu ownership] ************* >2018-06-21 07:20:26,390 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:26,426 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:26,710 p=23396 u=mistral | ok: 
[compute-0] => {"changed": false, "gid": 107, "group": "qemu", "mode": "0755", "owner": "qemu", "path": "/var/lib/vhost_sockets", "secontext": "system_u:object_r:virt_cache_t:s0", "size": 6, "state": "directory", "uid": 107} >2018-06-21 07:20:26,735 p=23396 u=mistral | TASK [check if libvirt is installed] ******************************************* >2018-06-21 07:20:26,765 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:26,803 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:27,116 p=23396 u=mistral | [WARNING]: Consider using the yum, dnf or zypper module rather than running >rpm. If you need to use command because yum, dnf or zypper is insufficient you >can add warn=False to this command task or set command_warnings=False in >ansible.cfg to get rid of this message. > >2018-06-21 07:20:27,117 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "cmd": ["/usr/bin/rpm", "-q", "libvirt-daemon"], "delta": "0:00:00.030079", "end": "2018-06-21 07:20:27.525053", "failed_when_result": false, "rc": 0, "start": "2018-06-21 07:20:27.494974", "stderr": "", "stderr_lines": [], "stdout": "libvirt-daemon-3.9.0-14.el7_5.5.x86_64", "stdout_lines": ["libvirt-daemon-3.9.0-14.el7_5.5.x86_64"]} >2018-06-21 07:20:27,138 p=23396 u=mistral | TASK [make sure libvirt services are disabled] ********************************* >2018-06-21 07:20:27,165 p=23396 u=mistral | skipping: [controller-0] => (item=libvirtd.service) => {"changed": false, "item": "libvirtd.service", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:27,166 p=23396 u=mistral | skipping: [controller-0] => (item=virtlogd.socket) => {"changed": false, "item": "virtlogd.socket", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:27,205 p=23396 u=mistral | skipping: [ceph-0] => (item=libvirtd.service) => {"changed": false, "item": 
"libvirtd.service", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:27,208 p=23396 u=mistral | skipping: [ceph-0] => (item=virtlogd.socket) => {"changed": false, "item": "virtlogd.socket", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:27,510 p=23396 u=mistral | ok: [compute-0] => (item=libvirtd.service) => {"changed": false, "enabled": false, "item": "libvirtd.service", "name": "libvirtd.service", "state": "stopped", "status": {"ActiveEnterTimestamp": "Wed 2018-06-20 12:14:59 EDT", "ActiveEnterTimestampMonotonic": "34508723", "ActiveExitTimestamp": "Thu 2018-06-21 07:19:05 EDT", "ActiveExitTimestampMonotonic": "68680187330", "ActiveState": "inactive", "After": "virtlockd.socket remote-fs.target virtlogd.service network.target iscsid.service dbus.service virtlogd.socket basic.target local-fs.target virtlockd.service systemd-journald.socket system.slice apparmor.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Wed 2018-06-20 12:14:59 EDT", "AssertTimestampMonotonic": "34371616", "Before": "shutdown.target libvirt-guests.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2018-06-20 12:14:59 EDT", "ConditionTimestampMonotonic": "34371615", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Virtualization daemon", "DevicePolicy": "auto", "Documentation": "man:libvirtd(8) https://libvirt.org", "EnvironmentFile": "/etc/sysconfig/libvirtd (ignore_errors=yes)", "ExecMainCode": "1", "ExecMainExitTimestamp": "Thu 2018-06-21 07:19:05 EDT", 
"ExecMainExitTimestampMonotonic": "68680197301", "ExecMainPID": "1169", "ExecMainStartTimestamp": "Wed 2018-06-20 12:14:59 EDT", "ExecMainStartTimestampMonotonic": "34373183", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/libvirtd ; argv[]=/usr/sbin/libvirtd $LIBVIRTD_ARGS ; ignore_errors=no ; start_time=[Wed 2018-06-20 12:14:59 EDT] ; stop_time=[Thu 2018-06-21 07:19:05 EDT] ; pid=1169 ; code=exited ; status=0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/libvirtd.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "libvirtd.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Thu 2018-06-21 07:19:05 EDT", "InactiveEnterTimestampMonotonic": "68680197386", "InactiveExitTimestamp": "Wed 2018-06-20 12:14:59 EDT", "InactiveExitTimestampMonotonic": "34373237", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "8192", "LimitNPROC": "22967", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "libvirtd.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": 
"replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "virtlockd.socket basic.target virtlogd.socket", "Restart": "on-failure", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "32768", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "disabled", "WantedBy": "libvirt-guests.service", "Wants": "system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-06-21 07:20:27,817 p=23396 u=mistral | ok: [compute-0] => (item=virtlogd.socket) => {"changed": false, "enabled": false, "item": "virtlogd.socket", "name": "virtlogd.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestamp": "Wed 2018-06-20 12:14:28 EDT", "ActiveEnterTimestampMonotonic": "2928525", "ActiveExitTimestamp": "Thu 2018-06-21 07:19:05 EDT", "ActiveExitTimestampMonotonic": "68680380438", "ActiveState": "inactive", "After": "-.mount -.slice sysinit.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", 
"AssertTimestamp": "Wed 2018-06-20 12:14:28 EDT", "AssertTimestampMonotonic": "2927801", "Backlog": "128", "Before": "sockets.target shutdown.target virtlogd.service libvirtd.service", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2018-06-20 12:14:28 EDT", "ConditionTimestampMonotonic": "2927801", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Virtual machine log manager socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "FragmentPath": "/usr/lib/systemd/system/virtlogd.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "virtlogd.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Thu 2018-06-21 07:19:05 EDT", "InactiveEnterTimestampMonotonic": "68680380438", "InactiveExitTimestamp": "Wed 2018-06-20 12:14:28 EDT", "InactiveExitTimestampMonotonic": "2928525", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "22967", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": 
"18446744073709551615", "LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "ListenStream": "/var/run/libvirt/virtlogd-sock", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "virtlogd.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "RequiredBy": "virtlogd.service libvirtd.service", "Requires": "-.mount sysinit.target", "RequiresMountsFor": "/var/run/libvirt/virtlogd-sock", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "virtlogd.service", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "disabled", "Wants": "-.slice"}} >2018-06-21 07:20:27,844 p=23396 u=mistral | TASK [create persistent directories] ******************************************* 
>2018-06-21 07:20:27,877 p=23396 u=mistral | skipping: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:27,878 p=23396 u=mistral | skipping: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:27,905 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:27,906 p=23396 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:27,921 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:27,926 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:27,947 p=23396 u=mistral | TASK [cinder logs readme] ****************************************************** >2018-06-21 07:20:27,973 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:27,999 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,009 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,029 p=23396 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-06-21 07:20:28,055 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-06-21 07:20:28,078 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,089 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,110 p=23396 u=mistral | TASK [cinder_enable_iscsi_backend fact] **************************************** >2018-06-21 07:20:28,137 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,161 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,172 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,193 p=23396 u=mistral | TASK [cinder create LVM volume group dd] *************************************** >2018-06-21 07:20:28,220 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,242 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,253 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,274 p=23396 u=mistral | TASK [cinder create LVM volume group] ****************************************** >2018-06-21 07:20:28,299 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,322 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,333 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,353 p=23396 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** 
>2018-06-21 07:20:28,379 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,404 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,416 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,437 p=23396 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-06-21 07:20:28,463 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,485 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,495 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,515 p=23396 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-21 07:20:28,546 p=23396 u=mistral | skipping: [controller-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,548 p=23396 u=mistral | skipping: [controller-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,548 p=23396 u=mistral | skipping: [controller-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,572 p=23396 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,573 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was 
False"} >2018-06-21 07:20:28,574 p=23396 u=mistral | skipping: [compute-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,584 p=23396 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,589 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,593 p=23396 u=mistral | skipping: [ceph-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,613 p=23396 u=mistral | TASK [Set swift_use_local_disks fact] ****************************************** >2018-06-21 07:20:28,639 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,662 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,677 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,699 p=23396 u=mistral | TASK [Create Swift d1 directory if needed] ************************************* >2018-06-21 07:20:28,762 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,785 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,796 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,817 p=23396 u=mistral | TASK [Create swift logging symlink] ******************************************** >2018-06-21 07:20:28,841 p=23396 u=mistral | 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,865 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,879 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,899 p=23396 u=mistral | TASK [swift logs readme] ******************************************************* >2018-06-21 07:20:28,925 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,949 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,962 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:20:28,982 p=23396 u=mistral | TASK [Format SwiftRawDisks] **************************************************** >2018-06-21 07:20:29,060 p=23396 u=mistral | TASK [Mount devices defined in SwiftRawDisks] ********************************** >2018-06-21 07:20:29,121 p=23396 u=mistral | PLAY [External deployment step 1] ********************************************** >2018-06-21 07:20:29,140 p=23396 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-06-21 07:20:29,170 p=23396 u=mistral | ok: [undercloud] => {"ansible_facts": {"blacklisted_hostnames": []}, "changed": false} >2018-06-21 07:20:29,188 p=23396 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-06-21 07:20:29,393 p=23396 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/group_vars) => {"changed": true, "gid": 985, "group": "mistral", "item": "/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/group_vars", "mode": "0755", "owner": "mistral", "path": 
"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/group_vars", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 988} >2018-06-21 07:20:29,554 p=23396 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/host_vars) => {"changed": true, "gid": 985, "group": "mistral", "item": "/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/host_vars", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/host_vars", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 988} >2018-06-21 07:20:29,720 p=23396 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir) => {"changed": true, "gid": 985, "group": "mistral", "item": "/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 988} >2018-06-21 07:20:29,738 p=23396 u=mistral | TASK [generate inventory] ****************************************************** >2018-06-21 07:20:30,331 p=23396 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "87ac4959715a33a06028c69b6c3ea4a5d7293cae", "dest": "/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/inventory.yml", "gid": 985, "group": "mistral", "md5sum": "979b46b7bc4f15cc49e1ab2540ac09dc", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 525, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529580030.02-263309722477408/source", "state": "file", "uid": 988} >2018-06-21 07:20:30,348 p=23396 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-06-21 07:20:30,384 
p=23396 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_all": {"ceph_conf_overrides": {"global": {"osd_pool_default_pg_num": 32, "osd_pool_default_pgp_num": 32, "osd_pool_default_size": 1, "rgw_keystone_accepted_roles": "Member, admin", "rgw_keystone_admin_domain": "default", "rgw_keystone_admin_password": "r4vvqGIopZIGavHfqwBD5EZm2", "rgw_keystone_admin_project": "service", "rgw_keystone_admin_user": "swift", "rgw_keystone_api_version": 3, "rgw_keystone_implicit_tenants": "true", "rgw_keystone_revocation_interval": "0", "rgw_keystone_url": "http://172.17.1.17:5000", "rgw_s3_auth_use_keystone": "true"}}, "ceph_docker_image": "rhceph", "ceph_docker_image_tag": "3-6", "ceph_docker_registry": "192.168.24.1:8787", "ceph_origin": "distro", "ceph_stable": true, "cluster": "ceph", "cluster_network": "172.17.4.0/24", "containerized_deployment": true, "docker": true, "fsid": "53912472-747b-11e8-95a3-5254003d7dcb", "generate_fsid": false, "ip_version": "ipv4", "keys": [{"key": "AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow r", "name": "client.openstack", "osd_cap": "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics"}, {"key": "AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==", "mds_cap": "allow *", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow r, allow command \\\"auth del\\\", allow command \\\"auth caps\\\", allow command \\\"auth get\\\", allow command \\\"auth get-or-create\\\"", "name": "client.manila", "osd_cap": "allow rw"}, {"key": "AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow rw", "name": "client.radosgw", "osd_cap": "allow rwx"}], "monitor_address_block": "172.17.3.0/24", "ntp_service_enabled": false, "openstack_config": true, "openstack_keys": [{"key": "AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==", "mgr_cap": "allow *", 
"mode": "0600", "mon_cap": "allow r", "name": "client.openstack", "osd_cap": "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics"}, {"key": "AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==", "mds_cap": "allow *", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow r, allow command \\\"auth del\\\", allow command \\\"auth caps\\\", allow command \\\"auth get\\\", allow command \\\"auth get-or-create\\\"", "name": "client.manila", "osd_cap": "allow rw"}, {"key": "AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow rw", "name": "client.radosgw", "osd_cap": "allow rwx"}], "openstack_pools": [{"application": "rbd", "name": "images", "pg_num": 32, "rule_name": ""}, {"application": "openstack_gnocchi", "name": "metrics", "pg_num": 32, "rule_name": ""}, {"application": "rbd", "name": "backups", "pg_num": 32, "rule_name": ""}, {"application": "rbd", "name": "vms", "pg_num": 32, "rule_name": ""}, {"application": "rbd", "name": "volumes", "pg_num": 32, "rule_name": ""}], "pools": [], "public_network": "172.17.3.0/24", "user_config": true}}, "changed": false} >2018-06-21 07:20:30,405 p=23396 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-06-21 07:20:30,748 p=23396 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "2ef1c16fef5f2acadbb7d229126152ecda226303", "dest": "/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/group_vars/all.yml", "gid": 985, "group": "mistral", "md5sum": "253bfbf148fef2712fbcc2a2f29c2d8a", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 3030, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529580030.44-244034180971155/source", "state": "file", "uid": 988} >2018-06-21 07:20:30,765 p=23396 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* 
>2018-06-21 07:20:30,796 p=23396 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_extra_vars": {"fetch_directory": "/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir", "ireallymeanit": "yes"}}, "changed": false}
>2018-06-21 07:20:30,814 p=23396 u=mistral | TASK [generate ceph-ansible extra vars] ****************************************
>2018-06-21 07:20:31,137 p=23396 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "73e3fb1775a3fe3ab317670f0de9b6c6b7ab4805", "dest": "/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/extra_vars.yml", "gid": 985, "group": "mistral", "md5sum": "cc824dc8c6fb85db45bca269599b2e14", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 115, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529580030.85-8427583327018/source", "state": "file", "uid": 988}
>2018-06-21 07:20:31,156 p=23396 u=mistral | TASK [generate collect nodes uuid playbook] ************************************
>2018-06-21 07:20:31,489 p=23396 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "0ed9243967d775f1d706f954c81c53dbea91f151", "dest": "/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/nodes_uuid_playbook.yml", "gid": 985, "group": "mistral", "md5sum": "afa7e006582a1713f57c3de7724c9f39", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 157, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529580031.18-245624333131894/source", "state": "file", "uid": 988}
>2018-06-21 07:20:31,506 p=23396 u=mistral | TASK [set ceph-ansible verbosity] **********************************************
>2018-06-21 07:20:31,522 p=23396 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-21 07:20:31,539 p=23396 u=mistral | TASK [set ceph-ansible command] ************************************************
>2018-06-21 07:20:31,556 p=23396 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-21 07:20:31,574 p=23396 u=mistral | TASK [run ceph-ansible] ********************************************************
>2018-06-21 07:20:31,592 p=23396 u=mistral | skipping: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": false, "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "skip_reason": "Conditional result was False"}
>2018-06-21 07:20:31,610 p=23396 u=mistral | TASK [set ceph-ansible group vars mgrs] ****************************************
>2018-06-21 07:20:31,636 p=23396 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_mgrs": {"ceph_mgr_docker_extra_env": "-e MGR_DASHBOARD=0"}}, "changed": false}
>2018-06-21 07:20:31,653 p=23396 u=mistral | TASK [generate ceph-ansible group vars mgrs] ***********************************
>2018-06-21 07:20:32,005 p=23396 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "06d130f3471f2ac09bb0161450878cf64bafd8af", "dest": "/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/group_vars/mgrs.yml", "gid": 985, "group": "mistral", "md5sum": "0d3c03a4186ad82120a728e0470a49d9", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 46, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529580031.68-18093583450428/source", "state": "file", "uid": 988}
>2018-06-21 07:20:32,026 p=23396 u=mistral | TASK [set ceph-ansible group vars mons] ****************************************
>2018-06-21 07:20:32,057 p=23396 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_mons": {"admin_secret": "AQB2NypbAAAAABAADYq0x/U/g/5X5IHsGSXANQ==", "monitor_secret": "AQB2NypbAAAAABAA67vSeiofLzzYgrjDnmeGYg=="}}, "changed": false}
>2018-06-21 07:20:32,075 p=23396 u=mistral | TASK [generate ceph-ansible group vars mons] ***********************************
>2018-06-21 07:20:32,422 p=23396 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "719e0f5af2a6bb3f7c520087bffa8e6653fc9cbd", "dest": "/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/group_vars/mons.yml", "gid": 985, "group": "mistral", "md5sum": "6826ff7a84879618ddc5f5704567757d", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 112, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529580032.1-163920689912545/source", "state": "file", "uid": 988}
>2018-06-21 07:20:32,440 p=23396 u=mistral | TASK [set ceph-ansible group vars clients] *************************************
>2018-06-21 07:20:32,470 p=23396 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_clients": {}}, "changed": false}
>2018-06-21 07:20:32,488 p=23396 u=mistral | TASK [generate ceph-ansible group vars clients] ********************************
>2018-06-21 07:20:32,809 p=23396 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/group_vars/clients.yml", "gid": 985, "group": "mistral", "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529580032.52-24955078578559/source", "state": "file", "uid": 988}
>2018-06-21 07:20:32,825 p=23396 u=mistral | TASK [set ceph-ansible group vars osds] ****************************************
>2018-06-21 07:20:32,854 p=23396 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_osds": {"devices": ["/dev/vdb"], "journal_size": 512, "osd_objectstore": "filestore", "osd_scenario": "collocated"}}, "changed": false}
>2018-06-21 07:20:32,871 p=23396 u=mistral | TASK [generate ceph-ansible group vars osds] ***********************************
>2018-06-21 07:20:33,205 p=23396 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "454c7fd1ab87fd8f8ec07c9874039814cbe681cf", "dest": "/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/group_vars/osds.yml", "gid": 985, "group": "mistral", "md5sum": "e03a30f138554d36c1743c14fd3d9357", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 90, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529580032.9-200374640549181/source", "state": "file", "uid": 988}
>2018-06-21 07:20:33,210 p=23396 u=mistral | PLAY [Overcloud deploy step tasks for 1] ***************************************
>2018-06-21 07:20:33,233 p=23396 u=mistral | TASK [include_role] ************************************************************
>2018-06-21 07:20:33,284 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-21 07:20:33,295 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-21 07:20:33,359 p=23396 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] *************************
>2018-06-21 07:20:33,852 p=23396 u=mistral | changed: [controller-0] => {"changed": true}
>2018-06-21 07:20:33,874 p=23396 u=mistral | TASK [container-registry : ensure docker is installed] *************************
>2018-06-21 07:20:34,508 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-63.git94f4240.el7.x86_64 providing docker is already installed"]}
>2018-06-21 07:20:34,533 p=23396 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ********
>2018-06-21 07:20:34,876 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-06-21 07:20:34,898 p=23396 u=mistral | TASK [container-registry : unset mountflags] ***********************************
>2018-06-21 07:20:35,400 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0}
>2018-06-21 07:20:35,421 p=23396 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] *********
>2018-06-21 07:20:35,938 p=23396 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"}
>2018-06-21 07:20:35,960 p=23396 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] ***
>2018-06-21 07:20:36,314 p=23396 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line added"}
>2018-06-21 07:20:36,335 p=23396 u=mistral | TASK [container-registry : Create additional socket directories] ***************
>2018-06-21 07:20:36,689 p=23396 u=mistral | changed: [controller-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-06-21 07:20:36,716 p=23396 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] *********************
>2018-06-21 07:20:37,365 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580036.76-101594677700946/source", "state": "file", "uid": 0}
>2018-06-21 07:20:37,387 p=23396 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] ***
>2018-06-21 07:20:37,738 p=23396 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"}
>2018-06-21 07:20:37,759 p=23396 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] ***
>2018-06-21 07:20:38,109 p=23396 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"}
>2018-06-21 07:20:38,130 p=23396 u=mistral | TASK [container-registry : ensure docker group exists] *************************
>2018-06-21 07:20:38,484 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false}
>2018-06-21 07:20:38,507 p=23396 u=mistral | TASK [container-registry : add deployment user to docker group] ****************
>2018-06-21 07:20:38,529 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-21 07:20:38,554 p=23396 u=mistral | TASK [container-registry : force systemd to reread configs] ********************
>2018-06-21 07:20:38,987 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "name": null, "status": {}}
>2018-06-21 07:20:39,012 p=23396 u=mistral | TASK [container-registry : enable and start docker] ****************************
>2018-06-21 07:20:40,744 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "rhel-push-plugin.socket registries.service system.slice docker-storage-setup.service basic.target network.target systemd-journald.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127793", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "basic.target rhel-push-plugin.socket docker-cleanup.timer registries.service", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}}
>2018-06-21 07:20:40,768 p=23396 u=mistral | TASK [include_role] ************************************************************
>2018-06-21 07:20:40,797 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-21 07:20:40,834 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-21 07:20:40,877 p=23396 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] *************************
>2018-06-21 07:20:41,263 p=23396 u=mistral | changed: [compute-0] => {"changed": true}
>2018-06-21 07:20:41,281 p=23396 u=mistral | TASK [container-registry : ensure docker is installed] *************************
>2018-06-21 07:20:41,920 p=23396 u=mistral | ok: [compute-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-63.git94f4240.el7.x86_64 providing docker is already installed"]}
>2018-06-21 07:20:41,938 p=23396 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ********
>2018-06-21 07:20:42,267 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-06-21 07:20:42,288 p=23396 u=mistral | TASK [container-registry : unset mountflags] ***********************************
>2018-06-21 07:20:42,614 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0}
>2018-06-21 07:20:42,631 p=23396 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] *********
>2018-06-21 07:20:42,961 p=23396 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"}
>2018-06-21 07:20:42,979 p=23396 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] ***
>2018-06-21 07:20:43,311 p=23396 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line added"}
>2018-06-21 07:20:43,329 p=23396 u=mistral | TASK [container-registry : Create additional socket directories] ***************
>2018-06-21 07:20:43,657 p=23396 u=mistral | changed: [compute-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-06-21 07:20:43,683 p=23396 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] *********************
>2018-06-21 07:20:44,276 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580043.73-62418268500926/source", "state": "file", "uid": 0}
>2018-06-21 07:20:44,294 p=23396 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] ***
>2018-06-21 07:20:44,623 p=23396 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"}
>2018-06-21 07:20:44,641 p=23396 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] ***
>2018-06-21 07:20:44,970 p=23396 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"}
>2018-06-21 07:20:44,989 p=23396 u=mistral | TASK [container-registry : ensure docker group exists] *************************
>2018-06-21 07:20:45,324 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false}
>2018-06-21 07:20:45,343 p=23396 u=mistral | TASK [container-registry : add deployment user to docker group] ****************
>2018-06-21 07:20:45,366 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-21 07:20:45,385 p=23396 u=mistral | TASK [container-registry : force systemd to reread configs] ********************
>2018-06-21 07:20:45,775 p=23396 u=mistral | ok: [compute-0] => {"changed": false, "name": null, "status": {}}
>2018-06-21 07:20:45,793 p=23396 u=mistral | TASK [container-registry : enable and start docker] ****************************
>2018-06-21 07:20:47,540 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "system.slice network.target basic.target docker-storage-setup.service registries.service rhel-push-plugin.socket systemd-journald.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "basic.target docker-cleanup.timer rhel-push-plugin.socket registries.service", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}}
>2018-06-21 07:20:47,563 p=23396 u=mistral | TASK [include_role] ************************************************************
>2018-06-21 07:20:47,592 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-21 07:20:47,618 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-21 07:20:47,631 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-21 07:20:47,652 p=23396 u=mistral | TASK [include_role] ************************************************************
>2018-06-21 07:20:47,679 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-21 07:20:47,704 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-21 07:20:47,715 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-21 07:20:47,737 p=23396 u=mistral | TASK [include_role] ************************************************************
>2018-06-21 07:20:47,766 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-21 07:20:47,791 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-21 07:20:47,845 p=23396 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] *************************
>2018-06-21 07:20:48,155 p=23396 u=mistral | changed: [ceph-0] => {"changed": true}
>2018-06-21 07:20:48,173 p=23396 u=mistral | TASK [container-registry : ensure docker is installed] *************************
>2018-06-21 07:20:48,733 p=23396 u=mistral | ok: [ceph-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-63.git94f4240.el7.x86_64 providing docker is already installed"]}
>2018-06-21 07:20:48,755 p=23396 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ********
>2018-06-21 07:20:49,066 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-06-21 07:20:49,085 p=23396 u=mistral | TASK [container-registry : unset mountflags] ***********************************
>2018-06-21 07:20:49,397 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0}
>2018-06-21 07:20:49,415 p=23396 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] *********
>2018-06-21 07:20:49,729 p=23396 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"}
>2018-06-21 07:20:49,746 p=23396 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] ***
>2018-06-21 07:20:50,070 p=23396 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line added"}
>2018-06-21 07:20:50,087 p=23396 u=mistral | TASK [container-registry : Create additional socket directories] ***************
>2018-06-21 07:20:50,399 p=23396 u=mistral | changed: [ceph-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-06-21 07:20:50,426 p=23396 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] *********************
>2018-06-21 07:20:50,988 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580050.47-40349070191437/source", "state": "file", "uid": 0}
>2018-06-21 07:20:51,005 p=23396 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] ***
>2018-06-21 07:20:51,320 p=23396 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"}
>2018-06-21 07:20:51,338 p=23396 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] ***
>2018-06-21 07:20:51,652 p=23396 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"}
>2018-06-21 07:20:51,668 p=23396 u=mistral | TASK [container-registry : ensure docker group exists] *************************
>2018-06-21 07:20:51,983 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false}
>2018-06-21 07:20:52,002 p=23396 u=mistral | TASK [container-registry : add deployment user to docker group] ****************
>2018-06-21 07:20:52,025 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-21 07:20:52,043 p=23396 u=mistral | TASK [container-registry : force systemd to reread configs] ********************
>2018-06-21 07:20:52,417 p=23396 u=mistral | ok: [ceph-0] => {"changed": false, "name": null, "status": {}}
>2018-06-21 07:20:52,436 p=23396 u=mistral | TASK [container-registry : enable and start docker] ****************************
>2018-06-21 07:20:54,134 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "rhel-push-plugin.socket network.target systemd-journald.socket basic.target docker-storage-setup.service registries.service system.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "14904", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "rhel-push-plugin.socket basic.target registries.service docker-cleanup.timer", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}}
>2018-06-21 07:20:54,135 p=23396 u=mistral | RUNNING HANDLER [container-registry : restart docker] **************************
>2018-06-21 07:20:56,814 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Thu 2018-06-21 07:20:41 EDT", "ActiveEnterTimestampMonotonic": "68775820261", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "system.slice basic.target rhel-push-plugin.socket systemd-journald.socket network.target docker-storage-setup.service registries.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Thu 2018-06-21 07:20:39 EDT", "AssertTimestampMonotonic": "68774647964", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Thu 2018-06-21 07:20:39 EDT", "ConditionTimestampMonotonic": "68774647964", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "20196", "ExecMainStartTimestamp": "Thu 2018-06-21 07:20:39 EDT", "ExecMainStartTimestampMonotonic": "68774649162", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Thu 2018-06-21 07:20:39 EDT] ; stop_time=[n/a] ; pid=20196 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Thu 2018-06-21 07:20:39 EDT", "InactiveExitTimestampMonotonic": "68774649197", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127793", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "20196", "MemoryAccounting": "no", "MemoryCurrent": "65662976", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0",
"OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "registries.service basic.target docker-cleanup.timer rhel-push-plugin.socket", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "24", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestamp": "Thu 2018-06-21 07:20:41 EDT", "WatchdogTimestampMonotonic": "68775820214", "WatchdogUSec": "0"}} >2018-06-21 07:20:56,826 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Thu 2018-06-21 07:20:47 EDT", "ActiveEnterTimestampMonotonic": "68782502478", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "network.target system.slice registries.service systemd-journald.socket docker-storage-setup.service 
rhel-push-plugin.socket basic.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Thu 2018-06-21 07:20:46 EDT", "AssertTimestampMonotonic": "68781273752", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Thu 2018-06-21 07:20:46 EDT", "ConditionTimestampMonotonic": "68781273752", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "30227", "ExecMainStartTimestamp": "Thu 2018-06-21 07:20:46 EDT", "ExecMainStartTimestampMonotonic": "68781274943", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current 
--init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Thu 2018-06-21 07:20:46 EDT] ; stop_time=[n/a] ; pid=30227 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Thu 2018-06-21 07:20:46 EDT", "InactiveExitTimestampMonotonic": "68781274974", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "30227", "MemoryAccounting": "no", "MemoryCurrent": "65548288", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "docker-cleanup.timer basic.target 
registries.service rhel-push-plugin.socket", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "20", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestamp": "Thu 2018-06-21 07:20:47 EDT", "WatchdogTimestampMonotonic": "68782502335", "WatchdogUSec": "0"}} >2018-06-21 07:20:56,865 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Thu 2018-06-21 07:20:54 EDT", "ActiveEnterTimestampMonotonic": "68789057960", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "basic.target rhel-push-plugin.socket registries.service systemd-journald.socket network.target docker-storage-setup.service system.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Thu 2018-06-21 07:20:53 EDT", "AssertTimestampMonotonic": "68787859837", "Before": "multi-user.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": 
"no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Thu 2018-06-21 07:20:53 EDT", "ConditionTimestampMonotonic": "68787859837", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "27160", "ExecMainStartTimestamp": "Thu 2018-06-21 07:20:53 EDT", "ExecMainStartTimestampMonotonic": "68787860887", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Thu 2018-06-21 07:20:53 EDT] ; stop_time=[n/a] ; pid=27160 ; code=(null) ; status=0/0 }", "FailureAction": 
"none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Thu 2018-06-21 07:20:53 EDT", "InactiveExitTimestampMonotonic": "68787860916", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "14904", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "27160", "MemoryAccounting": "no", "MemoryCurrent": "60137472", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "rhel-push-plugin.socket registries.service basic.target docker-cleanup.timer", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": 
"journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "16", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestamp": "Thu 2018-06-21 07:20:54 EDT", "WatchdogTimestampMonotonic": "68789057817", "WatchdogUSec": "0"}} >2018-06-21 07:20:56,871 p=23396 u=mistral | PLAY [Overcloud common deploy step tasks 1] ************************************ >2018-06-21 07:20:56,899 p=23396 u=mistral | TASK [Create /var/lib/tripleo-config directory] ******************************** >2018-06-21 07:20:57,337 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:57,390 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:57,391 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} 
>2018-06-21 07:20:57,414 p=23396 u=mistral | TASK [Write the puppet step_config manifest] *********************************** >2018-06-21 07:20:58,086 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "149113e83b0cb4d05192576bcff7b6fc0f316bd0", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "66bedc7c4ccee7cb079b118c09f8c08c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1630, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580057.45-252350073444121/source", "state": "file", "uid": 0} >2018-06-21 07:20:58,110 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "be3cadf4421fbe374d33f269513ff6e3f1c7ab66", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "86461fb932aeaba90516617c8168d5f2", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1576, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580057.48-256178556966279/source", "state": "file", "uid": 0} >2018-06-21 07:20:58,116 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "f8a32eb42203ada5e675fbde141df7f32100af5c", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "c727dc3c35ede89e7c3d894e3fb81da4", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1588, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580057.51-16454209069422/source", "state": "file", "uid": 0} >2018-06-21 07:20:58,137 p=23396 u=mistral | TASK [Create /var/lib/docker-puppet] ******************************************* >2018-06-21 07:20:58,499 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} 
>2018-06-21 07:20:58,532 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-06-21 07:20:58,537 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-06-21 07:20:58,558 p=23396 u=mistral | TASK [Write docker-puppet.json file] ******************************************* >2018-06-21 07:20:59,259 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "c8d0c143121b7904490da6698d68f76bf1957b51", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "c6d9b1246ac65ebadc18213639c2431d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 234, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580058.65-204491528653034/source", "state": "file", "uid": 0} >2018-06-21 07:20:59,262 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "c5bc7cf017025a018ebda9dd2ad6aac290a51bef", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "b53dfdbc008416d050550640e4219f21", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 13304, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580058.64-94196699619156/source", "state": "file", "uid": 0} >2018-06-21 07:20:59,270 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "09cb610f7fea36dc33be3297b42ac38af987732e", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "e806efb887de6e5795dea0490c302e84", "mode": "0600", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 2288, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580058.63-120526132301333/source", "state": "file", "uid": 0} >2018-06-21 07:20:59,293 p=23396 u=mistral | TASK [Create /var/lib/docker-config-scripts] *********************************** >2018-06-21 07:20:59,645 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:59,679 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:59,704 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:20:59,728 p=23396 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >2018-06-21 07:21:00,077 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-06-21 07:21:00,121 p=23396 u=mistral | ok: [compute-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-06-21 07:21:00,144 p=23396 u=mistral | ok: [ceph-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-06-21 07:21:00,169 p=23396 u=mistral | TASK [Write docker config scripts] ********************************************* >2018-06-21 07:21:00,844 p=23396 u=mistral | changed: [compute-0] => (item={'value': {'content': u'#!/bin/bash\nset 
-xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": true, "checksum": "03f62b0a94bee17ece72ba1a3fc7577e68d9e6a4", "dest": "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh", "gid": 0, "group": "root", "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "md5sum": "1672c3fb89d576d045d5f3d5b23684c9", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580060.26-267745997959696/source", "state": "file", "uid": 0} >2018-06-21 07:21:00,864 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'content': 
u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f 
value | awk \'$2 != "internal" { print $1 }\' | sort -u | tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', 'mode': u'0700'}, 'key': 'nova_api_discover_hosts.sh'}) => {"changed": true, "checksum": "4e350e3d48cba294f2ccab34eb03c1dee23e7f82", "dest": "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh", "gid": 0, "group": "root", "item": {"key": "nova_api_discover_hosts.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( 
${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}}, "md5sum": "ed5dca102b28b4f992943612dee2dced", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2318, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580060.26-228216774746629/source", "state": "file", "uid": 0} >2018-06-21 07:21:01,463 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if 
secret already exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', 'mode': u'0700'}, 'key': 'create_swift_secret.sh'}) => {"changed": true, "checksum": "e77b96beec241bb84928d298a778521376225c0d", "dest": "/var/lib/docker-config-scripts/create_swift_secret.sh", "gid": 0, "group": "root", "item": {"key": "create_swift_secret.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}}, "md5sum": "9277d70c2fd62961998c5fce0a8aeee2", 
"mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1125, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580060.89-123493306208104/source", "state": "file", "uid": 0} >2018-06-21 07:21:02,062 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': 'neutron_ovs_agent_launcher.sh'}) => {"changed": true, "checksum": "03f62b0a94bee17ece72ba1a3fc7577e68d9e6a4", "dest": "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh", "gid": 0, "group": "root", "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "md5sum": 
"1672c3fb89d576d045d5f3d5b23684c9", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580061.49-53788260152810/source", "state": "file", "uid": 0} >2018-06-21 07:21:02,654 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', 'mode': u'0700'}, 'key': 'set_swift_keymaster_key_id.sh'}) => {"changed": true, "checksum": "9c2474fa6e4a8869674b689206eb1a1658a28fc6", "dest": "/var/lib/docker-config-scripts/set_swift_keymaster_key_id.sh", "gid": 0, "group": "root", "item": {"key": "set_swift_keymaster_key_id.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get 
/etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}}, "md5sum": "054225f8957e4457ef2103ce24d44b04", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1275, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580062.09-173524462724650/source", "state": "file", "uid": 0} >2018-06-21 07:21:03,232 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho "{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n 
--detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', 'mode': u'0700'}, 'key': 'docker_puppet_apply.sh'}) => {"changed": true, "checksum": "93afaa6df42c9ead7768b295fa901f83ae1b39ef", "dest": "/var/lib/docker-config-scripts/docker_puppet_apply.sh", "gid": 0, "group": "root", "item": {"key": "docker_puppet_apply.sh", "value": {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}}, "md5sum": "709b2caef95cc7486f9b851414e71133", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 653, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580062.68-69068776578181/source", "state": "file", "uid": 0} >2018-06-21 07:21:03,804 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n echo "(cellv2) Creating default cell_v2 cell"\n su nova -s 
/bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell --name=default"\nfi\n', 'mode': u'0700'}, 'key': u'nova_api_ensure_default_cell.sh'}) => {"changed": true, "checksum": "0a839197c2fa15204014befb1c771a17aea5bdd1", "dest": "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh", "gid": 0, "group": "root", "item": {"key": "nova_api_ensure_default_cell.sh", "value": {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}}, "md5sum": "12a4a82656ddaae342942097b752d9db", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 442, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580063.26-130492817666926/source", "state": "file", "uid": 0} >2018-06-21 07:21:03,830 p=23396 u=mistral | TASK [Set docker_config_default fact] ****************************************** >2018-06-21 07:21:03,895 p=23396 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:03,910 p=23396 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:03,911 p=23396 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:03,911 p=23396 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been 
hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:03,915 p=23396 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:03,915 p=23396 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:03,915 p=23396 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:03,918 p=23396 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:03,929 p=23396 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:03,929 p=23396 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:03,932 p=23396 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:03,937 p=23396 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:03,938 p=23396 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:03,941 p=23396 u=mistral | ok: 
[compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:03,944 p=23396 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:03,952 p=23396 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:03,959 p=23396 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:03,965 p=23396 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:03,988 p=23396 u=mistral | TASK [Set docker_startup_configs_with_default fact] **************************** >2018-06-21 07:21:04,095 p=23396 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:04,116 p=23396 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:04,539 p=23396 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:21:04,560 p=23396 u=mistral | TASK [Write docker-container-startup-configs] ********************************** >2018-06-21 07:21:05,240 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "3ab40adbbc892a91d5d9de1bf5b100593fd11f83", "dest": "/var/lib/docker-container-startup-configs.json", 
"gid": 0, "group": "root", "md5sum": "e5892fd2ebc46fa64c6393f414e20ab6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 105573, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580064.6-99185855554986/source", "state": "file", "uid": 0} >2018-06-21 07:21:05,242 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "ea8622945980cce2aa6f6a0ec285f28fef454eb3", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "6a2e3c98b99c4f234941b76485bb3f0e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 11909, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580064.62-34560398097440/source", "state": "file", "uid": 0} >2018-06-21 07:21:05,261 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "ce9bc1dccca0cdcaa3098c1a790d78a8c694a5a4", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "ccd9b33a462e8e1243e2dc1f30301019", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1055, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580064.65-217253229347386/source", "state": "file", "uid": 0} >2018-06-21 07:21:05,284 p=23396 u=mistral | TASK [Write per-step docker-container-startup-configs] ************************* >2018-06-21 07:21:05,984 p=23396 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580065.36-29087860988649/source", "state": "file", "uid": 0} >2018-06-21 07:21:05,992 
p=23396 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580065.38-184221551236004/source", "state": "file", "uid": 0} >2018-06-21 07:21:06,007 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'cinder_volume_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_image_tag': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_data_ownership': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], 'user': u'root', 'volumes': 
[u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'memcached_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log'], 'user': u'root', 'volumes': [u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'detach': False, 'privileged': False}, 'redis_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_bootstrap': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=8omuhCCcfP1YuJzPZS8tLp3AL', u'DB_ROOT_PASSWORD=zeHIZe0ICg'], 'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT 
OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'haproxy_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': 
False}, 'cinder_backup_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_bootstrap': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=n8jIt9appI3hU5NXoG3W'], 'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], 'net': u'host', 'privileged': False}, 'memcached': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'net': u'host', 'privileged': False, 'restart': u'always'}}, 'key': u'step_1'}) => {"changed": true, "checksum": "6ed04ef67fe6d8f97037e1cd69a5309ba391ac53", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", 
"volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "memcached_init_logs": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout 
${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=8omuhCCcfP1YuJzPZS8tLp3AL", "DB_ROOT_PASSWORD=zeHIZe0ICg"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": 
"192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=n8jIt9appI3hU5NXoG3W"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}}, 
"md5sum": "04ad0163fb197eeb581f7e65b7213dab", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 7434, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580065.36-232275468553668/source", "state": "file", "uid": 0} >2018-06-21 07:21:06,589 p=23396 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_3'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580066.0-162752122721611/source", "state": "file", "uid": 0} >2018-06-21 07:21:06,595 p=23396 u=mistral | changed: [compute-0] => (item={'value': {'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'nova_libvirt': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/var/log/containers/libvirt:/var/log/libvirt', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro', u'/var/lib/vhost_sockets:/var/lib/vhost_sockets', u'/sys/fs/selinux:/sys/fs/selinux'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': 
[u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_virtlogd': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', 
u'/etc/libvirt/qemu:/etc/libvirt/qemu:ro', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu'], 'net': u'host', 'privileged': True, 'restart': u'always'}}, 'key': u'step_3'}) => {"changed": true, "checksum": "7410b402d81937d9a195a3bf5e8207fa09cdb6e0", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", 
"/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}}, "md5sum": "57cce5acf78ba9c384000a575f958249", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 5050, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580065.99-47534169084308/source", "state": "file", "uid": 0} >2018-06-21 07:21:06,677 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'nova_placement': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'restart': u'always'}, 'nova_db_sync': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', 
u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'heat_engine_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_copy_rings': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'detach': False, 'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, 'nova_api_ensure_default_cell': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], 'net': u'host', 'detach': False}, 'keystone_cron': {'start_order': 4, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': 
False, 'restart': u'always'}, 'panko_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_backup_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'nova_api_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'keystone_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'detach': False, 'privileged': False}, 'ceilometer_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'start_order': 0, 'volumes': [u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'user': u'root'}, 'keystone': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', 
u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'cinder_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'nova_api_map_cell0': {'start_order': 1, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'glance_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'detach': False, 'privileged': False}, 'neutron_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'sahara_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'detach': False, 'privileged': False}, 'keystone_bootstrap': {'action': u'exec', 'start_order': 3, 'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'6CLNy5Ewot5UhcBYmt27oGDMD'], 'user': u'root'}, 'horizon': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_setup_srv': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'command': [u'chown', u'-R', u'swift:', u'/srv/node'], 'user': u'root', 'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) => {"changed": true, "checksum": "16f70a31b7af2c706e6f92cce58994006ac0aab9", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "start_order": 0, "user": "root", "volumes": 
["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", 
"ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "6CLNy5Ewot5UhcBYmt27oGDMD"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", 
"net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", 
"/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", 
"/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "user": "root", "volumes": ["/srv/node:/srv/node"]}}}, "md5sum": "96751e80b3a4c2d2ff5e757c69bbd0f1", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 21820, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580066.01-266631492309289/source", "state": "file", "uid": 0} >2018-06-21 07:21:07,194 p=23396 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580066.6-238819047491591/source", "state": "file", "uid": 0} >2018-06-21 07:21:07,209 p=23396 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": 
"/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580066.6-215148643836902/source", "state": "file", "uid": 0} >2018-06-21 07:21:07,343 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'gnocchi_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], 'user': u'root', 'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, 'mysql_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529579258'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/mysql:/var/lib/mysql:rw'], 'net': u'host', 'detach': False}, 'gnocchi_init_lib': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], 'user': u'root', 'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, 'cinder_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'user': u'root'}, 'create_dnsmasq_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'panko_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R panko:panko 
/var/log/panko'], 'user': u'root', 'volumes': [u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, 'redis_init_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529579258'], 'config_volume': u'redis_init_bundle', 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_scheduler_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'user': u'root'}, 'glance_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], 'privileged': False, 'volumes': 
[u'/var/log/containers/glance:/var/log/glance'], 'user': u'root'}, 'clustercheck': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'restart': u'always'}, 'haproxy_init_bundle': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529579258'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False, 'privileged': True}, 'neutron_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], 'privileged': False, 'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], 'user': u'root'}, 'mysql_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'config_volume': u'mysql', 'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'rabbitmq_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529579258'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], 'net': u'host', 'detach': False}, 'nova_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], 'user': u'root'}, 'haproxy_restart_bundle': 
{'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'config_volume': u'haproxy', 'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'create_keepalived_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'rabbitmq_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'config_volume': u'rabbitmq', 'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'horizon_fix_perms': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'user': u'root', 'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, 'aodh_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], 'user': u'root', 'volumes': 
[u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, 'nova_metadata_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova'], 'user': u'root'}, 'redis_restart_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'config_volume': u'redis', 'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'heat_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], 'user': u'root', 'volumes': [u'/var/log/containers/heat:/var/log/heat']}, 'nova_placement_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'start_order': 1, 'volumes': [u'/var/log/containers/nova:/var/log/nova', 
u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], 'user': u'root'}, 'keystone_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], 'start_order': 1, 'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], 'user': u'root'}}, 'key': u'step_2'}) => {"changed": true, "checksum": "f2783b07534ac45e343e1a0a0ef6f22da7528678", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529579258"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529579258"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "privileged": false, "user": "root", "volumes": 
["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529579258"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", 
"file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529579258"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}}, "md5sum": "1fb82d472e0de01bf5e74ed2464bcd52", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 17318, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580066.67-242494583016318/source", "state": "file", "uid": 0} >2018-06-21 07:21:07,797 p=23396 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580067.2-205660306433143/source", "state": "file", "uid": 0} >2018-06-21 07:21:07,834 p=23396 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580067.22-262396769718942/source", "state": "file", "uid": 0} >2018-06-21 07:21:07,999 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'cinder_volume_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'environment': 
[u'TRIPLEO_DEPLOY_IDENTIFIER=1529579258'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_volume_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_statsd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_backup', u'if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo "openstack-cinder-backup restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_metricd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_discover_hosts': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529579258'], 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', 'user': u'root', 
'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], 'net': u'host', 'detach': False}, 'ceilometer_gnocchi_upgrade': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'detach': False, 'privileged': False}, 'gnocchi_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529579258'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}}, 'key': u'step_5'}) => {"changed": true, "checksum": "214f9a2b4297021fe86afded7118e8c45dba83dd", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", 
"/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529579258"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529579258"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], 
"config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", 
"/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529579258"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}}}, "md5sum": "f26e95dd2ff19f40299549fb7716f30a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 10552, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580067.35-104591690641791/source", "state": "file", "uid": 0} >2018-06-21 07:21:08,449 p=23396 u=mistral | changed: [ceph-0] => (item={'value': 
{'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "8acd94aee3f5b5403e8fb7f16593594f245dafee", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}}, "md5sum": "2aaa44b365bea28e18d96f2f17bef412", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 973, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580067.81-22185915444139/source", "state": "file", "uid": 0} >2018-06-21 07:21:08,485 p=23396 u=mistral | changed: [compute-0] => (item={'value': {'ceilometer_agent_compute': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/run/libvirt:/var/run/libvirt:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_libvirt_init_secret': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '53912472-747b-11e8-95a3-5254003d7dcb' --base64 'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA=='"], 'user': u'root', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt'], 'detach': False, 'privileged': False}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 
'restart': u'always'}, 'nova_migration_target': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ssh/:/host-ssh/:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_compute': {'ipc': u'host', 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', 
u'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/dev:/dev', u'/lib/modules:/lib/modules:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/libvirt:/var/lib/libvirt', u'/sys/class/net:/sys/class/net', u'/sys/bus/pci:/sys/bus/pci'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "0d417e60cd9c4b580b8889ca2b34ab7a7cd1c84e", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4", "net": "host", "privileged": false, "restart": 
"always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '53912472-747b-11e8-95a3-5254003d7dcb' --base64 'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}}, "md5sum": "43f4c7750111fb2e9d00b850149a8ce7", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6779, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580067.84-115523017480986/source", "state": "file", "uid": 0} >2018-06-21 07:21:08,662 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'swift_container_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 
'restart': u'always'}, 'aodh_evaluator': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', 
u'/run:/run'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'cinder_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_proxy': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'restart': u'always'}, 'neutron_dhcp': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_metadata_agent': {'start_order': 10, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 
'ceilometer_agent_central': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'keystone_refresh': {'action': u'exec', 'start_order': 1, 'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], 'user': u'root'}, 'swift_account_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_notifier': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_consoleauth': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'gnocchi_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_account_reaper': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'ceilometer_agent_notification': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_vnc_proxy': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_rsync': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'aodh_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'environment': 
[u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_metadata': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_engine': {'healthcheck': {'test': 
u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 
'net': u'host', 'restart': u'always'}, 'swift_object_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_l3_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', 
u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_conductor': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'heat_api_cfn': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_api': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 
'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_container_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_expirer': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'heat_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'panko_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_listener': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_api': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 
'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'glance_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'logrotate_crond': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "a1be6aa2d4cc45e104b7c75319745196e636d5d2", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", 
"volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", 
"/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4", "net": "host", "privileged": false, 
"restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4", "net": 
"host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", 
"net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": 
"/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, 
"swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", 
"/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", 
"/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", 
"/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", 
"/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}}, "md5sum": "1f138d32563935823e0ae333e7382fb3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 48375, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580068.01-34394604890934/source", "state": "file", "uid": 0} >2018-06-21 07:21:09,083 p=23396 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580068.45-46370894397110/source", "state": "file", "uid": 0} >2018-06-21 07:21:09,140 p=23396 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580068.49-222161997301439/source", "state": "file", "uid": 0} >2018-06-21 07:21:09,288 p=23396 u=mistral | changed: [controller-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580068.64-72934088140279/source", "state": "file", "uid": 0} >2018-06-21 07:21:09,412 p=23396 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-06-21 07:21:09,801 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:21:09,809 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:21:09,828 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-21 07:21:09,851 p=23396 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-06-21 07:21:10,560 p=23396 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": 
"4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580069.95-28015540436995/source", "state": "file", "uid": 0} >2018-06-21 07:21:10,567 p=23396 u=mistral | changed: [ceph-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580069.95-252549096222492/source", "state": "file", "uid": 0} >2018-06-21 07:21:10,712 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", 
"owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580070.07-10497200413858/source", "state": "file", "uid": 0} >2018-06-21 07:21:11,156 p=23396 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": true, "checksum": "40f9ceb4dd2fc8e9c51bf5152a0fa8e1d16d9137", "dest": "/var/lib/kolla/config_files/iscsid.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "md5sum": "9cd3c2dc0153b127d70141dadfabd12c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 175, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580070.57-64864968057570/source", "state": "file", "uid": 0} >2018-06-21 07:21:11,352 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/keystone.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/keystone.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/keystone.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580070.72-105814517733460/source", "state": "file", "uid": 0} >2018-06-21 07:21:11,772 p=23396 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/libvirtd', 'permissions': [{'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_libvirt.json'}) => {"changed": true, "checksum": "b50cbe1f8b020aa49249248b57310f45005813b3", "dest": "/var/lib/kolla/config_files/nova_libvirt.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_libvirt.json", "value": {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "8356787bbcfcb5674a0bf2570719654a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 512, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580071.17-40815567288276/source", "state": "file", "uid": 0} >2018-06-21 07:21:11,999 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/lib/cinder', 'recurse': True}, {'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_backup.json'}) => {"changed": true, "checksum": "0e697e31bdc439b99552bac9ffe0bab07f2af4a4", "dest": "/var/lib/kolla/config_files/cinder_backup.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_backup.json", "value": {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "8e107eb8f6989be8375a0ff2dd5b4d57", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580071.36-99756123431244/source", "state": "file", "uid": 0} >2018-06-21 07:21:12,373 p=23396 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}, {'dest': u'/etc/ssh/', 'owner': u'root', 'perm': u'0600', 'source': u'/host-ssh/ssh_host_*_key'}], 'command': u'/usr/sbin/sshd -D -p 2022'}, 'key': '/var/lib/kolla/config_files/nova-migration-target.json'}) => {"changed": true, "checksum": "6a0a936a324363cd605e22c2327c17deb6dfbec2", "dest": "/var/lib/kolla/config_files/nova-migration-target.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova-migration-target.json", "value": {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}}, "md5sum": "161558d57b182ca70c6f9bbd7fcbda8a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 258, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580071.78-150788354211133/source", "state": "file", "uid": 0} >2018-06-21 07:21:12,624 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580072.01-73101793425134/source", "state": 
"file", "uid": 0} >2018-06-21 07:21:12,979 p=23396 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf'}, 'key': '/var/lib/kolla/config_files/nova_virtlogd.json'}) => {"changed": true, "checksum": "8bbfe195e54ddfe481aaad9744174f7344d49681", "dest": "/var/lib/kolla/config_files/nova_virtlogd.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_virtlogd.json", "value": {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "786b962e2df778e3ce02b185ef93deac", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 193, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580072.38-22644552248185/source", "state": "file", "uid": 0} >2018-06-21 07:21:13,247 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_auditor.json'}) => {"changed": true, "checksum": "413730fbf3f7935085cfda60cbc1535d8bce0caf", "dest": "/var/lib/kolla/config_files/swift_account_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_auditor.json", "value": {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "dfccd947a56ceb6fa2b71c400281a365", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 
200, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580072.63-199848436878758/source", "state": "file", "uid": 0} >2018-06-21 07:21:13,576 p=23396 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": true, "checksum": "bd1c4f0459f65e7f67a969a89c74a8b8cdcfd9f8", "dest": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "3599cf6b814b7c628c2887996ca46138", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 261, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580072.99-236207169036478/source", "state": "file", "uid": 0} >2018-06-21 07:21:13,875 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_replicator.json'}) => {"changed": true, "checksum": "2bf5ca66cb377c9fa3e6880f8b078d1312470cde", "dest": "/var/lib/kolla/config_files/swift_account_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_replicator.json", "value": {"command": "/usr/bin/swift-account-replicator 
/etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d4a857b7e18f40f1cc1e6fd265c89770", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 203, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580073.25-82459535183646/source", "state": "file", "uid": 0} >2018-06-21 07:21:14,171 p=23396 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/nova-compute ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/var/lib/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_compute.json'}) => {"changed": true, "checksum": "bb1c3bcd199b74791ea32746c08f4925a3b585a2", "dest": "/var/lib/kolla/config_files/nova_compute.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_compute.json", "value": {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/var/lib/nova", "recurse": 
true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "70b809037933259f45bb1585e9e6a4cc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 643, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580073.59-87563439230054/source", "state": "file", "uid": 0} >2018-06-21 07:21:14,530 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-notifier', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_notifier.json'}) => {"changed": true, "checksum": "e01d19d7f7cff24dfcc0d132b7d8ceabba199142", "dest": "/var/lib/kolla/config_files/aodh_notifier.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_notifier.json", "value": {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "5d4a748030a9a7476ccbd8902fb654fc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 244, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580073.88-44857180815594/source", "state": "file", "uid": 0} >2018-06-21 07:21:14,804 p=23396 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log'}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_compute.json'}) => {"changed": true, "checksum": "4b3e97fcd87fd70b35934d1ef908747f302a4d11", "dest": 
"/var/lib/kolla/config_files/ceilometer_agent_compute.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d91832a36a0ad3616a4e78c1af7d0db5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 237, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580074.18-251358962781274/source", "state": "file", "uid": 0} >2018-06-21 07:21:15,141 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-scheduler ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_scheduler.json'}) => {"changed": true, "checksum": "23416bae23a2c08d2c534f76d19f8c4bad40ee92", "dest": "/var/lib/kolla/config_files/nova_scheduler.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_scheduler.json", "value": {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "d00e4198d95dede3f0b6ac351d57a982", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580074.54-103437669783417/source", "state": "file", "uid": 0} >2018-06-21 07:21:15,737 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cron.json'}) => {"changed": true, "checksum": "a13a92b47f931e2e89d7e4bf5057a4307ab9cd45", "dest": "/var/lib/kolla/config_files/heat_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "e671c4783cc86fb2ad300fcd11b2f99b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 240, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580075.15-114545330403835/source", "state": "file", "uid": 0} >2018-06-21 07:21:16,339 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/certs/neutron.crt'}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': 
'/var/lib/kolla/config_files/neutron_dhcp.json'}) => {"changed": true, "checksum": "da289f102f641cdd0a02df41c443d7d8387741a5", "dest": "/var/lib/kolla/config_files/neutron_dhcp.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_dhcp.json", "value": {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}}, "md5sum": "c5975567082648a9da814c433c49f2d6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 875, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580075.75-46347023438193/source", "state": "file", "uid": 0} >2018-06-21 07:21:16,941 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', 'permissions': [{'owner': u'haproxy:haproxy', 'path': u'/var/lib/haproxy', 'recurse': True}, {'owner': u'haproxy:haproxy', 'path': 
u'/etc/pki/tls/certs/haproxy/*', 'optional': True, 'perm': u'0600'}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/private/haproxy/*', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/haproxy.json'}) => {"changed": true, "checksum": "0801385cb9292b3b6eb8440166435242bd90e288", "dest": "/var/lib/kolla/config_files/haproxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/haproxy.json", "value": {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}}, "md5sum": "a2742f7abd50bb0af0a4ba55b2f1f4ff", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 648, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580076.35-71063287555119/source", "state": "file", "uid": 0} >2018-06-21 07:21:17,527 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api_cron.json'}) => {"changed": true, "checksum": "c1a1552a71f4daefebff5234f9d8ba71f4c64d76", "dest": "/var/lib/kolla/config_files/nova_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_api_cron.json", "value": 
{"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "6b8ef057a2e5539eacd9f29fc4b94036", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 240, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580076.95-159648483621390/source", "state": "file", "uid": 0} >2018-06-21 07:21:18,117 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_db_sync.json'}) => {"changed": true, "checksum": "a6d2eb62af2f11437c704d13adf72d498324ce2a", "dest": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "value": {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": 
"d586f0c2ff043bece10efff986d635a3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 531, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580077.54-17100755120996/source", "state": "file", "uid": 0} >2018-06-21 07:21:18,701 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_reaper.json'}) => {"changed": true, "checksum": "b061cf7478060add5d079aafaeae81b445251a8f", "dest": "/var/lib/kolla/config_files/swift_account_reaper.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_reaper.json", "value": {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "0f3bbe74ca95c8cca321ee32e2aff7d1", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580078.12-170399818368756/source", "state": "file", "uid": 0} >2018-06-21 07:21:19,280 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-engine.json'}) => {"changed": true, "checksum": "b7397fff831b47db0b6111663d816a64a389cb25", "dest": "/var/lib/kolla/config_files/sahara-engine.json", "gid": 0, "group": "root", "item": 
{"key": "/var/lib/kolla/config_files/sahara-engine.json", "value": {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "md5sum": "ac2c7a84fc46a1f1d128201ce5b67c2d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 360, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580078.71-6605411654636/source", "state": "file", "uid": 0} >2018-06-21 07:21:19,824 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'redis:redis', 'path': u'/var/run/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/lib/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/log/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/certs/redis.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/private/redis.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/redis.json'}) => {"changed": true, "checksum": "66d6d6bd51aaa0c100cdfc7688267a4342c7859f", "dest": "/var/lib/kolla/config_files/redis.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/redis.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": 
"/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}}, "md5sum": "ceafff1d742633f8759bdb1af0e3ebd4", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 843, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580079.29-74626922175932/source", "state": "file", "uid": 0} >2018-06-21 07:21:20,394 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_vnc_proxy.json'}) => {"changed": true, "checksum": "b64555136537c36af22340fb15f21f0e01ac3495", "dest": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "value": {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": 
"557a4e9522f54cfbd6456516e67f4971", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 271, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580079.83-155827279628486/source", "state": "file", "uid": 0} >2018-06-21 07:21:20,969 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', 'permissions': [{'owner': u'glance:glance', 'path': u'/var/lib/glance', 'recurse': True}, {'owner': u'glance:glance', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/glance_api.json'}) => {"changed": true, "checksum": "2a93405ac579e31c6e5732983f3d7dd8bed55b33", "dest": "/var/lib/kolla/config_files/glance_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/glance_api.json", "value": {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "30c5fe40dffc304e7edeab4019e96e92", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 556, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580080.4-145817786640622/source", "state": "file", 
"uid": 0} >2018-06-21 07:21:21,554 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_auditor.json'}) => {"changed": true, "checksum": "739f6562d3ea24561c6d8bcf37041a9eac928257", "dest": "/var/lib/kolla/config_files/swift_container_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_auditor.json", "value": {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "b63816c7c08aef58249d13b65b387da6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 204, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580080.98-207300453944913/source", "state": "file", "uid": 0} >2018-06-21 07:21:22,147 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-panko/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', 'permissions': [{'owner': u'root:ceilometer', 'path': u'/etc/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) => {"changed": true, "checksum": "98adef088b2ae2648ac88b812890957ec54eff13", "dest": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "value": {"command": 
"/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}}, "md5sum": "4a38c9578181c292891f5f7bdb9f791b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 428, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580081.56-7768564467081/source", "state": "file", "uid": 0} >2018-06-21 07:21:22,723 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_expirer.json'}) => {"changed": true, "checksum": "ebbb7ee6895cea2b9278f33e888881d3d3f1a68a", "dest": "/var/lib/kolla/config_files/swift_object_expirer.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_expirer.json", "value": {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "e4bf891d8ffc9a015be201a6ef0d5abc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580082.16-174923990188576/source", "state": "file", "uid": 0} >2018-06-21 07:21:23,317 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling 
--polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_central.json'}) => {"changed": true, "checksum": "53d52f7d52f0fb3da33de2c20414eb3248593fdd", "dest": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "2863f917d7ada51e9570fb53bb363eed", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 237, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580082.73-24660008304612/source", "state": "file", "uid": 0} >2018-06-21 07:21:23,888 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api.json'}) => {"changed": true, "checksum": "454582321236a137f78205f328bae190c02f06b0", "dest": "/var/lib/kolla/config_files/heat_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "c04ac0476ee6639fadf252b0e9d9649b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580083.33-18866670523456/source", "state": "file", "uid": 0} >2018-06-21 07:21:24,464 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': '/var/lib/kolla/config_files/swift_rsync.json'}) => {"changed": true, "checksum": "44a8f1a58092190d553d3f589cab9ae566f8dc81", "dest": "/var/lib/kolla/config_files/swift_rsync.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_rsync.json", "value": {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "886febadf691905adf0c129f3aa0197a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 200, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580083.9-80060470223358/source", "state": "file", "uid": 0} >2018-06-21 07:21:25,035 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_server.json'}) => {"changed": true, "checksum": "279b64a7d6914d2a03c86c703f53e3d71b1daef1", "dest": "/var/lib/kolla/config_files/swift_account_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_server.json", "value": {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": 
"b41d67c146c800142c5405fe5a0b332e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580084.47-272681979526697/source", "state": "file", "uid": 0} >2018-06-21 07:21:25,616 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api_cron.json'}) => {"changed": true, "checksum": "06055a69fec2bc513b4c86ceb654a5fc29bd0866", "dest": "/var/lib/kolla/config_files/cinder_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "801aba1299d99bfd7e63f66ca7a4ba40", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580085.04-24001972326858/source", "state": "file", "uid": 0} >2018-06-21 07:21:26,193 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_proxy.json'}) => {"changed": true, "checksum": "a0874b803c5238a4eeb12b1265d5d1db93c0d3d4", "dest": "/var/lib/kolla/config_files/swift_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_proxy.json", "value": {"command": 
"/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "a38e4e3ae519b3b0824e19184e521b36", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 195, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580085.62-257450161235592/source", "state": "file", "uid": 0} >2018-06-21 07:21:26,763 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_updater.json'}) => {"changed": true, "checksum": "8dbfc3669a6d79fb30702be502ced7501500480a", "dest": "/var/lib/kolla/config_files/swift_container_updater.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_updater.json", "value": {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "a697319d04392dc572dff6236144571f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 204, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580086.2-221752004929482/source", "state": "file", "uid": 0} >2018-06-21 07:21:27,329 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/xinetd -dontfork'}, 'key': '/var/lib/kolla/config_files/clustercheck.json'}) => {"changed": true, "checksum": "3c87335a28b992f90769aea9ea62fb610f8236f1", "dest": "/var/lib/kolla/config_files/clustercheck.json", "gid": 0, "group": 
"root", "item": {"key": "/var/lib/kolla/config_files/clustercheck.json", "value": {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d74434e7b8bcaca0b227152346c13db8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 165, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580086.77-114102686885870/source", "state": "file", "uid": 0} >2018-06-21 07:21:27,921 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'mysql:mysql', 'path': u'/var/log/mysql', 'recurse': True}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/certs/mysql.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/private/mysql.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/mysql.json'}) => {"changed": true, "checksum": "b52f0d28ed1ac134c64994c08b3f2378e8dff494", "dest": "/var/lib/kolla/config_files/mysql.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/mysql.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": 
"mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}}, "md5sum": "4d15ed291dbe96e88b9a128b0e5c99e9", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 687, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580087.34-268608214697672/source", "state": "file", "uid": 0} >2018-06-21 07:21:28,504 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_placement.json'}) => {"changed": true, "checksum": "d061b71e9106733354c297cbb7b327a22e476de5", "dest": "/var/lib/kolla/config_files/nova_placement.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_placement.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "941db485b7079f2f0e008e1bdff8e45f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580087.93-23212865029663/source", "state": "file", "uid": 0} >2018-06-21 07:21:29,038 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': 
u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-api.json'}) => {"changed": true, "checksum": "fd070eb1bdc97442fddc24f503fe5e3251b89e28", "dest": "/var/lib/kolla/config_files/sahara-api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/sahara-api.json", "value": {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "md5sum": "bd52668d37c227cc00c418bbe889ab90", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 357, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580088.51-202178912071611/source", "state": "file", "uid": 0} >2018-06-21 07:21:29,615 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_api.json'}) => {"changed": true, "checksum": "f4177197cb07127689ae10a60020efa3a5e0d457", "dest": "/var/lib/kolla/config_files/aodh_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "582326e52a94260e71a4a19dc4d75191", "mode": "0600", "owner": 
"root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580089.05-266538395465906/source", "state": "file", "uid": 0} >2018-06-21 07:21:30,204 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'keystone:keystone', 'path': u'/var/log/keystone', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/keystone_cron.json'}) => {"changed": true, "checksum": "815ba71e0584cb12e7d40f794603c6bfb1800626", "dest": "/var/lib/kolla/config_files/keystone_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/keystone_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}}, "md5sum": "b3b3bbd6499e09c424665311a5e66136", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 252, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580089.62-105117610988570/source", "state": "file", "uid": 0} >2018-06-21 07:21:30,794 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": 
"/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580090.21-111696101355130/source", "state": "file", "uid": 0} >2018-06-21 07:21:31,364 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_replicator.json'}) => {"changed": true, "checksum": "659d25615392d81b2f6bc001067232495de4d6ac", "dest": "/var/lib/kolla/config_files/swift_object_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_replicator.json", "value": {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "cdea8a372a87263d5fc44b482867a705", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 201, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580090.8-268488318607580/source", "state": "file", "uid": 0} >2018-06-21 07:21:31,946 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-conductor ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_conductor.json'}) => {"changed": true, "checksum": "01a54792c74d0ebd057e8d0f44e6e8e619283e62", "dest": "/var/lib/kolla/config_files/nova_conductor.json", "gid": 0, "group": 
"root", "item": {"key": "/var/lib/kolla/config_files/nova_conductor.json", "value": {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "ccbba0ad7a926ceca2bf858b8a9cc376", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580091.37-117710031970650/source", "state": "file", "uid": 0} >2018-06-21 07:21:32,536 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cfn.json'}) => {"changed": true, "checksum": "454582321236a137f78205f328bae190c02f06b0", "dest": "/var/lib/kolla/config_files/heat_api_cfn.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api_cfn.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "c04ac0476ee6639fadf252b0e9d9649b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580091.95-199243827757926/source", "state": "file", "uid": 0} >2018-06-21 07:21:33,115 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-api-metadata ', 
'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_metadata.json'}) => {"changed": true, "checksum": "edb529183cc509ea82818edf4d88e3650b5ffc57", "dest": "/var/lib/kolla/config_files/nova_metadata.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_metadata.json", "value": {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "45129bd8b5b9aef067edb558a9fb2c68", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 249, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580092.54-91427540884574/source", "state": "file", "uid": 0} >2018-06-21 07:21:33,700 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": true, "checksum": "bd1c4f0459f65e7f67a969a89c74a8b8cdcfd9f8", "dest": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "3599cf6b814b7c628c2887996ca46138", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 261, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580093.12-53601318201587/source", "state": "file", "uid": 0} >2018-06-21 07:21:34,316 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/lib/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/var/log/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/certs/rabbitmq.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/private/rabbitmq.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/rabbitmq.json'}) => {"changed": true, "checksum": "205ddacf194881a04c54779e3049b3c59ef6c4af", "dest": "/var/lib/kolla/config_files/rabbitmq.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/rabbitmq.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, 
"owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}}, "md5sum": "1097dade2a2355fd51207668004d093d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 792, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580093.71-210002770499931/source", "state": "file", "uid": 0} >2018-06-21 07:21:34,912 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-consoleauth ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_consoleauth.json'}) => {"changed": true, "checksum": "a960878859377dfae6334d9b7eaa9f554ab31798", "dest": "/var/lib/kolla/config_files/nova_consoleauth.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_consoleauth.json", "value": {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "2a66fc646aae3e5913e0598ccef3881f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 248, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580094.32-208264588830695/source", "state": "file", "uid": 0} >2018-06-21 07:21:35,496 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_updater.json'}) => {"changed": true, "checksum": "4f7a34f38afe301f885e25eb10225c461ab1d0b1", "dest": 
"/var/lib/kolla/config_files/swift_object_updater.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_updater.json", "value": {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "71a7e788486d505cfec645da0ac337cd", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 198, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580094.92-73974086989747/source", "state": "file", "uid": 0} >2018-06-21 07:21:36,092 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_api.json'}) => {"changed": true, "checksum": "5a73d3b7ef652341120c9298683d3a26f3fb668b", "dest": "/var/lib/kolla/config_files/neutron_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_api.json", "value": {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "c48346aa3f8c096826ebab378db9dfb9", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 549, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580095.51-125283753892931/source", "state": "file", "uid": 0} >2018-06-21 07:21:36,688 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_scheduler.json'}) => {"changed": true, "checksum": "9ec49193a63036ecf32a1479eabdac05dcab06e0", "dest": "/var/lib/kolla/config_files/cinder_scheduler.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_scheduler.json", "value": {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "93e9da0d08550be0ed30576cefdfbfbb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 340, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580096.1-149493582001381/source", "state": "file", "uid": 0} >2018-06-21 07:21:37,286 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-metricd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_metricd.json'}) => {"changed": true, "checksum": "c8763a8c16702042afe553b54212340d800e1509", "dest": "/var/lib/kolla/config_files/gnocchi_metricd.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_metricd.json", "value": {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "db9bd25aa2fcd2845d442869e986e7d8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 471, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580096.7-266665724608252/source", "state": "file", "uid": 0} >2018-06-21 07:21:37,879 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': 
u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_metadata_agent.json'}) => {"changed": true, "checksum": "fe01b9d48d08f239bbf9acf7e2a1492397180c8e", "dest": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "value": {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "md5sum": "a26f6acfc823d6e2e5b34367b859c8fa", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 617, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580097.29-186840870794626/source", "state": "file", "uid": 0} >2018-06-21 07:21:38,461 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_replicator.json'}) => {"changed": true, "checksum": "a418eddca731078cfd8fe2fda7ee64d9ffaf7dda", "dest": "/var/lib/kolla/config_files/swift_container_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_replicator.json", "value": {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", 
"config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "930bbe0f8c13b55f664fb3a89dfa1613", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 207, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580097.89-117974933296730/source", "state": "file", "uid": 0} >2018-06-21 07:21:39,064 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_engine.json'}) => {"changed": true, "checksum": "fe3989178a2ea434bae6dfd64b04423e3ea005bc", "dest": "/var/lib/kolla/config_files/heat_engine.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_engine.json", "value": {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "aee05ebc54399dde3dfc3577c3431a92", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 322, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580098.47-161785387142588/source", "state": "file", "uid": 0} >2018-06-21 07:21:39,646 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 
'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api.json'}) => {"changed": true, "checksum": "d061b71e9106733354c297cbb7b327a22e476de5", "dest": "/var/lib/kolla/config_files/nova_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "941db485b7079f2f0e008e1bdff8e45f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580099.07-157714052714921/source", "state": "file", "uid": 0} >2018-06-21 07:21:40,236 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', 'permissions': [{'owner': u'swift:swift', 'path': u'/var/cache/swift', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/swift_object_server.json'}) => {"changed": true, "checksum": "460cdcfbcfac45a30b03df89ac84d2f34db64d72", "dest": "/var/lib/kolla/config_files/swift_object_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_server.json", "value": {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}}, "md5sum": "b00c233fd2cd32c68e429e42918b8245", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 285, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580099.65-149783357769010/source", "state": "file", "uid": 0} >2018-06-21 07:21:40,821 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'stunnel /etc/stunnel/stunnel.conf'}, 'key': '/var/lib/kolla/config_files/redis_tls_proxy.json'}) => {"changed": true, "checksum": "80800f9f267aaf3497499af70b7945e3b6ae771b", "dest": "/var/lib/kolla/config_files/redis_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/redis_tls_proxy.json", "value": {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "c45d2764863cc585b994d432412ff9e8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 172, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580100.24-251032478252172/source", "state": "file", "uid": 0} >2018-06-21 07:21:41,413 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_api.json'}) => {"changed": true, "checksum": "39f33531116fbcba7a5d9c1cbbc32f4af5e6b981", "dest": "/var/lib/kolla/config_files/gnocchi_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_api.json", "value": {"command": 
"/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "5e924ffe736d942bf904a791bf5b5af2", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 475, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580100.83-95531693396475/source", "state": "file", "uid": 0} >2018-06-21 07:21:42,005 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api.json'}) => {"changed": true, "checksum": "7f36445e4c6eb403ce919ca3adee771d4cb3bcce", "dest": "/var/lib/kolla/config_files/cinder_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "bb3e2e5741eb3e5b6c53da835e66d00d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 256, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580101.42-166401319916444/source", "state": "file", "uid": 0} >2018-06-21 07:21:42,592 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 
'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_volume.json'}) => {"changed": true, "checksum": "e800a0e1c86f8fa7a41efbf24ce38f48a458ba51", "dest": "/var/lib/kolla/config_files/cinder_volume.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_volume.json", "value": {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "a85ec43ba623807ac022c04663fa68f5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 579, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580102.01-205522844726031/source", "state": "file", "uid": 0} >2018-06-21 07:21:43,176 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'panko:panko', 
'path': u'/var/log/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/panko_api.json'}) => {"changed": true, "checksum": "2db8f01174b9c2aa3a180add472b54891aed5cd6", "dest": "/var/lib/kolla/config_files/panko_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/panko_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}}, "md5sum": "7d9530934c938a4c96f71797957f7ca8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 253, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580102.6-145106791396570/source", "state": "file", "uid": 0} >2018-06-21 07:21:43,763 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_auditor.json'}) => {"changed": true, "checksum": "fbcdad9219733b81ad969426553906c1a8648897", "dest": "/var/lib/kolla/config_files/swift_object_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_auditor.json", "value": {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "45f7348541b64a76aec07477ea1d7358", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 198, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580103.18-253164701705627/source", "state": "file", "uid": 0} >2018-06-21 07:21:44,364 p=23396 u=mistral | changed: [controller-0] => (item={'value': 
{'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_l3_agent.json'}) => {"changed": true, "checksum": "cd233477dc9defd8028ac1a8fe736b8c9fcde9f8", "dest": "/var/lib/kolla/config_files/neutron_l3_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_l3_agent.json", "value": {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "md5sum": "b47a8dc2601f0e1c404b9009d1c99c32", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 634, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580103.77-234717377866990/source", "state": "file", "uid": 0} >2018-06-21 07:21:44,966 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-listener', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_listener.json'}) => {"changed": true, "checksum": "a7135286aba5eb111dc77c913fc1f7dc0977e783", "dest": "/var/lib/kolla/config_files/aodh_listener.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_listener.json", "value": {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "ff2b7ae2bb8061a36a8223f5c34a970b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 244, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580104.37-73403854890447/source", "state": "file", "uid": 0} >2018-06-21 07:21:45,555 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-server /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_server.json'}) => {"changed": true, "checksum": "1f5cc060becbca7be3515f39537993b91e109a6d", "dest": "/var/lib/kolla/config_files/swift_container_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_server.json", "value": {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "59a9944c2c3c07fec0293d2efd7d8082", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 203, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580104.97-94612626386148/source", "state": "file", "uid": 0} >2018-06-21 07:21:46,160 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-evaluator', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_evaluator.json'}) => {"changed": true, "checksum": "596ee1b7f45471d04a0bc3d985f82ad722631b98", "dest": "/var/lib/kolla/config_files/aodh_evaluator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_evaluator.json", "value": {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "94c5432632bf2acca69de0063414183b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 245, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580105.56-20983210664446/source", "state": "file", "uid": 0} >2018-06-21 07:21:46,775 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, 
"md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580106.17-52661983158482/source", "state": "file", "uid": 0} >2018-06-21 07:21:47,380 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": true, "checksum": "40f9ceb4dd2fc8e9c51bf5152a0fa8e1d16d9137", "dest": "/var/lib/kolla/config_files/iscsid.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "md5sum": "9cd3c2dc0153b127d70141dadfabd12c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 175, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580106.77-266322657370373/source", "state": "file", "uid": 0} >2018-06-21 07:21:47,993 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-statsd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_statsd.json'}) => {"changed": true, "checksum": "1a38774f0fed561a8f1ad8c7f0a976a71a7f7008", "dest": "/var/lib/kolla/config_files/gnocchi_statsd.json", 
"gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_statsd.json", "value": {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "b98425b2f26d4e30448a72685b1f89ad", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 470, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580107.39-259996063305682/source", "state": "file", "uid": 0} >2018-06-21 07:21:48,610 p=23396 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'apache:apache', 'path': u'/var/log/horizon/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/etc/openstack-dashboard/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', 'recurse': False}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', 'recurse': False}]}, 'key': '/var/lib/kolla/config_files/horizon.json'}) => {"changed": true, "checksum": "fc55910103403d0bb92e62e940dbd536aff43f84", "dest": "/var/lib/kolla/config_files/horizon.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/horizon.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": 
"apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}}, "md5sum": "77504b6ea1f544f3c70dbc4115bfc354", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 587, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580108.0-82273447005946/source", "state": "file", "uid": 0} >2018-06-21 07:21:48,666 p=23396 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-06-21 07:21:48,678 p=23396 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-21 07:21:48,699 p=23396 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-21 07:21:48,724 p=23396 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-21 07:21:48,747 p=23396 u=mistral | TASK [Write docker-puppet-tasks json files] ************************************ >2018-06-21 07:21:49,372 p=23396 u=mistral | changed: [controller-0] => (item={'value': [{'puppet_tags': u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', 'config_volume': u'keystone_init_tasks', 'step_config': u'include ::tripleo::profile::base::keystone', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4'}], 'key': u'step_3'}) => {"changed": true, "checksum": "730e4e048205e1fadc6cd518326d4622d77edad6", "dest": "/var/lib/docker-puppet/docker-puppet-tasks3.json", "gid": 0, "group": 
"root", "item": {"key": "step_3", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]}, "md5sum": "56e31c6a27d11dc618833f5679009c9d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 397, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580108.79-191642059662368/source", "state": "file", "uid": 0} >2018-06-21 07:21:49,394 p=23396 u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-06-21 07:21:49,423 p=23396 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:21:49,449 p=23396 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:21:49,463 p=23396 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:21:49,484 p=23396 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-06-21 07:21:50,158 p=23396 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580109.52-140924847393806/source", "state": "file", "uid": 0} >2018-06-21 07:21:50,176 p=23396 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": 
"/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580109.55-244827228049397/source", "state": "file", "uid": 0} >2018-06-21 07:21:50,189 p=23396 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529580109.58-146225757533697/source", "state": "file", "uid": 0} >2018-06-21 07:21:50,215 p=23396 u=mistral | TASK [Run puppet host configuration for step 1] ******************************** >2018-06-21 07:22:04,729 p=23396 u=mistral | changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-21 07:22:05,200 p=23396 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-21 07:23:15,763 p=23396 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-21 07:23:15,787 p=23396 u=mistral | TASK [Debug output for task which failed: Run puppet host configuration for step 1] *** >2018-06-21 07:23:15,917 p=23396 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): 
At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.72 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}c1d92fa159fef3afd721be5f86af886d'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/ensure: created", > "Notice: /Stage[main]/Timezone/Exec[update_timezone]/returns: executed successfully", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}8cd5ea7a71047b590f89d618413c6eb5'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > 
"Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val 
changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/ensure: created", > "Notice: /Stage[main]/Pacemaker::Service/Service[pcsd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/password: changed password", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/groups: groups changed '' to ['haclient']", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/ensure: created", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/ensure: defined content as '{md5}a839b1ab3552f629efbcc7aaf42e7964'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Service/Service[corosync]/enable: enable changed 'false' to 'true'", > "Notice: /Stage[main]/Pacemaker::Service/Service[pacemaker]/enable: enable changed 'false' to 'true'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: executed successfully", > "Notice: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Pacemaker::Stonith/Pacemaker::Property[Disable STONITH]/Pcmk_property[property--stonith-enabled]/ensure: created", > "Notice: 
/Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 75.70 seconds", > "Changes:", > " Total: 166", > "Events:", > " Success: 166", > "Resources:", > " Changed: 165", > " Out of sync: 165", > " Total: 216", > " Restarted: 5", > "Time:", > " Filebucket: 0.00", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " File line: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " User: 0.04", > " Sysctl: 0.15", > " File: 0.18", > " Sysctl runtime: 0.20", > " Package: 0.39", > " Pcmk property: 1.00", > " Firewall: 14.75", > " Last run: 1529580195", > " Service: 2.45", > " Config retrieval: 3.16", > " Exec: 53.67", > " Total: 76.02", > "Version:", > " Config: 1529580116", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-21 07:23:15,942 p=23396 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.89 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}c1d92fa159fef3afd721be5f86af886d'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Timezone/Exec[update_timezone]/returns: executed successfully", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: 
/Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}8cd5ea7a71047b590f89d618413c6eb5'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to 
lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt 
ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 nova_migration_target]/Firewall[113 nova_migration_target ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 nova_migration_target]/Firewall[113 nova_migration_target ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 6.06 seconds", > "Changes:", > " Total: 98", > "Events:", > " Success: 98", > "Resources:", > " Total: 141", > " Restarted: 3", > " Out of sync: 98", > " Changed: 98", > "Time:", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Cron: 0.00", > " Anchor: 0.00", > " Schedule: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " Sysctl: 0.06", > " File: 0.14", > " Sysctl runtime: 0.17", > " Package: 0.24", > " Exec: 0.82", > " Service: 1.14", > " Last run: 1529580124", > " Config retrieval: 2.19", > " Firewall: 2.29", > " Total: 7.07", > " Filebucket: 0.00", > "Version:", > " Config: 1529580116", > 
" Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-21 07:23:16,254 p=23396 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 1.87 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}c1d92fa159fef3afd721be5f86af886d'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Timezone/Exec[update_timezone]/returns: executed successfully", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: 
/Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}8cd5ea7a71047b590f89d618413c6eb5'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '65536' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '65536' to '500000'", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo 
interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_osd]/Tripleo::Firewall::Rule[111 ceph_osd]/Firewall[111 ceph_osd ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_osd]/Tripleo::Firewall::Rule[111 ceph_osd]/Firewall[111 ceph_osd ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: 
/Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 6.76 seconds", > "Changes:", > " Total: 92", > "Events:", > " Success: 92", > "Resources:", > " Total: 135", > " Restarted: 3", > " Out of sync: 92", > " Changed: 92", > "Time:", > " Filebucket: 0.00", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " Sysctl: 0.14", > " File: 0.15", > " Sysctl runtime: 0.18", > " Package: 0.23", > " Service: 1.30", > " Firewall: 1.59", > " Exec: 1.96", > " Last run: 1529580125", > " Config retrieval: 2.19", > " Total: 7.77", > "Version:", > " Config: 1529580116", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-21 07:23:16,280 p=23396 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 1] ***************** >2018-06-21 07:23:36,846 p=23396 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:24:07,440 p=23396 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:25:52,280 p=23396 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:25:52,300 p=23396 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 1] *** >2018-06-21 07:25:52,509 p=23396 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-06-21 11:23:17,117 INFO: 2929 -- Running docker-puppet", > "2018-06-21 11:23:17,118 DEBUG: 2929 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-06-21 11:23:17,118 
DEBUG: 2929 -- config_volume ceilometer", > "2018-06-21 11:23:17,118 DEBUG: 2929 -- puppet_tags ceilometer_config", > "2018-06-21 11:23:17,118 DEBUG: 2929 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "", > "2018-06-21 11:23:17,118 DEBUG: 2929 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-21 11:23:17,118 DEBUG: 2929 -- volumes []", > "2018-06-21 11:23:17,118 DEBUG: 2929 -- Adding new service", > "2018-06-21 11:23:17,118 DEBUG: 2929 -- config_volume neutron", > "2018-06-21 11:23:17,119 DEBUG: 2929 -- puppet_tags neutron_plugin_ml2", > "2018-06-21 11:23:17,119 DEBUG: 2929 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "2018-06-21 11:23:17,119 DEBUG: 2929 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-21 11:23:17,119 DEBUG: 2929 -- volumes []", > "2018-06-21 11:23:17,119 DEBUG: 2929 -- Adding new service", > "2018-06-21 11:23:17,119 DEBUG: 2929 -- config_volume neutron", > "2018-06-21 11:23:17,119 DEBUG: 2929 -- puppet_tags neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-06-21 11:23:17,119 DEBUG: 2929 -- manifest include ::tripleo::profile::base::neutron::ovs", > "2018-06-21 11:23:17,119 DEBUG: 2929 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-06-21 11:23:17,119 DEBUG: 2929 -- Existing service, appending puppet tags and manifest", > "2018-06-21 11:23:17,119 DEBUG: 2929 -- config_volume iscsid", > "2018-06-21 11:23:17,119 DEBUG: 2929 -- puppet_tags iscsid_config", > "2018-06-21 11:23:17,119 DEBUG: 2929 -- manifest include ::tripleo::profile::base::iscsid", > "2018-06-21 11:23:17,119 DEBUG: 2929 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-21 11:23:17,119 DEBUG: 2929 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-06-21 11:23:17,119 DEBUG: 2929 -- config_volume nova_libvirt", > "2018-06-21 11:23:17,119 DEBUG: 2929 -- 
puppet_tags nova_config,nova_paste_api_ini", > "2018-06-21 11:23:17,119 DEBUG: 2929 -- manifest # TODO(emilien): figure how to deal with libvirt profile.", > "# We'll probably treat it like we do with Neutron plugins.", > "# Until then, just include it in the default nova-compute role.", > "include tripleo::profile::base::nova::compute::libvirt", > "include ::tripleo::profile::base::database::mysql::client", > "2018-06-21 11:23:17,120 DEBUG: 2929 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-21 11:23:17,120 DEBUG: 2929 -- volumes []", > "2018-06-21 11:23:17,120 DEBUG: 2929 -- Adding new service", > "2018-06-21 11:23:17,120 DEBUG: 2929 -- config_volume nova_libvirt", > "2018-06-21 11:23:17,120 DEBUG: 2929 -- puppet_tags libvirtd_config,nova_config,file,libvirt_tls_password", > "2018-06-21 11:23:17,120 DEBUG: 2929 -- manifest include tripleo::profile::base::nova::libvirt", > "2018-06-21 11:23:17,120 DEBUG: 2929 -- Existing service, appending puppet tags and manifest", > "2018-06-21 11:23:17,120 DEBUG: 2929 -- puppet_tags ", > "2018-06-21 11:23:17,120 DEBUG: 2929 -- manifest include ::tripleo::profile::base::sshd", > "include tripleo::profile::base::nova::migration::target", > "2018-06-21 11:23:17,120 DEBUG: 2929 -- config_volume crond", > "2018-06-21 11:23:17,120 DEBUG: 2929 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-21 11:23:17,120 DEBUG: 2929 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-21 11:23:17,121 DEBUG: 2929 -- Adding new service", > "2018-06-21 11:23:17,121 INFO: 2929 -- Service compilation completed.", > "2018-06-21 11:23:17,121 DEBUG: 2929 -- - [u'ceilometer', u'file,file_line,concat,augeas,cron,ceilometer_config', u'include ::tripleo::profile::base::ceilometer::agent::polling\\n', u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', []]", > "2018-06-21 11:23:17,121 DEBUG: 2929 -- - [u'nova_libvirt', 
u'file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password', u\"# TODO(emilien): figure how to deal with libvirt profile.\\n# We'll probably treat it like we do with Neutron plugins.\\n# Until then, just include it in the default nova-compute role.\\ninclude tripleo::profile::base::nova::compute::libvirt\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::libvirt\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::sshd\\ninclude tripleo::profile::base::nova::migration::target\", u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', []]", > "2018-06-21 11:23:17,121 DEBUG: 2929 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', []]", > "2018-06-21 11:23:17,121 DEBUG: 2929 -- - [u'neutron', u'file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2', u'include ::tripleo::profile::base::neutron::plugins::ml2\\n\\ninclude ::tripleo::profile::base::neutron::ovs\\n', u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']]", > "2018-06-21 11:23:17,121 DEBUG: 2929 -- - [u'iscsid', u'file,file_line,concat,augeas,cron,iscsid_config', u'include ::tripleo::profile::base::iscsid', u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', [u'/etc/iscsi:/etc/iscsi']]", > "2018-06-21 11:23:17,121 INFO: 2929 -- Starting multiprocess configuration steps. 
Using 3 processes.", > "2018-06-21 11:23:17,133 INFO: 2930 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-21 11:23:17,133 DEBUG: 2930 -- config_volume ceilometer", > "2018-06-21 11:23:17,133 INFO: 2931 -- Starting configuration of nova_libvirt using image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-21 11:23:17,133 DEBUG: 2930 -- puppet_tags file,file_line,concat,augeas,cron,ceilometer_config", > "2018-06-21 11:23:17,133 DEBUG: 2931 -- config_volume nova_libvirt", > "2018-06-21 11:23:17,133 DEBUG: 2930 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "2018-06-21 11:23:17,133 DEBUG: 2931 -- puppet_tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password", > "2018-06-21 11:23:17,134 DEBUG: 2930 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-21 11:23:17,134 DEBUG: 2931 -- manifest # TODO(emilien): figure how to deal with libvirt profile.", > "include tripleo::profile::base::nova::libvirt", > "include ::tripleo::profile::base::sshd", > "2018-06-21 11:23:17,134 DEBUG: 2930 -- volumes []", > "2018-06-21 11:23:17,134 DEBUG: 2931 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-21 11:23:17,134 DEBUG: 2931 -- volumes []", > "2018-06-21 11:23:17,135 INFO: 2932 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-21 11:23:17,135 DEBUG: 2932 -- config_volume crond", > "2018-06-21 11:23:17,135 DEBUG: 2932 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-06-21 11:23:17,135 DEBUG: 2932 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-21 11:23:17,135 DEBUG: 2932 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-21 11:23:17,135 DEBUG: 2932 -- volumes 
[]", > "2018-06-21 11:23:17,136 INFO: 2930 -- Removing container: docker-puppet-ceilometer", > "2018-06-21 11:23:17,136 INFO: 2931 -- Removing container: docker-puppet-nova_libvirt", > "2018-06-21 11:23:17,138 INFO: 2932 -- Removing container: docker-puppet-crond", > "2018-06-21 11:23:17,225 INFO: 2932 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-21 11:23:17,227 INFO: 2930 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-21 11:23:17,229 INFO: 2931 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-21 11:23:29,818 DEBUG: 2932 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "e0f71f706c2a: Pulling fs layer", > "121ab4741000: Pulling fs layer", > "a8ff0031dfcb: Pulling fs layer", > "a94d9ea04263: Pulling fs layer", > "a94d9ea04263: Waiting", > "121ab4741000: Download complete", > "a8ff0031dfcb: Verifying Checksum", > "a8ff0031dfcb: Download complete", > "a94d9ea04263: Verifying Checksum", > "a94d9ea04263: Download complete", > "e0f71f706c2a: Verifying Checksum", > "e0f71f706c2a: Download complete", > "e0f71f706c2a: Pull complete", > "121ab4741000: Pull complete", > "a8ff0031dfcb: Pull complete", > "a94d9ea04263: Pull complete", > "Digest: sha256:cbc58f1f133447db6c3e634ca05251825f6a2ede8528959b5cd6e0cb1c3de3ba", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-21 11:23:29,822 DEBUG: 2932 -- NET_HOST enabled", > "2018-06-21 11:23:29,822 DEBUG: 2932 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpimhGXc:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z 
--volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-21 11:23:36,916 DEBUG: 2930 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-central ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-central", > "c66228eb2ac7: Pulling fs layer", > "333aa6b2b383: Pulling fs layer", > "1eb9ef5adcb4: Pulling fs layer", > "c66228eb2ac7: Waiting", > "333aa6b2b383: Waiting", > "1eb9ef5adcb4: Waiting", > "c66228eb2ac7: Verifying Checksum", > "c66228eb2ac7: Download complete", > "333aa6b2b383: Download complete", > "1eb9ef5adcb4: Verifying Checksum", > "1eb9ef5adcb4: Download complete", > "c66228eb2ac7: Pull complete", > "333aa6b2b383: Pull complete", > "1eb9ef5adcb4: Pull complete", > "Digest: sha256:3f638e03aaf1d7e303183e06ff1627a5a0efeaef228a7be1e9667ae62d7d6a1b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-21 11:23:36,920 DEBUG: 2930 -- NET_HOST enabled", > "2018-06-21 11:23:36,920 DEBUG: 2930 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-ceilometer --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config --env NAME=ceilometer --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume 
/etc/localtime:/etc/localtime:ro --volume /tmp/tmpX14NS6:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-21 11:23:38,478 DEBUG: 2932 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 0.48 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}13ae5d5b43716a32da6855edd3f15758'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.04 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Changed: 2", > " Out of sync: 2", > " Skipped: 7", > " Total: 9", > "Time:", > " File: 0.01", > " Cron: 0.01", > " Config retrieval: 0.57", > " Total: 0.59", > " Last run: 1529580217", > "Version:", > " Config: 1529580216", > " Puppet: 4.8.2", > "Gathering files modified after 2018-06-21 11:23:30.174351358 +0000", > "2018-06-21 11:23:38,478 DEBUG: 2932 
-- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=compute-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "Failed to get D-Bus connection: Operation not permitted", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files 
modified after 2018-06-21 11:23:30.174351358 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/crond --mtime=1970-01-01", > "+ awk '{print $1}'", > "tar: Removing leading `/' from member names", > "+ md5sum", > "+ tar -c -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-06-21 11:23:38,479 INFO: 2932 -- Removing container: docker-puppet-crond", > "2018-06-21 11:23:38,528 DEBUG: 2932 -- docker-puppet-crond", > "2018-06-21 11:23:38,529 INFO: 2932 -- Finished processing puppet configs for crond", > "2018-06-21 11:23:38,530 INFO: 2932 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-21 11:23:38,530 DEBUG: 2932 -- config_volume neutron", > "2018-06-21 11:23:38,530 DEBUG: 2932 -- puppet_tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-06-21 11:23:38,530 DEBUG: 2932 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "include ::tripleo::profile::base::neutron::ovs", > "2018-06-21 11:23:38,530 DEBUG: 2932 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-21 11:23:38,530 DEBUG: 2932 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-06-21 11:23:38,531 INFO: 2932 -- Removing container: docker-puppet-neutron", > "2018-06-21 11:23:38,635 INFO: 2932 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-21 11:23:43,707 DEBUG: 2932 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-server ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-server", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "ea1d509b6f44: Pulling fs layer", > "e9f9993bb931: Pulling fs layer", > "e9f9993bb931: Verifying Checksum", > "e9f9993bb931: Download complete", > "ea1d509b6f44: Verifying Checksum", > "ea1d509b6f44: Download complete", > "ea1d509b6f44: Pull complete", > "e9f9993bb931: Pull complete", > "Digest: sha256:af12594500608f07f8d38590e2c9b2983e5d81ae8b63aec042f36411b0e76adc", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-21 11:23:43,713 DEBUG: 2932 -- NET_HOST enabled", > "2018-06-21 11:23:43,713 DEBUG: 2932 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-neutron --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 --env NAME=neutron --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpUTxdpF:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-21 11:23:46,241 DEBUG: 2930 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.12 seconds", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[database/event_time_to_live]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[database/metering_time_to_live]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/filter_project]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/archive_policy]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/resources_definition_file]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_name]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/interface]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created", > "Notice: Applied catalog in 1.23 seconds", > " Total: 29", > " Success: 29", > " Total: 141", > " Skipped: 22", > " Out of sync: 
29", > " Changed: 29", > " Resources: 0.00", > " Ceilometer config: 1.11", > " Config retrieval: 1.32", > " Last run: 1529580225", > " Total: 2.43", > " Config: 1529580222", > "Gathering files modified after 2018-06-21 11:23:37.159494286 +0000", > "2018-06-21 11:23:46,241 DEBUG: 2930 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,ceilometer_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,ceilometer_config'", > "+ origin_of_time=/var/lib/config-data/ceilometer.origin_of_time", > "+ touch /var/lib/config-data/ceilometer.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,ceilometer_config /etc/config.pp", > "Warning: ModuleLoader: module 'ceilometer' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/config.pp\", 35]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer.pp\", 111]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > "Warning: ModuleLoader: module 'oslo' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/ceilometer", > "++ stat -c %y /var/lib/config-data/ceilometer.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:23:37.159494286 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/ceilometer", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/ceilometer", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/ceilometer.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/ceilometer --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/ceilometer --mtime=1970-01-01", > "2018-06-21 11:23:46,241 INFO: 2930 -- Removing container: docker-puppet-ceilometer", > "2018-06-21 11:23:46,301 DEBUG: 2930 -- docker-puppet-ceilometer", > "2018-06-21 11:23:46,301 INFO: 2930 -- Finished processing puppet configs for ceilometer", > "2018-06-21 11:23:46,302 INFO: 2930 -- Starting configuration of iscsid using image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-21 11:23:46,302 DEBUG: 2930 -- config_volume iscsid", > "2018-06-21 11:23:46,302 DEBUG: 2930 -- puppet_tags file,file_line,concat,augeas,cron,iscsid_config", > "2018-06-21 11:23:46,302 DEBUG: 2930 -- manifest include ::tripleo::profile::base::iscsid", > "2018-06-21 11:23:46,302 DEBUG: 2930 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-21 11:23:46,302 DEBUG: 2930 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-06-21 11:23:46,302 INFO: 2930 -- Removing container: docker-puppet-iscsid", > "2018-06-21 11:23:46,395 INFO: 2930 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-21 11:23:47,070 DEBUG: 2930 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-iscsid ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-iscsid", > "ab4eae34093d: Pulling fs layer", > "ab4eae34093d: Verifying Checksum", > "ab4eae34093d: Download complete", > "ab4eae34093d: Pull complete", > "Digest: sha256:a46aa93fee87b0f173118da5c2a18dc271772adb839a481ec07f2a53534ac53c", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-21 11:23:47,073 DEBUG: 2930 -- NET_HOST enabled", > "2018-06-21 11:23:47,073 DEBUG: 2930 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-iscsid --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpLGICwF:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/iscsi:/etc/iscsi --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-21 11:23:51,513 DEBUG: 2931 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-compute ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-compute", > "0e3031608420: Pulling fs layer", > "9c13697fe587: Pulling fs layer", > "0e3031608420: Waiting", > "9c13697fe587: Waiting", > "0e3031608420: Verifying Checksum", > "0e3031608420: Download complete", > "9c13697fe587: Verifying Checksum", > "9c13697fe587: Download complete", > "0e3031608420: Pull complete", > "9c13697fe587: Pull complete", > "Digest: sha256:c6b75506ba5602b470f8dbfdcc57e0bcd20fc363d265aa234469343e439fa65a", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-21 11:23:51,517 DEBUG: 2931 -- NET_HOST enabled", > "2018-06-21 11:23:51,517 DEBUG: 2931 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova_libvirt --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp_kg69r:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-21 11:23:53,663 DEBUG: 2932 -- Notice: hiera(): Cannot load backend module_data: 
cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.36 seconds", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/tunnel_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/vxlan_udp_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: 
/Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_password]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_userid]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created", > "Notice: Applied catalog in 0.78 seconds", > " Total: 48", > " Success: 48", > " Total: 174", > " Skipped: 27", > " Out of sync: 48", > " Changed: 48", > " File: 0.00", > " Neutron plugin ml2: 0.03", > " Neutron agent 
ovs: 0.06", > " Neutron config: 0.46", > " Last run: 1529580232", > " Config retrieval: 2.58", > " Total: 3.12", > " Config: 1529580229", > "Gathering files modified after 2018-06-21 11:23:43.955629658 +0000", > "2018-06-21 11:23:53,664 DEBUG: 2932 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2'", > "+ origin_of_time=/var/lib/config-data/neutron.origin_of_time", > "+ touch /var/lib/config-data/neutron.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 /etc/config.pp", > "Warning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: Scope(Class[Neutron]): neutron::rabbit_host, neutron::rabbit_hosts, neutron::rabbit_password, neutron::rabbit_port, neutron::rabbit_user, neutron::rabbit_virtual_host and neutron::rpc_backend are deprecated. Please use neutron::default_transport_url instead.", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/neutron/manifests/init.pp\", 530]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/plugins/ml2.pp\", 45]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/neutron/manifests/config.pp\", 132]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp\", 141]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 219]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/ovs.pp\", 59]", > "+ rsync_srcs+=' /var/www'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/neutron", > "++ stat -c %y /var/lib/config-data/neutron.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:23:43.955629658 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/neutron", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/neutron", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/neutron.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/neutron --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/neutron --mtime=1970-01-01", > "2018-06-21 11:23:53,664 INFO: 2932 -- Removing container: docker-puppet-neutron", > "2018-06-21 11:23:53,705 DEBUG: 2932 -- docker-puppet-neutron", > "2018-06-21 11:23:53,705 INFO: 2932 -- Finished processing puppet configs for neutron", > "2018-06-21 11:23:54,008 DEBUG: 2930 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 0.46 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created", > " Total: 10", > " Skipped: 8", > " Exec: 0.02", > " Config retrieval: 0.59", > " Total: 0.61", > " Last run: 
1529580233", > " Config: 1529580232", > "Gathering files modified after 2018-06-21 11:23:47.311695186 +0000", > "2018-06-21 11:23:54,009 DEBUG: 2930 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,iscsid_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,iscsid_config'", > "+ origin_of_time=/var/lib/config-data/iscsid.origin_of_time", > "+ touch /var/lib/config-data/iscsid.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,iscsid_config /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/iscsid", > "++ stat -c %y /var/lib/config-data/iscsid.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:23:47.311695186 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/iscsid", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/iscsid", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/iscsid.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/iscsid --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/iscsid --mtime=1970-01-01", > "2018-06-21 11:23:54,009 INFO: 2930 -- Removing container: docker-puppet-iscsid", > "2018-06-21 11:23:54,047 DEBUG: 2930 -- docker-puppet-iscsid", > "2018-06-21 11:23:54,047 INFO: 2930 -- Finished processing puppet configs for iscsid", > "2018-06-21 11:24:07,672 DEBUG: 2931 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.33 seconds", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{md5}056b96e7e8124e1bc55f77cba4e68ce7' to '{md5}a5a5f8a3e1fda6c42681ae00f4ddf02d'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{md5}09c4fa846e8e27bfa3ab3325900d63ea' to '{md5}2f138c0278e1b666ec77a6d8ba3054a1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{md5}dff145cb4e519333c0096aae8de2e77c' to '{md5}0a97037bb44fd64d20c1ae93194fa091'", > "Notice: /Stage[main]/Nova::Db/Nova_config[api_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Nova_config[placement_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[api/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/image_service]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[cinder/catalog_info]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[os_vif_linux_bridge/use_ipv6]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_api_faults]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created", > "Notice: 
/Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/os_interface]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/heal_instance_info_cache_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/vncserver_proxyclient_address]/ensure: created", > 
"Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/keymap]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[glance/verify_glance_signatures]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created", > "Notice: 
/Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Libvirtd_config[listen_tls]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Libvirtd_config[listen_tcp]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{md5}cfce3c4aa78e4e5b779d7deebcbeb575'", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/vncserver_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created", > "Notice: 
/Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_group]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[auth_unix_ro]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[auth_unix_rw]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_ro_perms]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_rw_perms]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: 
created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}40d961cd3154f0439fcac1a50bd77b96' to '{md5}8f163e7f432aae0a353d7c09f9c0b750'", > "Notice: Applied catalog in 7.48 seconds", > " Total: 103", > " Success: 103", > " Changed: 103", > " Out of sync: 103", > " Total: 313", > " Skipped: 47", > " Concat file: 0.00", > " Concat fragment: 0.00", > " File line: 0.00", > " Libvirtd config: 0.02", > " File: 0.03", > " Package: 0.08", > " Augeas: 0.61", > " Last run: 1529580246", > " Config retrieval: 2.66", > " Nova config: 6.42", > " Total: 9.83", > " Config: 1529580236", > "Gathering files modified after 2018-06-21 11:23:51.710779799 +0000", > "2018-06-21 11:24:07,672 DEBUG: 2931 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password'", > "+ origin_of_time=/var/lib/config-data/nova_libvirt.origin_of_time", > "+ touch 
/var/lib/config-data/nova_libvirt.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password /etc/config.pp", > "ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute.pp\", 59]", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/config.pp\", 37]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 114]", > "Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release", > "Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/db.pp\", 126]:[\"/etc/puppet/modules/nova/manifests/init.pp\", 530]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 533]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute.pp\", 59]", > " with Stdlib::Compat::Bool. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/placement.pp\", 101]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 138]", > "Warning: Scope(Class[Nova::Placement]): The os_region_name parameter is deprecated and will be removed \\", > "in a future release. Please use region_name instead.", > "Warning: Unknown variable: '::nova::vncproxy::host'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:31:5", > "Warning: Unknown variable: '::nova::vncproxy::vncproxy_protocol'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:36:5", > "Warning: Unknown variable: '::nova::vncproxy::port'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:41:5", > "Warning: Unknown variable: '::nova::vncproxy::vncproxy_path'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:46:5", > "Warning: Unknown variable: '::nova::compute::pci_passthrough'. at /etc/puppet/modules/nova/manifests/compute/pci.pp:19:38", > "Warning: Unknown variable: '::nova::api::default_floating_pool'. at /etc/puppet/modules/nova/manifests/network/neutron.pp:112:38", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/compute/libvirt.pp\", 278]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute/libvirt.pp\", 33]", > " with Stdlib::Compat::Ip_Address. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/migration/target.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/migration/target.pp\", 56]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "Warning: Exec[set libvirt sasl credentials](provider=posix): Cannot understand environment setting \"TLS_PASSWORD=\"", > "+ rsync_srcs+=' /var/lib/nova/.ssh'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/nova/.ssh /var/lib/config-data/nova_libvirt", > "++ stat -c %y /var/lib/config-data/nova_libvirt.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:23:51.710779799 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova_libvirt", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova_libvirt", > "++ find /etc /root /opt /var/spool/cron /var/lib/nova/.ssh -newer /var/lib/config-data/nova_libvirt.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/nova_libvirt --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/nova_libvirt --mtime=1970-01-01", > "2018-06-21 11:24:07,672 INFO: 2931 -- Removing container: docker-puppet-nova_libvirt", > "2018-06-21 11:24:07,715 DEBUG: 2931 -- docker-puppet-nova_libvirt", > "2018-06-21 11:24:07,715 INFO: 2931 -- Finished processing puppet configs for nova_libvirt", > "2018-06-21 11:24:07,716 DEBUG: 2929 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-06-21 11:24:07,716 DEBUG: 2929 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-06-21 11:24:07,719 DEBUG: 2929 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-21 11:24:07,719 DEBUG: 2929 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-21 11:24:07,719 DEBUG: 2929 -- Updating config hash for neutron_ovs_bridge, config_volume=iscsid 
hash=36fbc1cede03a4eca918dcd53b1c5f14", > "2018-06-21 11:24:07,719 DEBUG: 2929 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-21 11:24:07,719 DEBUG: 2929 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-21 11:24:07,719 DEBUG: 2929 -- Updating config hash for nova_libvirt, config_volume=iscsid hash=7790ee9fc3b6830620da6ad90e59225d", > "2018-06-21 11:24:07,720 DEBUG: 2929 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-21 11:24:07,720 DEBUG: 2929 -- Updating config hash for nova_virtlogd, config_volume=iscsid hash=7790ee9fc3b6830620da6ad90e59225d", > "2018-06-21 11:24:07,722 DEBUG: 2929 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-21 11:24:07,722 DEBUG: 2929 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-21 11:24:07,722 DEBUG: 2929 -- Updating config hash for ceilometer_agent_compute, config_volume=iscsid hash=6bdd86c68de76bf63e1ff30bd16e16c8", > "2018-06-21 11:24:07,722 DEBUG: 2929 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt/etc", > "2018-06-21 11:24:07,722 DEBUG: 2929 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-21 11:24:07,722 DEBUG: 2929 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-21 11:24:07,723 DEBUG: 
2929 -- Updating config hash for neutron_ovs_agent, config_volume=iscsid hash=36fbc1cede03a4eca918dcd53b1c5f14", > "2018-06-21 11:24:07,723 DEBUG: 2929 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-21 11:24:07,723 DEBUG: 2929 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-21 11:24:07,723 DEBUG: 2929 -- Updating config hash for nova_migration_target, config_volume=iscsid hash=7790ee9fc3b6830620da6ad90e59225d", > "2018-06-21 11:24:07,723 DEBUG: 2929 -- Updating config hash for nova_compute, config_volume=iscsid hash=7790ee9fc3b6830620da6ad90e59225d", > "2018-06-21 11:24:07,723 DEBUG: 2929 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-21 11:24:07,723 DEBUG: 2929 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-21 11:24:07,723 DEBUG: 2929 -- Updating config hash for logrotate_crond, config_volume=iscsid hash=f4bb95a3a3639b04976a36bd4464fd87" > ] >} >2018-06-21 07:25:52,523 p=23396 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-06-21 11:23:17,150 INFO: 30964 -- Running docker-puppet", > "2018-06-21 11:23:17,150 DEBUG: 30964 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-06-21 11:23:17,150 DEBUG: 30964 -- config_volume crond", > "2018-06-21 11:23:17,150 DEBUG: 30964 -- puppet_tags ", > "2018-06-21 11:23:17,150 DEBUG: 30964 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-21 11:23:17,150 DEBUG: 30964 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-21 11:23:17,151 DEBUG: 
30964 -- volumes []", > "2018-06-21 11:23:17,151 DEBUG: 30964 -- Adding new service", > "2018-06-21 11:23:17,151 INFO: 30964 -- Service compilation completed.", > "2018-06-21 11:23:17,152 DEBUG: 30964 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', []]", > "2018-06-21 11:23:17,152 INFO: 30964 -- Starting multiprocess configuration steps. Using 3 processes.", > "2018-06-21 11:23:17,162 INFO: 30965 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-21 11:23:17,163 DEBUG: 30965 -- config_volume crond", > "2018-06-21 11:23:17,163 DEBUG: 30965 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-06-21 11:23:17,163 DEBUG: 30965 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-21 11:23:17,163 DEBUG: 30965 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-21 11:23:17,163 DEBUG: 30965 -- volumes []", > "2018-06-21 11:23:17,165 INFO: 30965 -- Removing container: docker-puppet-crond", > "2018-06-21 11:23:17,252 INFO: 30965 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-21 11:23:29,739 DEBUG: 30965 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "e0f71f706c2a: Pulling fs layer", > "121ab4741000: Pulling fs layer", > "a8ff0031dfcb: Pulling fs layer", > "a94d9ea04263: Pulling fs layer", > "a94d9ea04263: Waiting", > "121ab4741000: Verifying Checksum", > "121ab4741000: Download complete", > "a94d9ea04263: Verifying Checksum", > "a94d9ea04263: Download complete", > "a8ff0031dfcb: Verifying Checksum", > "a8ff0031dfcb: Download complete", > "e0f71f706c2a: Verifying Checksum", > "e0f71f706c2a: Download complete", > "e0f71f706c2a: Pull complete", > "121ab4741000: Pull complete", > "a8ff0031dfcb: Pull complete", > "a94d9ea04263: Pull complete", > "Digest: sha256:cbc58f1f133447db6c3e634ca05251825f6a2ede8528959b5cd6e0cb1c3de3ba", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "", > "2018-06-21 11:23:29,742 DEBUG: 30965 -- NET_HOST enabled", > "2018-06-21 11:23:29,742 DEBUG: 30965 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=ceph-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpVoYFBR:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-21 11:23:37,074 DEBUG: 30965 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 0.56 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}13ae5d5b43716a32da6855edd3f15758'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.04 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Changed: 2", > " Out of sync: 2", > " Skipped: 7", > " Total: 9", > "Time:", > " File: 0.00", > " Cron: 0.01", > " Config retrieval: 0.63", > " Total: 0.64", > " Last run: 1529580216", > "Version:", > " Config: 1529580215", > " Puppet: 4.8.2", > "Gathering files modified after 2018-06-21 11:23:29.974882749 +0000", > "2018-06-21 11:23:37,074 DEBUG: 30965 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=ceph-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > 
"Failed to get D-Bus connection: Operation not permitted", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:23:29.974882749 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/crond --mtime=1970-01-01", > "+ md5sum", > "+ awk '{print $1}'", > "tar: Removing leading `/' from member names", > "+ tar -c -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-06-21 11:23:37,074 INFO: 30965 -- Removing container: docker-puppet-crond", > "2018-06-21 11:23:37,120 DEBUG: 30965 -- docker-puppet-crond", > "2018-06-21 11:23:37,120 INFO: 30965 -- Finished processing puppet configs 
for crond", > "2018-06-21 11:23:37,122 DEBUG: 30964 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-06-21 11:23:37,122 DEBUG: 30964 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-06-21 11:23:37,125 DEBUG: 30964 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-21 11:23:37,126 DEBUG: 30964 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-21 11:23:37,126 DEBUG: 30964 -- Updating config hash for logrotate_crond, config_volume=crond hash=3cd7b28ed74c7f2392b4522ab6db6dd7" > ] >} >2018-06-21 07:25:53,353 p=23396 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-06-21 11:23:17,102 INFO: 36347 -- Running docker-puppet", > "2018-06-21 11:23:17,102 DEBUG: 36347 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-06-21 11:23:17,103 DEBUG: 36347 -- config_volume aodh", > "2018-06-21 11:23:17,103 DEBUG: 36347 -- puppet_tags aodh_api_paste_ini,aodh_config", > "2018-06-21 11:23:17,103 DEBUG: 36347 -- manifest include tripleo::profile::base::aodh::api", > "", > "include ::tripleo::profile::base::database::mysql::client", > "2018-06-21 11:23:17,103 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-21 11:23:17,103 DEBUG: 36347 -- volumes []", > "2018-06-21 11:23:17,103 DEBUG: 36347 -- Adding new service", > "2018-06-21 11:23:17,103 DEBUG: 36347 -- puppet_tags aodh_config", > "2018-06-21 11:23:17,103 DEBUG: 36347 -- manifest include tripleo::profile::base::aodh::evaluator", > "2018-06-21 11:23:17,104 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-21 11:23:17,104 DEBUG: 36347 -- volumes []", > "2018-06-21 
11:23:17,104 DEBUG: 36347 -- Existing service, appending puppet tags and manifest", > "2018-06-21 11:23:17,104 DEBUG: 36347 -- config_volume aodh", > "2018-06-21 11:23:17,104 DEBUG: 36347 -- puppet_tags aodh_config", > "2018-06-21 11:23:17,104 DEBUG: 36347 -- manifest include tripleo::profile::base::aodh::listener", > "2018-06-21 11:23:17,104 DEBUG: 36347 -- manifest include tripleo::profile::base::aodh::notifier", > "2018-06-21 11:23:17,104 DEBUG: 36347 -- config_volume ceilometer", > "2018-06-21 11:23:17,104 DEBUG: 36347 -- puppet_tags ceilometer_config", > "2018-06-21 11:23:17,104 DEBUG: 36347 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "2018-06-21 11:23:17,104 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-21 11:23:17,105 DEBUG: 36347 -- Adding new service", > "2018-06-21 11:23:17,105 DEBUG: 36347 -- config_volume ceilometer", > "2018-06-21 11:23:17,105 DEBUG: 36347 -- puppet_tags ceilometer_config", > "2018-06-21 11:23:17,105 DEBUG: 36347 -- manifest include ::tripleo::profile::base::ceilometer::agent::notification", > "2018-06-21 11:23:17,105 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-21 11:23:17,105 DEBUG: 36347 -- volumes []", > "2018-06-21 11:23:17,105 DEBUG: 36347 -- Existing service, appending puppet tags and manifest", > "2018-06-21 11:23:17,105 DEBUG: 36347 -- config_volume cinder", > "2018-06-21 11:23:17,105 DEBUG: 36347 -- puppet_tags cinder_config,file,concat,file_line", > "2018-06-21 11:23:17,105 DEBUG: 36347 -- manifest include ::tripleo::profile::base::cinder::api", > "2018-06-21 11:23:17,105 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-21 11:23:17,105 DEBUG: 36347 -- manifest include ::tripleo::profile::base::cinder::backup::ceph", > "2018-06-21 11:23:17,106 DEBUG: 36347 -- manifest include 
::tripleo::profile::base::cinder::scheduler", > "2018-06-21 11:23:17,106 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-21 11:23:17,106 DEBUG: 36347 -- volumes []", > "2018-06-21 11:23:17,106 DEBUG: 36347 -- Existing service, appending puppet tags and manifest", > "2018-06-21 11:23:17,106 DEBUG: 36347 -- config_volume cinder", > "2018-06-21 11:23:17,106 DEBUG: 36347 -- puppet_tags cinder_config,file,concat,file_line", > "2018-06-21 11:23:17,106 DEBUG: 36347 -- manifest include ::tripleo::profile::base::lvm", > "include ::tripleo::profile::base::cinder::volume", > "2018-06-21 11:23:17,106 DEBUG: 36347 -- config_volume clustercheck", > "2018-06-21 11:23:17,106 DEBUG: 36347 -- puppet_tags file", > "2018-06-21 11:23:17,106 DEBUG: 36347 -- manifest include ::tripleo::profile::pacemaker::clustercheck", > "2018-06-21 11:23:17,106 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-21 11:23:17,106 DEBUG: 36347 -- Adding new service", > "2018-06-21 11:23:17,106 DEBUG: 36347 -- config_volume glance_api", > "2018-06-21 11:23:17,106 DEBUG: 36347 -- puppet_tags glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config", > "2018-06-21 11:23:17,106 DEBUG: 36347 -- manifest include ::tripleo::profile::base::glance::api", > "2018-06-21 11:23:17,106 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-21 11:23:17,107 DEBUG: 36347 -- config_volume gnocchi", > "2018-06-21 11:23:17,107 DEBUG: 36347 -- puppet_tags gnocchi_api_paste_ini,gnocchi_config", > "2018-06-21 11:23:17,107 DEBUG: 36347 -- manifest include ::tripleo::profile::base::gnocchi::api", > "2018-06-21 11:23:17,107 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-21 11:23:17,107 DEBUG: 36347 -- volumes []", > "2018-06-21 11:23:17,107 DEBUG: 36347 -- Adding new service", > "2018-06-21 11:23:17,107 
DEBUG: 36347 -- puppet_tags gnocchi_config", > "2018-06-21 11:23:17,107 DEBUG: 36347 -- manifest include ::tripleo::profile::base::gnocchi::metricd", > "2018-06-21 11:23:17,107 DEBUG: 36347 -- Existing service, appending puppet tags and manifest", > "2018-06-21 11:23:17,107 DEBUG: 36347 -- manifest include ::tripleo::profile::base::gnocchi::statsd", > "2018-06-21 11:23:17,107 DEBUG: 36347 -- config_volume haproxy", > "2018-06-21 11:23:17,107 DEBUG: 36347 -- puppet_tags haproxy_config", > "2018-06-21 11:23:17,108 DEBUG: 36347 -- manifest exec {'wait-for-settle': command => '/bin/true' }", > "class tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}", > "['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::pacemaker::haproxy_bundle", > "2018-06-21 11:23:17,108 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-21 11:23:17,108 DEBUG: 36347 -- volumes [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']", > "2018-06-21 11:23:17,108 DEBUG: 36347 -- Adding new service", > "2018-06-21 11:23:17,108 DEBUG: 36347 -- config_volume heat_api", > "2018-06-21 11:23:17,108 DEBUG: 36347 -- puppet_tags heat_config,file,concat,file_line", > "2018-06-21 11:23:17,108 DEBUG: 36347 -- manifest include ::tripleo::profile::base::heat::api", > "2018-06-21 11:23:17,108 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-21 11:23:17,108 DEBUG: 36347 -- volumes []", > "2018-06-21 11:23:17,108 
DEBUG: 36347 -- config_volume heat_api_cfn", > "2018-06-21 11:23:17,108 DEBUG: 36347 -- manifest include ::tripleo::profile::base::heat::api_cfn", > "2018-06-21 11:23:17,108 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-21 11:23:17,108 DEBUG: 36347 -- config_volume heat", > "2018-06-21 11:23:17,108 DEBUG: 36347 -- manifest include ::tripleo::profile::base::heat::engine", > "2018-06-21 11:23:17,109 DEBUG: 36347 -- volumes []", > "2018-06-21 11:23:17,109 DEBUG: 36347 -- Adding new service", > "2018-06-21 11:23:17,109 DEBUG: 36347 -- config_volume horizon", > "2018-06-21 11:23:17,109 DEBUG: 36347 -- puppet_tags horizon_config", > "2018-06-21 11:23:17,109 DEBUG: 36347 -- manifest include ::tripleo::profile::base::horizon", > "2018-06-21 11:23:17,109 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-21 11:23:17,109 DEBUG: 36347 -- config_volume iscsid", > "2018-06-21 11:23:17,109 DEBUG: 36347 -- puppet_tags iscsid_config", > "2018-06-21 11:23:17,109 DEBUG: 36347 -- manifest include ::tripleo::profile::base::iscsid", > "2018-06-21 11:23:17,109 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-21 11:23:17,109 DEBUG: 36347 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-06-21 11:23:17,109 DEBUG: 36347 -- config_volume keystone", > "2018-06-21 11:23:17,109 DEBUG: 36347 -- puppet_tags keystone_config,keystone_domain_config", > "2018-06-21 11:23:17,109 DEBUG: 36347 -- manifest ['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::base::keystone", > "2018-06-21 11:23:17,109 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-21 11:23:17,110 DEBUG: 36347 -- config_volume memcached", > "2018-06-21 11:23:17,110 DEBUG: 36347 
-- puppet_tags file", > "2018-06-21 11:23:17,110 DEBUG: 36347 -- manifest include ::tripleo::profile::base::memcached", > "2018-06-21 11:23:17,110 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-21 11:23:17,110 DEBUG: 36347 -- volumes []", > "2018-06-21 11:23:17,110 DEBUG: 36347 -- Adding new service", > "2018-06-21 11:23:17,110 DEBUG: 36347 -- config_volume mysql", > "2018-06-21 11:23:17,110 DEBUG: 36347 -- manifest ['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }", > "exec {'wait-for-settle': command => '/bin/true' }", > "include ::tripleo::profile::pacemaker::database::mysql_bundle", > "2018-06-21 11:23:17,110 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-21 11:23:17,110 DEBUG: 36347 -- config_volume neutron", > "2018-06-21 11:23:17,110 DEBUG: 36347 -- puppet_tags neutron_config,neutron_api_config", > "2018-06-21 11:23:17,110 DEBUG: 36347 -- manifest include tripleo::profile::base::neutron::server", > "2018-06-21 11:23:17,110 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-21 11:23:17,111 DEBUG: 36347 -- puppet_tags neutron_plugin_ml2", > "2018-06-21 11:23:17,111 DEBUG: 36347 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "2018-06-21 11:23:17,111 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-21 11:23:17,111 DEBUG: 36347 -- volumes []", > "2018-06-21 11:23:17,111 DEBUG: 36347 -- Existing service, appending puppet tags and manifest", > "2018-06-21 11:23:17,111 DEBUG: 36347 -- config_volume neutron", > "2018-06-21 11:23:17,111 DEBUG: 36347 -- puppet_tags neutron_config,neutron_dhcp_agent_config", > "2018-06-21 11:23:17,111 DEBUG: 36347 -- manifest include tripleo::profile::base::neutron::dhcp", > "2018-06-21 11:23:17,111 DEBUG: 36347 -- 
puppet_tags neutron_config,neutron_l3_agent_config", > "2018-06-21 11:23:17,111 DEBUG: 36347 -- manifest include tripleo::profile::base::neutron::l3", > "2018-06-21 11:23:17,111 DEBUG: 36347 -- puppet_tags neutron_config,neutron_metadata_agent_config", > "2018-06-21 11:23:17,111 DEBUG: 36347 -- manifest include tripleo::profile::base::neutron::metadata", > "2018-06-21 11:23:17,112 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-21 11:23:17,112 DEBUG: 36347 -- volumes []", > "2018-06-21 11:23:17,112 DEBUG: 36347 -- Existing service, appending puppet tags and manifest", > "2018-06-21 11:23:17,112 DEBUG: 36347 -- config_volume neutron", > "2018-06-21 11:23:17,112 DEBUG: 36347 -- puppet_tags neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-06-21 11:23:17,112 DEBUG: 36347 -- manifest include ::tripleo::profile::base::neutron::ovs", > "2018-06-21 11:23:17,112 DEBUG: 36347 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-06-21 11:23:17,112 DEBUG: 36347 -- config_volume nova", > "2018-06-21 11:23:17,112 DEBUG: 36347 -- puppet_tags nova_config", > "2018-06-21 11:23:17,112 DEBUG: 36347 -- manifest ['Nova_cell_v2'].each |String $val| { noop_resource($val) }", > "include tripleo::profile::base::nova::api", > "2018-06-21 11:23:17,112 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-21 11:23:17,112 DEBUG: 36347 -- Adding new service", > "2018-06-21 11:23:17,112 DEBUG: 36347 -- manifest include tripleo::profile::base::nova::conductor", > "2018-06-21 11:23:17,113 DEBUG: 36347 -- puppet_tags nova_config", > "2018-06-21 11:23:17,113 DEBUG: 36347 -- manifest include tripleo::profile::base::nova::consoleauth", > "2018-06-21 11:23:17,113 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-21 11:23:17,113 DEBUG: 36347 -- volumes []", > "2018-06-21 11:23:17,113 DEBUG: 36347 -- 
Existing service, appending puppet tags and manifest", > "2018-06-21 11:23:17,113 DEBUG: 36347 -- config_volume nova_placement", > "2018-06-21 11:23:17,113 DEBUG: 36347 -- manifest include tripleo::profile::base::nova::placement", > "2018-06-21 11:23:17,113 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-21 11:23:17,113 DEBUG: 36347 -- Adding new service", > "2018-06-21 11:23:17,113 DEBUG: 36347 -- config_volume nova", > "2018-06-21 11:23:17,113 DEBUG: 36347 -- manifest include tripleo::profile::base::nova::scheduler", > "2018-06-21 11:23:17,113 DEBUG: 36347 -- manifest include tripleo::profile::base::nova::vncproxy", > "2018-06-21 11:23:17,114 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-21 11:23:17,114 DEBUG: 36347 -- volumes []", > "2018-06-21 11:23:17,114 DEBUG: 36347 -- Existing service, appending puppet tags and manifest", > "2018-06-21 11:23:17,114 DEBUG: 36347 -- config_volume crond", > "2018-06-21 11:23:17,114 DEBUG: 36347 -- puppet_tags ", > "2018-06-21 11:23:17,114 DEBUG: 36347 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-21 11:23:17,114 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-21 11:23:17,114 DEBUG: 36347 -- Adding new service", > "2018-06-21 11:23:17,114 DEBUG: 36347 -- config_volume panko", > "2018-06-21 11:23:17,114 DEBUG: 36347 -- puppet_tags panko_api_paste_ini,panko_config", > "2018-06-21 11:23:17,114 DEBUG: 36347 -- manifest include tripleo::profile::base::panko::api", > "2018-06-21 11:23:17,114 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-21 11:23:17,114 DEBUG: 36347 -- config_volume rabbitmq", > "2018-06-21 11:23:17,114 DEBUG: 36347 -- puppet_tags file", > "2018-06-21 11:23:17,114 DEBUG: 36347 -- manifest ['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }", > 
"include ::tripleo::profile::base::rabbitmq", > "2018-06-21 11:23:17,114 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-21 11:23:17,114 DEBUG: 36347 -- config_volume redis", > "2018-06-21 11:23:17,114 DEBUG: 36347 -- puppet_tags exec", > "2018-06-21 11:23:17,115 DEBUG: 36347 -- manifest include ::tripleo::profile::pacemaker::database::redis_bundle", > "2018-06-21 11:23:17,115 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-21 11:23:17,115 DEBUG: 36347 -- volumes []", > "2018-06-21 11:23:17,115 DEBUG: 36347 -- Adding new service", > "2018-06-21 11:23:17,115 DEBUG: 36347 -- config_volume sahara", > "2018-06-21 11:23:17,115 DEBUG: 36347 -- puppet_tags sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-06-21 11:23:17,115 DEBUG: 36347 -- manifest include ::tripleo::profile::base::sahara::api", > "2018-06-21 11:23:17,115 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-21 11:23:17,115 DEBUG: 36347 -- puppet_tags sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-06-21 11:23:17,115 DEBUG: 36347 -- manifest include ::tripleo::profile::base::sahara::engine", > "2018-06-21 11:23:17,115 DEBUG: 36347 -- Existing service, appending puppet tags and manifest", > "2018-06-21 11:23:17,115 DEBUG: 36347 -- config_volume swift", > "2018-06-21 11:23:17,115 DEBUG: 36347 -- puppet_tags swift_config,swift_proxy_config,swift_keymaster_config", > "2018-06-21 11:23:17,115 DEBUG: 36347 -- manifest include ::tripleo::profile::base::swift::proxy", > "2018-06-21 11:23:17,115 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-21 11:23:17,116 DEBUG: 36347 -- volumes []", > "2018-06-21 11:23:17,116 DEBUG: 36347 -- Adding new service", > "2018-06-21 11:23:17,116 DEBUG: 36347 -- config_volume 
swift_ringbuilder", > "2018-06-21 11:23:17,116 DEBUG: 36347 -- puppet_tags exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball", > "2018-06-21 11:23:17,116 DEBUG: 36347 -- manifest include ::tripleo::profile::base::swift::ringbuilder", > "2018-06-21 11:23:17,116 DEBUG: 36347 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-21 11:23:17,116 DEBUG: 36347 -- config_volume swift", > "2018-06-21 11:23:17,116 DEBUG: 36347 -- puppet_tags swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server", > "2018-06-21 11:23:17,116 DEBUG: 36347 -- manifest include ::tripleo::profile::base::swift::storage", > "class xinetd() {}", > "2018-06-21 11:23:17,116 DEBUG: 36347 -- Existing service, appending puppet tags and manifest", > "2018-06-21 11:23:17,116 INFO: 36347 -- Service compilation completed.", > "2018-06-21 11:23:17,117 DEBUG: 36347 -- - [u'nova_placement', u'file,file_line,concat,augeas,cron,nova_config', u'include tripleo::profile::base::nova::placement\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', []]", > "2018-06-21 11:23:17,117 DEBUG: 36347 -- - [u'aodh', u'file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config', u'include tripleo::profile::base::aodh::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::evaluator\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::listener\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::notifier\\n\\ninclude 
::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', []]", > "2018-06-21 11:23:17,117 DEBUG: 36347 -- - [u'heat_api', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::api\\n', u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', []]", > "2018-06-21 11:23:17,117 DEBUG: 36347 -- - [u'swift_ringbuilder', u'file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball', u'include ::tripleo::profile::base::swift::ringbuilder', u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', []]", > "2018-06-21 11:23:17,117 DEBUG: 36347 -- - [u'sahara', u'file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template', u'include ::tripleo::profile::base::sahara::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::sahara::engine\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', []]", > "2018-06-21 11:23:17,117 DEBUG: 36347 -- - [u'mysql', u'file,file_line,concat,augeas,cron,file', u\"['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }\\nexec {'wait-for-settle': command => '/bin/true' }\\ninclude ::tripleo::profile::pacemaker::database::mysql_bundle\", u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', []]", > "2018-06-21 11:23:17,117 DEBUG: 36347 -- - [u'gnocchi', u'file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config', u'include 
::tripleo::profile::base::gnocchi::api\\n\\ninclude ::tripleo::profile::base::gnocchi::metricd\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::gnocchi::statsd\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', []]", > "2018-06-21 11:23:17,117 DEBUG: 36347 -- - [u'clustercheck', u'file,file_line,concat,augeas,cron,file', u'include ::tripleo::profile::pacemaker::clustercheck', u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', []]", > "2018-06-21 11:23:17,117 DEBUG: 36347 -- - [u'redis', u'file,file_line,concat,augeas,cron,exec', u'include ::tripleo::profile::pacemaker::database::redis_bundle', u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', []]", > "2018-06-21 11:23:17,117 DEBUG: 36347 -- - [u'nova', u'file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config', u\"['Nova_cell_v2'].each |String $val| { noop_resource($val) }\\ninclude tripleo::profile::base::nova::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::conductor\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::consoleauth\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::scheduler\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::vncproxy\\n\\ninclude ::tripleo::profile::base::database::mysql::client\", u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', []]", > "2018-06-21 11:23:17,117 DEBUG: 36347 -- - [u'iscsid', u'file,file_line,concat,augeas,cron,iscsid_config', u'include ::tripleo::profile::base::iscsid', u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', [u'/etc/iscsi:/etc/iscsi']]", > "2018-06-21 11:23:17,117 DEBUG: 36347 -- - [u'glance_api', 
u'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config', u'include ::tripleo::profile::base::glance::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', []]", > "2018-06-21 11:23:17,117 DEBUG: 36347 -- - [u'keystone', u'file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config', u\"['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::base::keystone\\n\\ninclude ::tripleo::profile::base::database::mysql::client\", u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', []]", > "2018-06-21 11:23:17,117 DEBUG: 36347 -- - [u'memcached', u'file,file_line,concat,augeas,cron,file', u'include ::tripleo::profile::base::memcached\\n', u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', []]", > "2018-06-21 11:23:17,118 DEBUG: 36347 -- - [u'panko', u'file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config', u'include tripleo::profile::base::panko::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', []]", > "2018-06-21 11:23:17,118 DEBUG: 36347 -- - [u'heat', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::engine\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', []]", > "2018-06-21 11:23:17,118 DEBUG: 36347 -- - [u'cinder', u'file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line', u'include ::tripleo::profile::base::cinder::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude 
::tripleo::profile::base::cinder::backup::ceph\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::cinder::scheduler\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::lvm\\ninclude ::tripleo::profile::base::cinder::volume\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', []]", > "2018-06-21 11:23:17,118 DEBUG: 36347 -- - [u'swift', u'file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server', u'include ::tripleo::profile::base::swift::proxy\\n\\ninclude ::tripleo::profile::base::swift::storage\\n\\nclass xinetd() {}', u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', []]", > "2018-06-21 11:23:17,118 DEBUG: 36347 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', []]", > "2018-06-21 11:23:17,118 DEBUG: 36347 -- - [u'haproxy', u'file,file_line,concat,augeas,cron,haproxy_config', u\"exec {'wait-for-settle': command => '/bin/true' }\\nclass tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}\\n['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::pacemaker::haproxy_bundle\", u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', 
u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']]", > "2018-06-21 11:23:17,118 DEBUG: 36347 -- - [u'ceilometer', u'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', u'include ::tripleo::profile::base::ceilometer::agent::polling\\n\\ninclude ::tripleo::profile::base::ceilometer::agent::notification\\n', u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', []]", > "2018-06-21 11:23:17,118 DEBUG: 36347 -- - [u'rabbitmq', u'file,file_line,concat,augeas,cron,file', u\"['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::base::rabbitmq\\n\", u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', []]", > "2018-06-21 11:23:17,118 DEBUG: 36347 -- - [u'neutron', u'file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2', u'include tripleo::profile::base::neutron::server\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::neutron::plugins::ml2\\n\\ninclude tripleo::profile::base::neutron::dhcp\\n\\ninclude tripleo::profile::base::neutron::l3\\n\\ninclude tripleo::profile::base::neutron::metadata\\n\\ninclude ::tripleo::profile::base::neutron::ovs\\n', u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']]", > "2018-06-21 11:23:17,118 DEBUG: 36347 -- - [u'horizon', u'file,file_line,concat,augeas,cron,horizon_config', u'include ::tripleo::profile::base::horizon\\n', u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', []]", > "2018-06-21 11:23:17,118 DEBUG: 36347 -- - [u'heat_api_cfn', 
u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::api_cfn\\n', u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4', []]", > "2018-06-21 11:23:17,118 INFO: 36347 -- Starting multiprocess configuration steps. Using 3 processes.", > "2018-06-21 11:23:17,129 INFO: 36348 -- Starting configuration of nova_placement using image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-21 11:23:17,130 DEBUG: 36348 -- config_volume nova_placement", > "2018-06-21 11:23:17,129 INFO: 36349 -- Starting configuration of swift_ringbuilder using image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-21 11:23:17,130 DEBUG: 36348 -- puppet_tags file,file_line,concat,augeas,cron,nova_config", > "2018-06-21 11:23:17,130 DEBUG: 36348 -- manifest include tripleo::profile::base::nova::placement", > "2018-06-21 11:23:17,130 INFO: 36350 -- Starting configuration of gnocchi using image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-21 11:23:17,130 DEBUG: 36349 -- config_volume swift_ringbuilder", > "2018-06-21 11:23:17,130 DEBUG: 36348 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-21 11:23:17,130 DEBUG: 36350 -- config_volume gnocchi", > "2018-06-21 11:23:17,130 DEBUG: 36349 -- puppet_tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball", > "2018-06-21 11:23:17,130 DEBUG: 36348 -- volumes []", > "2018-06-21 11:23:17,130 DEBUG: 36350 -- puppet_tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config", > "2018-06-21 11:23:17,130 DEBUG: 36349 -- manifest include ::tripleo::profile::base::swift::ringbuilder", > "2018-06-21 
11:23:17,130 DEBUG: 36349 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-21 11:23:17,130 DEBUG: 36350 -- manifest include ::tripleo::profile::base::gnocchi::api", > "include ::tripleo::profile::base::gnocchi::metricd", > "include ::tripleo::profile::base::gnocchi::statsd", > "2018-06-21 11:23:17,130 DEBUG: 36349 -- volumes []", > "2018-06-21 11:23:17,130 DEBUG: 36350 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-21 11:23:17,130 DEBUG: 36350 -- volumes []", > "2018-06-21 11:23:17,131 INFO: 36348 -- Removing container: docker-puppet-nova_placement", > "2018-06-21 11:23:17,131 INFO: 36349 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-06-21 11:23:17,131 INFO: 36350 -- Removing container: docker-puppet-gnocchi", > "2018-06-21 11:23:17,223 INFO: 36350 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-21 11:23:17,224 INFO: 36348 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-21 11:23:17,227 INFO: 36349 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-21 11:23:36,859 DEBUG: 36349 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server", > "e0f71f706c2a: Pulling fs layer", > "121ab4741000: Pulling fs layer", > "a8ff0031dfcb: Pulling fs layer", > "c66228eb2ac7: Pulling fs layer", > "a98c7da29d65: Pulling fs layer", > "c4603b657b73: Pulling fs layer", > "c66228eb2ac7: Waiting", > "a98c7da29d65: Waiting", > "c4603b657b73: Waiting", > "121ab4741000: Verifying Checksum", > "121ab4741000: Download complete", > "c66228eb2ac7: Verifying Checksum", > "c66228eb2ac7: Download complete", > "a8ff0031dfcb: Verifying Checksum", > "a8ff0031dfcb: Download complete", > "a98c7da29d65: Verifying Checksum", > "a98c7da29d65: Download complete", > "e0f71f706c2a: Verifying Checksum", > "e0f71f706c2a: Download complete", > "c4603b657b73: Verifying Checksum", > "c4603b657b73: Download complete", > "e0f71f706c2a: Pull complete", > "121ab4741000: Pull complete", > "a8ff0031dfcb: Pull complete", > "c66228eb2ac7: Pull complete", > "a98c7da29d65: Pull complete", > "c4603b657b73: Pull complete", > "Digest: sha256:632f29598f1ea7b96a5573d0b5a942b3a1f571783804cdc07dac0910e97d1a87", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-21 11:23:36,863 DEBUG: 36349 -- NET_HOST enabled", > "2018-06-21 11:23:36,863 DEBUG: 36349 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-swift_ringbuilder --env PUPPET_TAGS=file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball --env NAME=swift_ringbuilder --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpYRgDkB:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-21 11:23:40,758 DEBUG: 36348 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-placement-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-placement-api", > "0e3031608420: Pulling fs layer", > "dd9c4679b681: Pulling fs layer", > "0e3031608420: Waiting", > "dd9c4679b681: Waiting", > "dd9c4679b681: Verifying Checksum", > "dd9c4679b681: Download complete", > "0e3031608420: Verifying Checksum", > "0e3031608420: Download complete", > "0e3031608420: Pull complete", > "dd9c4679b681: Pull complete", > "Digest: sha256:2336d644bd74c35fe7e050376f6d7a1b718ae6faf3556cf63917aceecdf581b6", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-21 11:23:40,763 DEBUG: 36348 -- NET_HOST enabled", > "2018-06-21 11:23:40,763 DEBUG: 36348 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova_placement --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config --env NAME=nova_placement --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpIR1_49:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-21 11:23:43,635 DEBUG: 36350 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-gnocchi-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-gnocchi-api", > "64612d8109ce: Pulling fs layer", > "2d8b51759f9c: Pulling fs layer", > "64612d8109ce: Waiting", > "2d8b51759f9c: Waiting", > "2d8b51759f9c: Verifying Checksum", > "2d8b51759f9c: Download complete", > "64612d8109ce: Download complete", > "64612d8109ce: Pull complete", > "2d8b51759f9c: Pull complete", > "Digest: sha256:0824e3fa2c22ac0acb43883a29cce2fbdf54a9cce722e559cc5c6325e46c2142", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-21 11:23:43,638 DEBUG: 36350 -- NET_HOST enabled", > "2018-06-21 11:23:43,638 DEBUG: 36350 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-gnocchi --env PUPPET_TAGS=file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config --env NAME=gnocchi --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpoej0im:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-21 11:23:50,588 DEBUG: 36349 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.16 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[fetch_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[extract_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[extract_swift_ring_tarball]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Swift/File[/var/lib/swift]/group: group changed 'root' to 'swift'", > "Notice: /Stage[main]/Swift/File[/etc/swift/swift.conf]/owner: owner changed 'root' to 'swift'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[object]/Exec[create_object]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[account]/Exec[create_account]/returns: 
executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[container]/Exec[create_container]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.17:%PORT%/d1]/Ring_object_device[172.17.4.17:6000/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.17:%PORT%/d1]/Ring_container_device[172.17.4.17:6001/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.17:%PORT%/d1]/Ring_account_device[172.17.4.17:6002/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[object]/Exec[rebalance_object]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[account]/Exec[rebalance_account]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[container]/Exec[rebalance_container]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[create_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[create_swift_ring_tarball]: Triggered 'refresh' from 3 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[upload_swift_ring_tarball]: Triggered 'refresh' from 2 events", > "Notice: Applied catalog in 4.66 seconds", > "Changes:", > " Total: 11", > "Events:", > " Success: 11", > "Resources:", > " Changed: 11", > " Out of sync: 11", > " Skipped: 19", > " Total: 36", > " Restarted: 6", > "Time:", > " File: 0.00", > " Ring object device: 0.54", > " Ring container device: 0.57", > " 
Ring account device: 0.58", > " Config retrieval: 1.33", > " Exec: 1.39", > " Last run: 1529580229", > " Total: 4.42", > "Version:", > " Config: 1529580223", > " Puppet: 4.8.2", > "Gathering files modified after 2018-06-21 11:23:37.214993659 +0000", > "2018-06-21 11:23:50,589 DEBUG: 36349 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball'", > "+ origin_of_time=/var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ touch /var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=controller-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball /etc/config.pp", > "Failed to get D-Bus connection: Operation not permitted", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute 
'/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "Warning: ModuleLoader: module 'swift' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/ringbuilder.pp\", 113]:[\"/etc/config.pp\", 2]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/ringbuilder/create.pp\", 44]:", > "Warning: Unexpected line: Ring file /etc/swift/object.ring.gz not found, probably it hasn't been written yet", > "Warning: Unexpected line: Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta", > "Warning: Unexpected line: There are no devices in this ring, or all devices have been deleted", > "Warning: Unexpected line: Ring file /etc/swift/container.ring.gz not found, probably it hasn't been written yet", > "Warning: Unexpected line: Ring file /etc/swift/account.ring.gz not found, probably it hasn't been written yet", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d 
/var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ rsync_srcs+=' /var/www'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/swift_ringbuilder", > "++ stat -c %y /var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:23:37.214993659 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/swift_ringbuilder", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/swift_ringbuilder", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/swift_ringbuilder.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/swift_ringbuilder --mtime=1970-01-01", > "+ md5sum", > "+ awk '{print $1}'", > "tar: Removing leading `/' from member names", > "+ tar -c -f - /var/lib/config-data/puppet-generated/swift_ringbuilder --mtime=1970-01-01", > "2018-06-21 11:23:50,589 INFO: 36349 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-06-21 11:23:50,645 DEBUG: 36349 -- docker-puppet-swift_ringbuilder", > "2018-06-21 11:23:50,646 INFO: 36349 -- Finished processing puppet configs for swift_ringbuilder", > "2018-06-21 11:23:50,647 INFO: 36349 -- Starting configuration of sahara using image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-21 11:23:50,647 DEBUG: 36349 -- config_volume sahara", > "2018-06-21 11:23:50,647 DEBUG: 36349 -- puppet_tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-06-21 11:23:50,647 DEBUG: 36349 -- manifest include ::tripleo::profile::base::sahara::api", > "include 
::tripleo::profile::base::sahara::engine", > "2018-06-21 11:23:50,647 DEBUG: 36349 -- config_image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-21 11:23:50,647 DEBUG: 36349 -- volumes []", > "2018-06-21 11:23:50,647 INFO: 36349 -- Removing container: docker-puppet-sahara", > "2018-06-21 11:23:50,715 INFO: 36349 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-21 11:23:53,188 DEBUG: 36349 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-sahara-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-sahara-api", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "6c5f7e9a0fe8: Pulling fs layer", > "5f67eb984180: Pulling fs layer", > "5f67eb984180: Verifying Checksum", > "5f67eb984180: Download complete", > "6c5f7e9a0fe8: Verifying Checksum", > "6c5f7e9a0fe8: Download complete", > "6c5f7e9a0fe8: Pull complete", > "5f67eb984180: Pull complete", > "Digest: sha256:702a41a4d211978832441c041a232227b3d2484d71ef01a8bf7d5332091587a5", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-21 11:23:53,192 DEBUG: 36349 -- NET_HOST enabled", > "2018-06-21 11:23:53,192 DEBUG: 36349 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-sahara --env PUPPET_TAGS=file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template --env NAME=sahara --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpyXZ6Qb:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z 
--volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-21 11:23:56,023 DEBUG: 36350 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.00 seconds", > "Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{md5}9da85e58f3bd6c780ce76db603b7f028'", > "Notice: /Stage[main]/Apache::Mod::Mime_magic/File[mime_magic.conf]/ensure: defined content as '{md5}b258529b332429e2ff8344f726a95457'", > "Notice: /Stage[main]/Apache::Mod::Alias/File[alias.conf]/ensure: defined content as '{md5}983e865be85f5e0daaed7433db82995e'", > "Notice: /Stage[main]/Apache::Mod::Autoindex/File[autoindex.conf]/ensure: defined content as '{md5}2421a3c6df32c7e38c2a7a22afdf5728'", > "Notice: /Stage[main]/Apache::Mod::Deflate/File[deflate.conf]/ensure: defined content as '{md5}a045d750d819b1e9dae3fbfb3f20edd5'", > "Notice: /Stage[main]/Apache::Mod::Dir/File[dir.conf]/ensure: defined content as '{md5}c741d8ea840e6eb999d739eed47c69d7'", > "Notice: /Stage[main]/Apache::Mod::Negotiation/File[negotiation.conf]/ensure: defined content as '{md5}47284b5580b986a6ba32580b6ffb9fd7'", > "Notice: /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{md5}c7ede4173da1915b7ec088201f030c28'", > "Notice: 
/Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as '{md5}f58b0483b70b4e73b5f67ff37b8f24a0'", > "Notice: /Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{md5}fa95c477a2085c1f7f17ee5f8eccfb90'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Gnocchi::Db/Gnocchi_config[indexer/url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Gnocchi_config[api/max_limit]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Gnocchi_config[api/auth_mode]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage/Gnocchi_config[storage/coordination_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Incoming::Redis/Gnocchi_config[incoming/driver]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Incoming::Redis/Gnocchi_config[incoming/redis_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/driver]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_username]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_keyring]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_pool]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_conffile]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Metricd/Gnocchi_config[metricd/workers]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Metricd/Gnocchi_config[metricd/metric_processing_delay]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/resource_id]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/archive_policy_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/flush_delay]/ensure: created", > "Notice: 
/Stage[main]/Gnocchi::Logging/Oslo::Log[gnocchi_config]/Gnocchi_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Logging/Oslo::Log[gnocchi_config]/Gnocchi_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Policy/Oslo::Policy[gnocchi_config]/Gnocchi_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Oslo::Middleware[gnocchi_config]/Gnocchi_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: 
/Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}3cb292a5545de9f30e5168d05f41a649'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{md5}c6d1bc1fdbcb93bbd2596e4703f4108c' to '{md5}ac42062d69afa9d2671492ce0be87b7b'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{md5}785d35cb285e190d589163b45263ca89'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{md5}26e5d44aae258b3e9d821cbbbd3e2826'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{md5}0e8468ecc1265f8947b8725f4d1be9c0'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{md5}d1045f54d2798499ca0f030ca0eef920'", > "Notice: /Stage[main]/Apache::Mod::Actions/Apache::Mod[actions]/File[actions.load]/ensure: defined content as '{md5}599866dfaf734f60f7e2d41ee8235515'", > "Notice: /Stage[main]/Apache::Mod::Authn_core/Apache::Mod[authn_core]/File[authn_core.load]/ensure: defined content as '{md5}704d6e8b02b0eca0eba4083960d16c52'", > "Notice: /Stage[main]/Apache::Mod::Cache/Apache::Mod[cache]/File[cache.load]/ensure: defined content as '{md5}01e4d392225b518a65b0f7d6c4e21d29'", > "Notice: /Stage[main]/Apache::Mod::Ext_filter/Apache::Mod[ext_filter]/File[ext_filter.load]/ensure: defined content as '{md5}76d5e0ac3411a4be57ac33ebe2e52ac8'", > "Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{md5}e36257b9efab01459141d423cae57c7c'", > "Notice: /Stage[main]/Apache::Mod::Mime_magic/Apache::Mod[mime_magic]/File[mime_magic.load]/ensure: defined content as '{md5}cb8670bb2fb352aac7ebf3a85d52094c'", > "Notice: /Stage[main]/Apache::Mod::Rewrite/Apache::Mod[rewrite]/File[rewrite.load]/ensure: defined content 
as '{md5}26e2683352fc1599f29573ff0d934e79'", > "Notice: /Stage[main]/Apache::Mod::Speling/Apache::Mod[speling]/File[speling.load]/ensure: defined content as '{md5}f82e9e6b871a276c324c9eeffcec8a61'", > "Notice: /Stage[main]/Apache::Mod::Suexec/Apache::Mod[suexec]/File[suexec.load]/ensure: defined content as '{md5}c7d5c61c534ba423a79b0ae78ff9be35'", > "Notice: /Stage[main]/Apache::Mod::Version/Apache::Mod[version]/File[version.load]/ensure: defined content as '{md5}1c9243de22ace4dc8266442c48ae0c92'", > "Notice: /Stage[main]/Apache::Mod::Vhost_alias/Apache::Mod[vhost_alias]/File[vhost_alias.load]/ensure: defined content as '{md5}eca907865997d50d5130497665c3f82e'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_digest]/File[auth_digest.load]/ensure: defined content as '{md5}df9e85f8da0b239fe8e698ae7ead4f60'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_anon]/File[authn_anon.load]/ensure: defined content as '{md5}bf57b94b5aec35476fc2a2dc3861f132'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_dbm]/File[authn_dbm.load]/ensure: defined content as '{md5}90ee8f8ef1a017cacadfda4225e10651'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_dbm]/File[authz_dbm.load]/ensure: defined content as '{md5}c1363277984d22f99b70f7dce8753b60'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_owner]/File[authz_owner.load]/ensure: defined content as '{md5}f30a9be1016df87f195449d9e02d1857'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[expires]/File[expires.load]/ensure: defined content as '{md5}f0825bad1e470de86ffabeb86dcc5d95'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[include]/File[include.load]/ensure: defined content as '{md5}88095a914eedc3c2c184dd5d74c3954c'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[logio]/File[logio.load]/ensure: defined content as '{md5}084533c7a44e9129d0e6df952e2472b6'", > "Notice: 
/Stage[main]/Apache::Default_mods/Apache::Mod[substitute]/File[substitute.load]/ensure: defined content as '{md5}8077c34a71afcf41c8fc644830935915'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[usertrack]/File[usertrack.load]/ensure: defined content as '{md5}e95fbbf030fabec98b948f8dc217775c'", > "Notice: /Stage[main]/Apache::Mod::Alias/Apache::Mod[alias]/File[alias.load]/ensure: defined content as '{md5}3cf2fa309ccae4c29a4b875d0894cd79'", > "Notice: /Stage[main]/Apache::Mod::Authn_file/Apache::Mod[authn_file]/File[authn_file.load]/ensure: defined content as '{md5}d41656680003d7b890267bb73621c60b'", > "Notice: /Stage[main]/Apache::Mod::Autoindex/Apache::Mod[autoindex]/File[autoindex.load]/ensure: defined content as '{md5}515cdf5b573e961a60d2931d39248648'", > "Notice: /Stage[main]/Apache::Mod::Dav/Apache::Mod[dav]/File[dav.load]/ensure: defined content as '{md5}588e496251838c4840c14b28b5aa7881'", > "Notice: /Stage[main]/Apache::Mod::Dav_fs/File[dav_fs.conf]/ensure: defined content as '{md5}899a57534f3d84efa81887ec93c90c9b'", > "Notice: /Stage[main]/Apache::Mod::Dav_fs/Apache::Mod[dav_fs]/File[dav_fs.load]/ensure: defined content as '{md5}2996277c73b1cd684a9a3111c355e0d3'", > "Notice: /Stage[main]/Apache::Mod::Deflate/Apache::Mod[deflate]/File[deflate.load]/ensure: defined content as '{md5}2d1a1afcae0c70557251829a8586eeaf'", > "Notice: /Stage[main]/Apache::Mod::Dir/Apache::Mod[dir]/File[dir.load]/ensure: defined content as '{md5}1bfb1c2a46d7351fc9eb47c659dee068'", > "Notice: /Stage[main]/Apache::Mod::Negotiation/Apache::Mod[negotiation]/File[negotiation.load]/ensure: defined content as '{md5}d262ee6a5f20d9dd7f87770638dc2ccd'", > "Notice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as '{md5}ec6c99f7cc8e35bdbcf8028f652c9f6d'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_basic]/File[auth_basic.load]/ensure: defined content as '{md5}494bcf4b843f7908675d663d8dc1bdc8'", > "Notice: 
/Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as '{md5}66a1e2064a140c3e7dca7ac33877700e'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{md5}39942569bff2abdb259f9a347c7246bc'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[access_compat]/File[access_compat.load]/ensure: defined content as '{md5}d5feb88bec4570e2dbc41cce7e0de003'", > "Notice: /Stage[main]/Apache::Mod::Authz_user/Apache::Mod[authz_user]/File[authz_user.load]/ensure: defined content as '{md5}63594303ee808423679b1ea13dd5a784'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_groupfile]/File[authz_groupfile.load]/ensure: defined content as '{md5}ae005a36b3ac8c20af36c434561c8a75'", > "Notice: /Stage[main]/Apache::Mod::Env/Apache::Mod[env]/File[env.load]/ensure: defined content as '{md5}d74184d40d0ee24ba02626a188ee7e1a'", > "Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{md5}157529aafcf03fa491bc924103e4608e'", > "Notice: /Stage[main]/Apache::Mod::Cgi/Apache::Mod[cgi]/File[cgi.load]/ensure: defined content as '{md5}ac20c5c5779b37ab06b480d6485a0881'", > "Notice: /Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{md5}c7726ef20347ef9a06ef68eeaad79765'", > "Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{md5}e282ac9f82fe5538692a4de3616fb695'", > "Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{md5}ab31a6ea611785f74851b578572e4157'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Apache/Systemd::Dropin_file[httpd.conf]/File[/etc/systemd/system/httpd.service.d]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Apache/Systemd::Dropin_file[httpd.conf]/File[/etc/systemd/system/httpd.service.d/httpd.conf]/ensure: defined content as '{md5}c44e90292b030f86c3b82096b68fe9cc'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed", > "Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/content: content changed '{md5}9e163ce201541f8aa36fcc1a372ed34d' to '{md5}b6f6f2773db25c777f1db887e7a3f57d'", > "Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{md5}8b3feb3fc2563de439920bb2c52cbd11'", > "Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{md5}e1795e051e7aae1f865fde0d3b86a507'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-lua.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi.conf]/ensure: removed", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/File[/var/www/cgi-bin/gnocchi]/ensure: created", > "Notice: 
/Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/File[gnocchi_wsgi]/ensure: defined content as '{md5}c03530dd30d25ec70b705e0c2f43df7a'", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/Apache::Vhost[gnocchi_wsgi]/Concat[10-gnocchi_wsgi.conf]/File[/etc/httpd/conf.d/10-gnocchi_wsgi.conf]/ensure: defined content as '{md5}1524f118b98bfea9814025b4dfb8fc4a'", > "Notice: Applied catalog in 1.11 seconds", > " Total: 110", > " Success: 110", > " Changed: 110", > " Out of sync: 110", > " Total: 253", > " Skipped: 42", > " Concat file: 0.00", > " Anchor: 0.00", > " Concat fragment: 0.00", > " Augeas: 0.02", > " Gnocchi config: 0.27", > " File: 0.27", > " Last run: 1529580234", > " Config retrieval: 4.55", > " Total: 5.12", > " Resources: 0.00", > " Config: 1529580228", > "Gathering files modified after 2018-06-21 11:23:43.855045391 +0000", > "2018-06-21 11:23:56,023 DEBUG: 36350 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config'", > "+ origin_of_time=/var/lib/config-data/gnocchi.origin_of_time", > "+ touch /var/lib/config-data/gnocchi.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config /etc/config.pp", > "Warning: ModuleLoader: module 'gnocchi' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/gnocchi/manifests/db.pp\", 26]:[\"/etc/puppet/modules/gnocchi/manifests/init.pp\", 54]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/gnocchi/manifests/config.pp\", 29]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/gnocchi.pp\", 31]", > "Warning: Scope(Class[Gnocchi::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: ModuleLoader: module 'oslo' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'keystone' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'openstacklib' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/gnocchi", > "++ stat -c %y /var/lib/config-data/gnocchi.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:23:43.855045391 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/gnocchi", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/gnocchi", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/gnocchi.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/gnocchi --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/gnocchi --mtime=1970-01-01", > "2018-06-21 11:23:56,023 INFO: 36350 -- Removing container: docker-puppet-gnocchi", > "2018-06-21 11:23:56,068 DEBUG: 36350 -- docker-puppet-gnocchi", > "2018-06-21 11:23:56,068 INFO: 36350 -- Finished processing puppet configs for gnocchi", > "2018-06-21 11:23:56,068 INFO: 36350 -- Starting configuration of clustercheck using image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-21 11:23:56,069 DEBUG: 36350 -- config_volume clustercheck", > "2018-06-21 11:23:56,069 DEBUG: 36350 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-06-21 11:23:56,069 DEBUG: 36350 -- manifest include ::tripleo::profile::pacemaker::clustercheck", > "2018-06-21 11:23:56,069 DEBUG: 36350 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-21 11:23:56,069 DEBUG: 36350 -- volumes []", > "2018-06-21 11:23:56,069 INFO: 36350 -- Removing container: docker-puppet-clustercheck", > "2018-06-21 11:23:56,133 INFO: 36350 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-21 11:24:00,370 DEBUG: 36348 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- 
hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.24 seconds", > "Notice: /Stage[main]/Nova::Db/Nova_config[api_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Nova_config[placement_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[api/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/image_service]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[cinder/catalog_info]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[os_vif_linux_bridge/use_ipv6]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_api_faults]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created", > "Notice: 
/Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/os_interface]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created", 
> "Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/memcached_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}37ed0de7c9ebb4682f22584b78bf1bc4'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/File[/etc/httpd/conf.d/00-nova-placement-api.conf]/content: content changed 
'{md5}611e31d39e1635bfabc0aafc51b43d0b' to '{md5}612d455490cfecc4b51db6656ea39240'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/File[placement_wsgi]/ensure: defined content as '{md5}2c992c50344eb1765282cb9fb70126db'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/Apache::Vhost[placement_wsgi]/Concat[10-placement_wsgi.conf]/File[/etc/httpd/conf.d/10-placement_wsgi.conf]/ensure: defined content as '{md5}0736aa6e5e26bedfe11b9ef7e39d7b59'", > "Notice: Applied catalog in 7.11 seconds", > " Total: 132", > " Success: 132", > " Changed: 132", > " Out of sync: 132", > " Total: 371", > " Skipped: 39", > " Augeas: 0.03", > " Package: 0.10", > " File: 0.51", > " Total: 11.45", > " Last run: 1529580238", > " Config retrieval: 4.91", > " Nova config: 5.90", > " Config: 1529580226", > "Gathering files modified after 2018-06-21 11:23:40.980023159 +0000", > "2018-06-21 11:24:00,371 DEBUG: 36348 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config'", > "+ origin_of_time=/var/lib/config-data/nova_placement.origin_of_time", > "+ touch /var/lib/config-data/nova_placement.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config /etc/config.pp", > "ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. 
at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/placement.pp\", 62]", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/config.pp\", 37]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 114]", > "Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release", > "Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/db.pp\", 126]:[\"/etc/puppet/modules/nova/manifests/init.pp\", 530]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 533]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/placement.pp\", 62]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/placement.pp\", 101]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 138]", > "Warning: Scope(Class[Nova::Placement]): The os_region_name parameter is deprecated and will be removed \\", > "in a future release. Please use region_name instead.", > "Warning: Scope(Class[Nova::Keystone::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/nova_placement", > "++ stat -c %y /var/lib/config-data/nova_placement.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:23:40.980023159 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova_placement", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova_placement", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/nova_placement.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/nova_placement --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/nova_placement --mtime=1970-01-01", > "2018-06-21 11:24:00,371 INFO: 36348 -- Removing container: docker-puppet-nova_placement", > "2018-06-21 11:24:00,422 DEBUG: 36348 -- docker-puppet-nova_placement", > "2018-06-21 11:24:00,422 INFO: 36348 -- Finished processing puppet configs for nova_placement", > "2018-06-21 11:24:00,422 INFO: 36348 -- Starting configuration of aodh using image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-21 11:24:00,422 DEBUG: 36348 -- config_volume aodh", > "2018-06-21 11:24:00,423 DEBUG: 36348 -- puppet_tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config", > "2018-06-21 11:24:00,423 DEBUG: 36348 -- manifest include tripleo::profile::base::aodh::api", > "include tripleo::profile::base::aodh::evaluator", > "include tripleo::profile::base::aodh::listener", > "include tripleo::profile::base::aodh::notifier", > "2018-06-21 11:24:00,423 DEBUG: 36348 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-21 11:24:00,423 DEBUG: 36348 -- volumes []", > "2018-06-21 11:24:00,423 INFO: 36348 -- Removing container: docker-puppet-aodh", > "2018-06-21 11:24:00,484 
INFO: 36348 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-21 11:24:02,338 DEBUG: 36350 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-mariadb ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-mariadb", > "2ee1f6a99b58: Pulling fs layer", > "2ee1f6a99b58: Verifying Checksum", > "2ee1f6a99b58: Download complete", > "2ee1f6a99b58: Pull complete", > "Digest: sha256:2a886d2154594b405341b26bdc272a2796459d288a4fde8b2ee6f5ca253f6792", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-21 11:24:02,342 DEBUG: 36350 -- NET_HOST enabled", > "2018-06-21 11:24:02,342 DEBUG: 36350 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-clustercheck --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=clustercheck --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpys3Uda:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-21 11:24:03,273 DEBUG: 36348 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-api ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-api", > "cb7d08d4cc0c: Pulling fs layer", > "6e57c8911d7b: Pulling fs layer", > "6e57c8911d7b: Verifying Checksum", > "6e57c8911d7b: Download complete", > "cb7d08d4cc0c: Verifying Checksum", > "cb7d08d4cc0c: Download complete", > "cb7d08d4cc0c: Pull complete", > "6e57c8911d7b: Pull complete", > "Digest: sha256:fa189b1bb39e6c29a0fe5a6e824ae0f89206ba6749e373e719edac2129e0ff6b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-21 11:24:03,277 DEBUG: 36348 -- NET_HOST enabled", > "2018-06-21 11:24:03,277 DEBUG: 36348 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-aodh --env PUPPET_TAGS=file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config --env NAME=aodh --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpNhqnOG:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-21 11:24:03,880 DEBUG: 36349 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for 
controller-0.localdomain in environment production in 2.14 seconds", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/plugins]/ensure: created", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/port]/ensure: created", > "Notice: /Stage[main]/Sahara::Service::Api/Sahara_config[DEFAULT/api_workers]/ensure: created", > "Notice: /Stage[main]/Sahara::Logging/Oslo::Log[sahara_config]/Sahara_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Sahara::Logging/Oslo::Log[sahara_config]/Sahara_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Sahara::Policy/Oslo::Policy[sahara_config]/Sahara_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/username]/ensure: created", > "Notice: 
/Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Default[sahara_config]/Sahara_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Rabbit[sahara_config]/Sahara_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Zmq[sahara_config]/Sahara_config[DEFAULT/rpc_zmq_host]/ensure: created", > "Notice: /Stage[main]/Sahara::Notify/Oslo::Messaging::Notifications[sahara_config]/Sahara_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Sahara::Notify/Oslo::Messaging::Notifications[sahara_config]/Sahara_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: Applied catalog in 1.34 seconds", > " Total: 25", > " Success: 25", > " Total: 196", > " Skipped: 23", > " Out of sync: 25", > " Changed: 25", > " Package: 0.05", > " Sahara config: 0.98", > " Last run: 1529580242", > " Config retrieval: 2.42", > " Total: 3.48", > " Config: 1529580238", > "Gathering files modified after 2018-06-21 11:23:53.398117427 +0000", > "2018-06-21 11:24:03,880 DEBUG: 36349 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template 
']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template'", > "+ origin_of_time=/var/lib/config-data/sahara.origin_of_time", > "+ touch /var/lib/config-data/sahara.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template /etc/config.pp", > "Warning: ModuleLoader: module 'sahara' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/sahara/manifests/db.pp\", 69]:[\"/etc/puppet/modules/sahara/manifests/init.pp\", 380]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/sahara/manifests/policy.pp\", 34]:[\"/etc/puppet/modules/sahara/manifests/init.pp\", 381]", > "Warning: Scope(Class[Sahara]): The use_neutron parameter has been deprecated and will be removed in the future release.", > "Warning: Scope(Class[Sahara]): sahara::admin_user, sahara::admin_password, sahara::auth_uri, sahara::identity_uri, sahara::admin_tenant_name and sahara::memcached_servers are deprecated. Please use sahara::keystone::authtoken::* parameters instead.", > "Warning: Scope(Class[Sahara::Keystone::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/sahara", > "++ stat -c %y /var/lib/config-data/sahara.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:23:53.398117427 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/sahara", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/sahara", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/sahara.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/sahara --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/sahara --mtime=1970-01-01", > "2018-06-21 11:24:03,880 INFO: 36349 -- Removing container: docker-puppet-sahara", > "2018-06-21 11:24:03,918 DEBUG: 36349 -- docker-puppet-sahara", > "2018-06-21 11:24:03,918 INFO: 36349 -- Finished processing puppet configs for sahara", > "2018-06-21 11:24:03,919 INFO: 36349 -- Starting configuration of mysql using image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-21 11:24:03,919 DEBUG: 36349 -- config_volume mysql", > "2018-06-21 11:24:03,919 DEBUG: 36349 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-06-21 11:24:03,919 DEBUG: 36349 -- manifest ['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }", > "2018-06-21 11:24:03,919 DEBUG: 36349 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-21 11:24:03,919 DEBUG: 36349 -- volumes []", > "2018-06-21 11:24:03,920 INFO: 36349 -- Removing container: docker-puppet-mysql", > "2018-06-21 11:24:03,970 INFO: 36349 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-21 11:24:03,974 DEBUG: 36349 -- NET_HOST enabled", > "2018-06-21 11:24:03,974 DEBUG: 36349 -- Running docker command: 
/usr/bin/docker run --user root --name docker-puppet-mysql --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=mysql --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpxo8ixt:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-21 11:24:09,146 DEBUG: 36350 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.42 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}5b8acaa58a90d174e15437cd06a5f6f1'", > "Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/content: content changed '{md5}9ff8cc688dd9f0dfc45e5afd25c427a7' to '{md5}7d37008224e71625019cb48768f267e7'", > "Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/mode: mode changed '0600' to '0644'", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/Xinetd::Service[galera-monitor]/File[/etc/xinetd.d/galera-monitor]/ensure: defined content as '{md5}3afdef3c0450b1869412e40a88f2bfb2'", > "Notice: Applied catalog in 0.04 seconds", > 
" Total: 4", > " Success: 4", > " Total: 13", > " Out of sync: 3", > " Changed: 3", > " Skipped: 9", > " File: 0.02", > " Config retrieval: 0.56", > " Total: 0.58", > " Last run: 1529580248", > " Config: 1529580247", > "Gathering files modified after 2018-06-21 11:24:02.522183837 +0000", > "2018-06-21 11:24:09,146 DEBUG: 36350 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,file ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,file'", > "+ origin_of_time=/var/lib/config-data/clustercheck.origin_of_time", > "+ touch /var/lib/config-data/clustercheck.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,file /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/clustercheck", > "++ stat -c %y /var/lib/config-data/clustercheck.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:24:02.522183837 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/clustercheck", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/clustercheck", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/clustercheck.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/clustercheck --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/clustercheck --mtime=1970-01-01", > "2018-06-21 11:24:09,146 INFO: 36350 -- Removing container: docker-puppet-clustercheck", > "2018-06-21 11:24:09,184 DEBUG: 36350 -- docker-puppet-clustercheck", > "2018-06-21 11:24:09,185 INFO: 36350 -- Finished processing puppet configs for clustercheck", > "2018-06-21 11:24:09,185 INFO: 36350 -- Starting configuration of redis using image 
192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-21 11:24:09,185 DEBUG: 36350 -- config_volume redis", > "2018-06-21 11:24:09,185 DEBUG: 36350 -- puppet_tags file,file_line,concat,augeas,cron,exec", > "2018-06-21 11:24:09,185 DEBUG: 36350 -- manifest include ::tripleo::profile::pacemaker::database::redis_bundle", > "2018-06-21 11:24:09,185 DEBUG: 36350 -- config_image 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-21 11:24:09,185 DEBUG: 36350 -- volumes []", > "2018-06-21 11:24:09,186 INFO: 36350 -- Removing container: docker-puppet-redis", > "2018-06-21 11:24:09,251 INFO: 36350 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-21 11:24:12,707 DEBUG: 36350 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-redis ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-redis", > "13055d264df1: Pulling fs layer", > "dfc35b833f61: Pulling fs layer", > "13055d264df1: Download complete", > "13055d264df1: Pull complete", > "dfc35b833f61: Verifying Checksum", > "dfc35b833f61: Download complete", > "dfc35b833f61: Pull complete", > "Digest: sha256:7782f917270ad46f451fe06063a6adb53afe9d81474a7af374ed7b9c09d1b055", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-21 11:24:12,711 DEBUG: 36350 -- NET_HOST enabled", > "2018-06-21 11:24:12,711 DEBUG: 36350 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-redis --env PUPPET_TAGS=file,file_line,concat,augeas,cron,exec --env NAME=redis --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpGVqJrp:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-21 11:24:14,876 DEBUG: 36349 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.25 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/ensure: defined content as '{md5}e51811cf726fa3e6a5a924a379dc5198'", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}5a169246460baf3e552027b0f5e8a1f8'", > "Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: content changed '{md5}af90358207ccfecae7af249d5ef7dd3e' to '{md5}da920df6baf6c7424ed796c11086927e'", > "Notice: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Tripleo::Pacemaker::Resource_restart_flag[galera-master]/File[/var/lib/tripleo]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Tripleo::Pacemaker::Resource_restart_flag[galera-master]/File[/var/lib/tripleo/pacemaker-restarts]/ensure: created", > "Notice: Applied catalog in 0.41 seconds", > " Total: 6", > " Success: 6", > " Skipped: 226", > " Total: 233", > " Out of sync: 6", > " Changed: 6", > " File: 0.04", > " Last run: 1529580253", > " Config retrieval: 
4.70", > " Total: 4.74", > " Config: 1529580248", > "Gathering files modified after 2018-06-21 11:24:04.166195556 +0000", > "2018-06-21 11:24:14,877 DEBUG: 36349 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/mysql.origin_of_time", > "+ touch /var/lib/config-data/mysql.origin_of_time", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/mysql_bundle.pp\", 133]:[\"/etc/config.pp\", 4]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 103]:[\"/etc/config.pp\", 4]", > "Warning: ModuleLoader: module 'aodh' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/aodh/manifests/db/mysql.pp\", 58]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 175]", > "Warning: ModuleLoader: module 'cinder' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'glance' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'heat' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'panko' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/openstacklib/manifests/db/mysql/host_access.pp\", 43]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/mysql", > "++ stat -c %y /var/lib/config-data/mysql.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:24:04.166195556 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/mysql", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/mysql", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/mysql.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/mysql --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/mysql --mtime=1970-01-01", > "2018-06-21 11:24:14,877 INFO: 36349 -- Removing container: docker-puppet-mysql", > "2018-06-21 11:24:14,915 DEBUG: 36349 -- docker-puppet-mysql", > "2018-06-21 11:24:14,916 INFO: 36349 -- Finished processing puppet configs for mysql", > "2018-06-21 11:24:14,916 INFO: 36349 -- Starting configuration of nova using image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-21 11:24:14,916 DEBUG: 36349 -- config_volume nova", > "2018-06-21 11:24:14,916 DEBUG: 36349 -- puppet_tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config", > "2018-06-21 11:24:14,916 DEBUG: 36349 -- manifest ['Nova_cell_v2'].each |String $val| { noop_resource($val) }", > "include tripleo::profile::base::nova::conductor", > "include tripleo::profile::base::nova::consoleauth", > "include 
tripleo::profile::base::nova::scheduler", > "include tripleo::profile::base::nova::vncproxy", > "2018-06-21 11:24:14,916 DEBUG: 36349 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-21 11:24:14,916 DEBUG: 36349 -- volumes []", > "2018-06-21 11:24:14,917 INFO: 36349 -- Removing container: docker-puppet-nova", > "2018-06-21 11:24:14,984 INFO: 36349 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-21 11:24:16,325 DEBUG: 36349 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-api", > "0e3031608420: Already exists", > "b32f33ab1345: Pulling fs layer", > "b32f33ab1345: Verifying Checksum", > "b32f33ab1345: Download complete", > "b32f33ab1345: Pull complete", > "Digest: sha256:98f38e1deb6081bcc8d18a914af693593a06823741381f71dacd158824ef18f8", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-21 11:24:16,328 DEBUG: 36349 -- NET_HOST enabled", > "2018-06-21 11:24:16,328 DEBUG: 36349 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config --env NAME=nova --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpveztwp:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-21 11:24:16,603 DEBUG: 36348 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.80 seconds", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Aodh_config[api/gnocchi_external_project_owner]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Aodh_config[api/host]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Aodh_config[api/port]/ensure: created", > "Notice: /Stage[main]/Aodh::Evaluator/Aodh_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Db/Oslo::Db[aodh_config]/Aodh_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Aodh::Logging/Oslo::Log[aodh_config]/Aodh_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Aodh::Logging/Oslo::Log[aodh_config]/Aodh_config[DEFAULT/log_dir]/ensure: created", > 
"Notice: /Stage[main]/Aodh/Oslo::Messaging::Rabbit[aodh_config]/Aodh_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Default[aodh_config]/Aodh_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Notifications[aodh_config]/Aodh_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Notifications[aodh_config]/Aodh_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Policy/Oslo::Policy[aodh_config]/Aodh_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/project_name]/ensure: created", > "Notice: 
/Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Oslo::Middleware[aodh_config]/Aodh_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}fc316e9d923e3a94945cfb8c64307e1d'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[/var/www/cgi-bin/aodh]/owner: owner changed 'root' to 'aodh'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[/var/www/cgi-bin/aodh]/group: group changed 'root' to 'aodh'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[aodh_wsgi]/ensure: defined content as '{md5}09d823939c45501c11f2096289fe70cf'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/Apache::Vhost[aodh_wsgi]/Concat[10-aodh_wsgi.conf]/File[/etc/httpd/conf.d/10-aodh_wsgi.conf]/ensure: defined content as '{md5}3a5e55367f0144775f4f683dd00c98a7'", > "Notice: Applied catalog in 2.33 seconds", > " Total: 112", > " Success: 112", > " Changed: 111", > " Out of sync: 111", > " Total: 331", > " Skipped: 40", > " Package: 0.04", > " File: 0.43", > " Aodh config: 1.00", > " Last run: 1529580254", > " Config retrieval: 4.44", > " Total: 5.94", > "Gathering files modified after 2018-06-21 11:24:03.486190717 +0000", > "2018-06-21 11:24:16,603 DEBUG: 36348 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config'", > "+ origin_of_time=/var/lib/config-data/aodh.origin_of_time", > "+ touch /var/lib/config-data/aodh.origin_of_time", > "+ /usr/bin/puppet apply --summarize 
--detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/aodh/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/aodh.pp\", 123]", > "Warning: Scope(Class[Aodh::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/oslo/manifests/db.pp\", 140]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/aodh", > "++ stat -c %y /var/lib/config-data/aodh.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:24:03.486190717 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/aodh", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/aodh", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/aodh.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/aodh --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/aodh --mtime=1970-01-01", > "2018-06-21 11:24:16,603 INFO: 36348 -- Removing container: docker-puppet-aodh", > "2018-06-21 11:24:16,826 DEBUG: 36348 -- docker-puppet-aodh", > "2018-06-21 11:24:16,827 INFO: 36348 -- Finished processing puppet configs for aodh", > "2018-06-21 11:24:16,827 INFO: 36348 -- Starting configuration of heat_api using image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-21 11:24:16,827 DEBUG: 36348 -- config_volume heat_api", > "2018-06-21 11:24:16,827 DEBUG: 36348 
-- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-06-21 11:24:16,827 DEBUG: 36348 -- manifest include ::tripleo::profile::base::heat::api", > "2018-06-21 11:24:16,827 DEBUG: 36348 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-21 11:24:16,827 DEBUG: 36348 -- volumes []", > "2018-06-21 11:24:16,828 INFO: 36348 -- Removing container: docker-puppet-heat_api", > "2018-06-21 11:24:16,889 INFO: 36348 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-21 11:24:19,053 DEBUG: 36348 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-api", > "15497368e843: Pulling fs layer", > "a91507f6d5dc: Pulling fs layer", > "a91507f6d5dc: Verifying Checksum", > "a91507f6d5dc: Download complete", > "15497368e843: Verifying Checksum", > "15497368e843: Download complete", > "15497368e843: Pull complete", > "a91507f6d5dc: Pull complete", > "Digest: sha256:7e8eb4cb5943296bd67f2e22c40a7519d3c71f8533541c54da0c9f5ef6b361ce", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-21 11:24:19,057 DEBUG: 36348 -- NET_HOST enabled", > "2018-06-21 11:24:19,057 DEBUG: 36348 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat_api --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat_api --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp02T7lc:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-21 11:24:20,133 DEBUG: 36350 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.05 seconds", > "Notice: /Stage[main]/Redis::Config/File[/etc/redis]/ensure: created", > "Notice: /Stage[main]/Redis::Config/File[/var/log/redis]/mode: mode changed '0750' to '0755'", > "Notice: /Stage[main]/Redis::Config/File[/var/lib/redis]/mode: mode changed '0750' to '0755'", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/security/limits.d/redis.conf]/ensure: defined content as '{md5}a2f723773964f5ea42b6c7c5d6b72208'", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]/mode: mode changed '0644' to '0444'", > "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]/ensure: defined content as '{md5}94de54ece28c930b89fefe1be0a08a8f'", > "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Triggered 'refresh' from 1 events", > "Notice: Applied catalog in 0.06 seconds", > " Restarted: 1", > " Skipped: 11", > " Total: 21", > " Exec: 0.00", > " Augeas: 0.01", > " File: 0.01", > " Config retrieval: 1.24", > " Total: 1.26", > " Last run: 1529580259", > " Config: 1529580257", > "Gathering files modified after 2018-06-21 11:24:12.908256612 +0000", > "2018-06-21 11:24:20,133 DEBUG: 36350 -- + mkdir -p /etc/puppet", > "+ '[' -n 
file,file_line,concat,augeas,cron,exec ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,exec'", > "+ origin_of_time=/var/lib/config-data/redis.origin_of_time", > "+ touch /var/lib/config-data/redis.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,exec /etc/config.pp", > "Warning: ModuleLoader: module 'redis' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/redis", > "++ stat -c %y /var/lib/config-data/redis.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:24:12.908256612 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/redis", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/redis.origin_of_time -not -path '/etc/puppet*' -print0", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/redis", > "+ tar -c -f - /var/lib/config-data/redis --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/redis --mtime=1970-01-01", > "2018-06-21 11:24:20,133 INFO: 36350 -- Removing container: docker-puppet-redis", > "2018-06-21 11:24:20,172 DEBUG: 36350 -- docker-puppet-redis", > "2018-06-21 11:24:20,173 INFO: 36350 -- Finished processing puppet configs for redis", > "2018-06-21 11:24:20,173 INFO: 36350 -- Starting configuration of keystone using image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-21 11:24:20,173 DEBUG: 36350 -- config_volume keystone", > "2018-06-21 11:24:20,173 DEBUG: 36350 -- puppet_tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config", > "2018-06-21 11:24:20,173 DEBUG: 36350 -- manifest 
['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }", > "2018-06-21 11:24:20,173 DEBUG: 36350 -- config_image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-21 11:24:20,174 DEBUG: 36350 -- volumes []", > "2018-06-21 11:24:20,174 INFO: 36350 -- Removing container: docker-puppet-keystone", > "2018-06-21 11:24:20,242 INFO: 36350 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-21 11:24:22,612 DEBUG: 36350 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-keystone ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-keystone", > "6222a19b9ac2: Pulling fs layer", > "900dd421e68b: Pulling fs layer", > "900dd421e68b: Verifying Checksum", > "900dd421e68b: Download complete", > "6222a19b9ac2: Verifying Checksum", > "6222a19b9ac2: Download complete", > "6222a19b9ac2: Pull complete", > "900dd421e68b: Pull complete", > "Digest: sha256:5aaa5a4237af74f89ed31c8ff7e97414693ecfb9ce82bcb13f238c1a96030dc5", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-21 11:24:22,615 DEBUG: 36350 -- NET_HOST enabled", > "2018-06-21 11:24:22,616 DEBUG: 36350 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-keystone --env PUPPET_TAGS=file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config --env NAME=keystone --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpifSOI6:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro 
--volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-21 11:24:32,020 DEBUG: 36348 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.58 seconds", > "Notice: /Stage[main]/Heat::Cron::Purge_deleted/Cron[heat-manage purge_deleted]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin_password]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/auth_type]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/auth_url]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/username]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/password]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[clients_keystone/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[DEFAULT/max_json_body_size]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[ec2authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[yaql/limit_iterators]/ensure: created", > "Notice: 
/Stage[main]/Heat/Heat_config[yaql/memory_quota]/ensure: created", > "Notice: /Stage[main]/Heat::Api/Heat_config[heat_api/bind_host]/ensure: created", > "Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_name]/ensure: created", > "Notice: 
/Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/rpc_response_timeout]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Middleware[heat_config]/Heat_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/expose_headers]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/max_age]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/allow_headers]/ensure: created", > "Notice: /Stage[main]/Heat::Policy/Oslo::Policy[heat_config]/Heat_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}0b4bad3c8a21111582786caceb3bc55a'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[/var/www/cgi-bin/heat]/ensure: created", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[heat_api_wsgi]/ensure: 
defined content as '{md5}640891728ce5d46ae40234228561597c'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/Apache::Vhost[heat_api_wsgi]/Concat[10-heat_api_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_wsgi.conf]/ensure: defined content as '{md5}e7b2b5d57d7b13197d33bbcc8ee73b93'", > "Notice: Applied catalog in 2.57 seconds", > " Total: 121", > " Success: 121", > " Changed: 121", > " Out of sync: 121", > " Skipped: 32", > " Total: 335", > " Cron: 0.01", > " File: 0.31", > " Heat config: 1.62", > " Last run: 1529580270", > " Config retrieval: 4.22", > " Total: 6.20", > " Config: 1529580263", > "Gathering files modified after 2018-06-21 11:24:19.243299572 +0000", > "2018-06-21 11:24:32,020 DEBUG: 36348 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,heat_config,file,concat,file_line ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line'", > "+ origin_of_time=/var/lib/config-data/heat_api.origin_of_time", > "+ touch /var/lib/config-data/heat_api.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line /etc/config.pp", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/heat/manifests/db.pp\", 75]:[\"/etc/puppet/modules/heat/manifests/init.pp\", 363]", > "Warning: Scope(Class[Heat::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/heat/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/heat.pp\", 134]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat_api", > "++ stat -c %y /var/lib/config-data/heat_api.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:24:19.243299572 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat_api", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat_api", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat_api.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/heat_api --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/heat_api --mtime=1970-01-01", > "2018-06-21 11:24:32,021 INFO: 36348 -- Removing container: docker-puppet-heat_api", > "2018-06-21 11:24:32,063 DEBUG: 36348 -- docker-puppet-heat_api", > "2018-06-21 11:24:32,063 INFO: 36348 -- Finished processing puppet configs for heat_api", > "2018-06-21 11:24:32,064 INFO: 36348 -- Starting configuration of heat using image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-21 11:24:32,064 DEBUG: 36348 -- config_volume heat", > "2018-06-21 11:24:32,064 DEBUG: 36348 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-06-21 11:24:32,064 DEBUG: 36348 -- manifest include ::tripleo::profile::base::heat::engine", > "2018-06-21 11:24:32,064 DEBUG: 36348 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-21 11:24:32,065 DEBUG: 36348 -- volumes []", > "2018-06-21 11:24:32,065 INFO: 36348 -- Removing container: docker-puppet-heat", > "2018-06-21 11:24:32,111 INFO: 36348 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-21 11:24:32,114 DEBUG: 36348 -- NET_HOST 
enabled", > "2018-06-21 11:24:32,114 DEBUG: 36348 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpBwoaBz:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-21 11:24:35,489 DEBUG: 36350 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.70 seconds", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_token]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_bind_host]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_bind_host]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_port]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_port]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/driver]/ensure: created", > "Notice: 
/Stage[main]/Keystone/Keystone_config[token/expiration]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[ssl/enable]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[catalog/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[catalog/template_file]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/provider]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/notification_format]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[eventlet_server/admin_workers]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[eventlet_server/public_workers]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/0]/ensure: defined content as '{md5}3ddf048c6871705212f4baf1cfefd644'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/1]/ensure: defined content as '{md5}647fa860739b2fc2966edcf071d44bce'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/0]/ensure: defined content as '{md5}a5a47011b0d90d93073fccce60578ec1'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/1]/ensure: defined content as '{md5}eeabf96eb5042b89a83b6e200a9e1507'", > "Notice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/key_repository]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/revoke_by_id]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/max_active_keys]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[credential/key_repository]/ensure: created", > "Notice: /Stage[main]/Keystone::Config/Keystone_config[ec2/driver]/ensure: created", > "Notice: /Stage[main]/Keystone::Cron::Token_flush/Cron[keystone-manage token_flush]/ensure: created", > 
"Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Keystone::Policy/Oslo::Policy[keystone_config]/Keystone_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Middleware[keystone_config]/Keystone_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Default[keystone_config]/Keystone_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}aa40eeefa414cf0235029477fb28fba9'", > "Notice: 
/Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_main]/File[keystone_wsgi_main]/ensure: defined content as '{md5}072422f0d75777ed1783e6910b3ddc58'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_admin]/File[keystone_wsgi_admin]/ensure: defined content as '{md5}d6dda52b0e14d80a652ecf42686d3962'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_gssapi.conf]/ensure: removed", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_main]/Apache::Vhost[keystone_wsgi_main]/Concat[10-keystone_wsgi_main.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_main.conf]/ensure: defined content as '{md5}653272cb76fd2943463a866083dbbfde'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_admin]/Apache::Vhost[keystone_wsgi_admin]/Concat[10-keystone_wsgi_admin.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_admin.conf]/ensure: defined content as '{md5}b82460ec44e6c9b3e569f0be298c5774'", > "Notice: Applied catalog in 2.59 seconds", > " Total: 122", > " Success: 122", > " Changed: 122", > " Out of sync: 122", > " Total: 320", > " Skipped: 34", > " File: 0.39", > " Keystone config: 1.53", > " Last run: 1529580274", > " Config retrieval: 4.29", > " Total: 6.29", > " Config: 1529580267", > "Gathering files modified after 2018-06-21 11:24:22.818323352 +0000", > "2018-06-21 11:24:35,489 DEBUG: 36350 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config'", > "+ origin_of_time=/var/lib/config-data/keystone.origin_of_time", > "+ touch /var/lib/config-data/keystone.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags 
file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/policy.pp\", 34]:[\"/etc/puppet/modules/keystone/manifests/init.pp\", 757]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/init.pp\", 760]:[\"/etc/config.pp\", 3]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/init.pp\", 1108]:[\"/etc/config.pp\", 3]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/keystone", > "++ stat -c %y /var/lib/config-data/keystone.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:24:22.818323352 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/keystone", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/keystone", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/keystone.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/keystone --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/keystone --mtime=1970-01-01", > "2018-06-21 11:24:35,489 INFO: 36350 -- Removing container: docker-puppet-keystone", > "2018-06-21 11:24:35,553 DEBUG: 36350 -- docker-puppet-keystone", > "2018-06-21 11:24:35,553 INFO: 36350 -- Finished processing puppet configs for keystone", > "2018-06-21 11:24:35,553 INFO: 36350 -- Starting configuration of memcached using image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-21 11:24:35,553 DEBUG: 36350 -- config_volume memcached", > "2018-06-21 11:24:35,553 DEBUG: 36350 -- puppet_tags 
file,file_line,concat,augeas,cron,file", > "2018-06-21 11:24:35,553 DEBUG: 36350 -- manifest include ::tripleo::profile::base::memcached", > "2018-06-21 11:24:35,553 DEBUG: 36350 -- config_image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-21 11:24:35,553 DEBUG: 36350 -- volumes []", > "2018-06-21 11:24:35,554 INFO: 36350 -- Removing container: docker-puppet-memcached", > "2018-06-21 11:24:35,610 INFO: 36350 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-21 11:24:36,936 DEBUG: 36350 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-memcached ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-memcached", > "ca902f72935a: Pulling fs layer", > "ca902f72935a: Verifying Checksum", > "ca902f72935a: Download complete", > "ca902f72935a: Pull complete", > "Digest: sha256:d1285a1e78900b5c0c58e5c03f624e46f6b871ff4ffa9d972ef012568a9f1046", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-21 11:24:36,939 DEBUG: 36350 -- NET_HOST enabled", > "2018-06-21 11:24:36,939 DEBUG: 36350 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-memcached --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=memcached --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpjCPflk:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-21 11:24:39,148 DEBUG: 36349 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.76 seconds", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}4f3bcbde7510fa19b7c63283a7470976'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[nova_api_wsgi]/ensure: defined content as '{md5}8bcfb466d72544dd31a4f339243ed669'", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/instance_name_template]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[wsgi/api_paste_config]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/enabled_apis]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen_port]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen_port]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_volume_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_workers]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_workers]/ensure: created", > "Notice: 
/Stage[main]/Nova::Api/Nova_config[api/use_forwarded_for]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[api/fping_path]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[neutron/service_metadata_proxy]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[neutron/metadata_proxy_shared_secret]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created", > "Notice: 
/Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Conductor/Nova_config[conductor/workers]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/driver]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/discover_hosts_in_cells_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[scheduler/max_attempts]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/host_subset_size]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_io_ops_per_host]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_instances_per_host]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/weight_classes]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_host]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_port]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/auth_schemes]/ensure: created", > "Notice: /Stage[main]/Nova::Policy/Oslo::Policy[nova_config]/Nova_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Oslo::Middleware[nova_config]/Nova_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Nova::Cron::Archive_deleted_rows/Cron[nova-manage db archive_deleted_rows]/ensure: created", > "Notice: /Stage[main]/Nova::Cron::Purge_shadow_tables/Cron[nova-manage db purge]/ensure: created", > "Notice: 
/Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/Apache::Vhost[nova_api_wsgi]/Concat[10-nova_api_wsgi.conf]/File[/etc/httpd/conf.d/10-nova_api_wsgi.conf]/ensure: defined content as '{md5}5fb7a8f737662544790610b5d8f92ceb'", > "Notice: Applied catalog in 10.03 seconds", > " Total: 180", > " Success: 180", > " Changed: 180", > " Out of sync: 180", > " Total: 501", > " Skipped: 75", > " Cron: 0.02", > " Package: 0.09", > " File: 0.32", > " Total: 14.75", > " Last run: 1529580276", > " Config retrieval: 5.53", > " Nova config: 8.77", > " Config: 1529580261", > "Gathering files modified after 2018-06-21 11:24:16.525281269 +0000", > "2018-06-21 11:24:39,149 DEBUG: 36349 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config'", > "+ origin_of_time=/var/lib/config-data/nova.origin_of_time", > "+ touch /var/lib/config-data/nova.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/api.pp\", 92]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/nova/manifests/init.pp\", 533]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/api.pp\", 92]", > "Warning: Unknown variable: '::nova::api::default_floating_pool'. at /etc/puppet/modules/nova/manifests/network/neutron.pp:112:38", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/nova/manifests/scheduler/filter.pp\", 150]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/scheduler.pp\", 32]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/nova", > "++ stat -c %y /var/lib/config-data/nova.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:24:16.525281269 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/nova.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/nova --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/nova --mtime=1970-01-01", > "2018-06-21 11:24:39,149 INFO: 36349 -- Removing container: docker-puppet-nova", > "2018-06-21 11:24:39,198 DEBUG: 36349 -- docker-puppet-nova", > "2018-06-21 11:24:39,198 INFO: 36349 -- Finished processing puppet configs for nova", > "2018-06-21 11:24:39,199 INFO: 36349 -- Starting configuration of iscsid using image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-21 11:24:39,199 DEBUG: 36349 -- config_volume iscsid", > "2018-06-21 11:24:39,199 DEBUG: 36349 -- puppet_tags file,file_line,concat,augeas,cron,iscsid_config", > "2018-06-21 11:24:39,199 DEBUG: 36349 -- manifest include ::tripleo::profile::base::iscsid", > 
"2018-06-21 11:24:39,199 DEBUG: 36349 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-21 11:24:39,199 DEBUG: 36349 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-06-21 11:24:39,199 INFO: 36349 -- Removing container: docker-puppet-iscsid", > "2018-06-21 11:24:39,259 INFO: 36349 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-21 11:24:39,873 DEBUG: 36349 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-iscsid ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-iscsid", > "ab4eae34093d: Pulling fs layer", > "ab4eae34093d: Verifying Checksum", > "ab4eae34093d: Download complete", > "ab4eae34093d: Pull complete", > "Digest: sha256:a46aa93fee87b0f173118da5c2a18dc271772adb839a481ec07f2a53534ac53c", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-21 11:24:39,876 DEBUG: 36349 -- NET_HOST enabled", > "2018-06-21 11:24:39,876 DEBUG: 36349 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-iscsid --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpQfoGlw:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/iscsi:/etc/iscsi 
--entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-21 11:24:42,614 DEBUG: 36348 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.09 seconds", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/auth_encryption_key]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_metadata_server_url]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_waitcondition_server_url]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_resources_per_stack]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/num_engine_workers]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/convergence_engine]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/reauthentication_auth_method]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_nested_stack_depth]/ensure: created", > "Notice: Applied catalog in 1.84 seconds", > " Total: 48", > " Success: 48", > " Skipped: 21", > " Total: 223", > " Out of sync: 48", > " Changed: 48", > " Heat config: 1.60", > " Last run: 1529580281", > " Config retrieval: 2.45", > " Total: 4.12", > " Config: 1529580277", > "Gathering files modified after 2018-06-21 11:24:32.300384837 +0000", > "2018-06-21 11:24:42,615 DEBUG: 36348 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/heat.origin_of_time", > "+ touch /var/lib/config-data/heat.origin_of_time", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat", > "++ stat -c %y /var/lib/config-data/heat.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:24:32.300384837 +0000'", 
> "+ mkdir -p /var/lib/config-data/puppet-generated/heat", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/heat --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/heat --mtime=1970-01-01", > "2018-06-21 11:24:42,615 INFO: 36348 -- Removing container: docker-puppet-heat", > "2018-06-21 11:24:42,647 DEBUG: 36348 -- docker-puppet-heat", > "2018-06-21 11:24:42,647 INFO: 36348 -- Finished processing puppet configs for heat", > "2018-06-21 11:24:42,647 INFO: 36348 -- Starting configuration of cinder using image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-21 11:24:42,647 DEBUG: 36348 -- config_volume cinder", > "2018-06-21 11:24:42,647 DEBUG: 36348 -- puppet_tags file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line", > "2018-06-21 11:24:42,647 DEBUG: 36348 -- manifest include ::tripleo::profile::base::cinder::api", > "include ::tripleo::profile::base::cinder::backup::ceph", > "include ::tripleo::profile::base::cinder::scheduler", > "include ::tripleo::profile::base::lvm", > "2018-06-21 11:24:42,647 DEBUG: 36348 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-21 11:24:42,647 DEBUG: 36348 -- volumes []", > "2018-06-21 11:24:42,648 INFO: 36348 -- Removing container: docker-puppet-cinder", > "2018-06-21 11:24:42,709 INFO: 36348 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-21 11:24:42,964 DEBUG: 36350 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in 
environment production in 0.59 seconds", > "Notice: /Stage[main]/Memcached/File[/etc/sysconfig/memcached]/content: content changed '{md5}a50ed62e82d31fb4cb2de2226650c545' to '{md5}b2122e2e949e073bd7247089cc6c41bf'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Memcached/Systemd::Dropin_file[memcached.conf]/File[/etc/systemd/system/memcached.service.d]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Memcached/Systemd::Dropin_file[memcached.conf]/File[/etc/systemd/system/memcached.service.d/memcached.conf]/ensure: defined content as '{md5}c44e90292b030f86c3b82096b68fe9cc'", > "Notice: Applied catalog in 0.10 seconds", > " Total: 3", > " Success: 3", > " Skipped: 10", > " File: 0.03", > " Config retrieval: 0.70", > " Total: 0.73", > " Last run: 1529580282", > " Config: 1529580281", > "Gathering files modified after 2018-06-21 11:24:37.141415363 +0000", > "2018-06-21 11:24:42,964 DEBUG: 36350 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/memcached.origin_of_time", > "+ touch /var/lib/config-data/memcached.origin_of_time", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/memcached", > "++ stat -c %y /var/lib/config-data/memcached.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:24:37.141415363 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/memcached", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/memcached", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/memcached.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/memcached --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/memcached --mtime=1970-01-01", > "2018-06-21 11:24:42,964 INFO: 36350 -- Removing container: docker-puppet-memcached", > "2018-06-21 11:24:43,003 DEBUG: 36350 -- docker-puppet-memcached", > "2018-06-21 11:24:43,004 
INFO: 36350 -- Finished processing puppet configs for memcached", > "2018-06-21 11:24:43,004 INFO: 36350 -- Starting configuration of panko using image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-21 11:24:43,004 DEBUG: 36350 -- config_volume panko", > "2018-06-21 11:24:43,004 DEBUG: 36350 -- puppet_tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config", > "2018-06-21 11:24:43,004 DEBUG: 36350 -- manifest include tripleo::profile::base::panko::api", > "2018-06-21 11:24:43,004 DEBUG: 36350 -- config_image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-21 11:24:43,004 DEBUG: 36350 -- volumes []", > "2018-06-21 11:24:43,005 INFO: 36350 -- Removing container: docker-puppet-panko", > "2018-06-21 11:24:43,072 INFO: 36350 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-21 11:24:45,489 DEBUG: 36350 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-panko-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-panko-api", > "e67be68e6dd6: Pulling fs layer", > "37e4d86c7a37: Pulling fs layer", > "37e4d86c7a37: Verifying Checksum", > "37e4d86c7a37: Download complete", > "e67be68e6dd6: Verifying Checksum", > "e67be68e6dd6: Download complete", > "e67be68e6dd6: Pull complete", > "37e4d86c7a37: Pull complete", > "Digest: sha256:af7f2810620f1617a589387bcde33173bbf96ee4d0ea85e34d70bdfd83328d21", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-21 11:24:45,493 DEBUG: 36350 -- NET_HOST enabled", > "2018-06-21 11:24:45,493 DEBUG: 36350 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-panko --env PUPPET_TAGS=file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config --env NAME=panko --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpPhxEnH:/etc/config.pp:ro,z --volume 
/etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-21 11:24:45,858 DEBUG: 36349 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.49 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created", > " Total: 2", > " Success: 2", > " Total: 10", > " Out of sync: 2", > " Changed: 2", > " Skipped: 8", > " Exec: 0.02", > " Config retrieval: 0.62", > " Total: 0.64", > " Last run: 1529580285", > " Config: 1529580284", > "Gathering files modified after 2018-06-21 11:24:40.063433512 +0000", > "2018-06-21 11:24:45,859 DEBUG: 36349 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,iscsid_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,iscsid_config'", > "+ origin_of_time=/var/lib/config-data/iscsid.origin_of_time", > "+ touch /var/lib/config-data/iscsid.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console 
--modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,iscsid_config /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/iscsid", > "++ stat -c %y /var/lib/config-data/iscsid.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:24:40.063433512 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/iscsid", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/iscsid", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/iscsid.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/iscsid --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/iscsid --mtime=1970-01-01", > "2018-06-21 11:24:45,859 INFO: 36349 -- Removing container: docker-puppet-iscsid", > "2018-06-21 11:24:46,450 DEBUG: 36349 -- docker-puppet-iscsid", > "2018-06-21 11:24:46,450 INFO: 36349 -- Finished processing puppet configs for iscsid", > "2018-06-21 11:24:46,450 INFO: 36349 -- Starting configuration of glance_api using image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-21 11:24:46,451 DEBUG: 36349 -- config_volume glance_api", > "2018-06-21 11:24:46,451 DEBUG: 36349 -- puppet_tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config", > "2018-06-21 11:24:46,451 DEBUG: 36349 -- manifest include ::tripleo::profile::base::glance::api", > "2018-06-21 11:24:46,451 DEBUG: 36349 -- config_image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-21 11:24:46,451 DEBUG: 36349 -- volumes []", > "2018-06-21 11:24:46,451 INFO: 36349 -- Removing container: docker-puppet-glance_api", > "2018-06-21 11:24:46,514 INFO: 36349 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-21 11:24:50,604 
DEBUG: 36348 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-api", > "5e7b63a88a76: Pulling fs layer", > "56e05018c234: Pulling fs layer", > "56e05018c234: Download complete", > "5e7b63a88a76: Verifying Checksum", > "5e7b63a88a76: Download complete", > "5e7b63a88a76: Pull complete", > "56e05018c234: Pull complete", > "Digest: sha256:183deb2657acebac30853e0973dad9bbf1f1f1288cff99eeb24fb4ae2fc7b1d3", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-21 11:24:50,607 DEBUG: 36348 -- NET_HOST enabled", > "2018-06-21 11:24:50,607 DEBUG: 36348 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-cinder --env PUPPET_TAGS=file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line --env NAME=cinder --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpaQF2l5:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-21 11:24:52,240 DEBUG: 36349 -- 
Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-glance-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-glance-api", > "a5deab52212a: Pulling fs layer", > "8b31454e1757: Pulling fs layer", > "8b31454e1757: Verifying Checksum", > "8b31454e1757: Download complete", > "a5deab52212a: Verifying Checksum", > "a5deab52212a: Download complete", > "a5deab52212a: Pull complete", > "8b31454e1757: Pull complete", > "Digest: sha256:266d9d00d90cc84effdabd7cad9bea244a8fb918a029a3d2bafa4e2af9a72e77", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-21 11:24:52,243 DEBUG: 36349 -- NET_HOST enabled", > "2018-06-21 11:24:52,244 DEBUG: 36349 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-glance_api --env PUPPET_TAGS=file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config --env NAME=glance_api --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpVI3wPz:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-21 11:24:57,631 DEBUG: 36350 -- Notice: hiera(): Cannot load backend 
module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.45 seconds", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/host]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/port]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/workers]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/max_limit]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[database/event_time_to_live]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_api_paste_ini[pipeline:main/pipeline]/ensure: created", > "Notice: /Stage[main]/Panko::Expirer/Cron[panko-expirer]/ensure: created", > "Notice: /Stage[main]/Panko::Logging/Oslo::Log[panko_config]/Panko_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Panko::Db/Oslo::Db[panko_config]/Panko_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Panko::Policy/Oslo::Policy[panko_config]/Panko_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/username]/ensure: created", > "Notice: 
/Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Oslo::Middleware[panko_config]/Panko_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}83ed74d75e6969c931075bd7f8c4c5c6'", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/File[/var/www/cgi-bin/panko]/ensure: created", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/File[panko_wsgi]/ensure: defined content as '{md5}e6f446b6267321fd2251a3e83021181a'", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/Apache::Vhost[panko_wsgi]/Concat[10-panko_wsgi.conf]/File[/etc/httpd/conf.d/10-panko_wsgi.conf]/ensure: defined content as '{md5}bfdade05977c387c2e864c291e53d1ec'", > "Notice: Applied catalog in 1.09 seconds", > " Total: 101", > " Success: 101", > " Changed: 101", > " Out of sync: 101", > " Total: 255", > " Panko api paste ini: 0.00", > " Panko config: 0.20", > " File: 0.35", > " Last run: 1529580296", > " Config retrieval: 4.01", > " Total: 4.64", > " Config: 1529580291", > "Gathering files modified after 2018-06-21 11:24:45.724468085 +0000", > "2018-06-21 11:24:57,631 DEBUG: 36350 -- + mkdir -p /etc/puppet", > "+ '[' -n 
file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config'", > "+ origin_of_time=/var/lib/config-data/panko.origin_of_time", > "+ touch /var/lib/config-data/panko.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/panko/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/panko.pp\", 32]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/panko/manifests/db.pp\", 59]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/panko/api.pp\", 83]", > "Warning: Scope(Class[Panko::Api]): This Class is deprecated and will be removed in future releases.", > "Warning: Scope(Class[Panko::Keystone::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/panko", > "++ stat -c %y /var/lib/config-data/panko.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:24:45.724468085 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/panko", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/panko", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/panko.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/panko --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/panko --mtime=1970-01-01", > "2018-06-21 11:24:57,631 INFO: 36350 -- Removing container: docker-puppet-panko", > "2018-06-21 11:24:57,682 DEBUG: 36350 -- docker-puppet-panko", > "2018-06-21 11:24:57,682 INFO: 36350 -- Finished processing puppet configs for panko", > "2018-06-21 11:24:57,682 INFO: 36350 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-21 11:24:57,682 DEBUG: 36350 -- config_volume crond", > "2018-06-21 11:24:57,682 DEBUG: 36350 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-06-21 11:24:57,682 DEBUG: 36350 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-21 11:24:57,682 DEBUG: 36350 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-21 11:24:57,683 DEBUG: 36350 -- volumes []", > "2018-06-21 11:24:57,683 INFO: 36350 -- Removing container: docker-puppet-crond", > "2018-06-21 11:24:57,743 INFO: 36350 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-21 11:24:58,228 DEBUG: 36350 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "a94d9ea04263: Pulling fs layer", > "a94d9ea04263: Verifying Checksum", > "a94d9ea04263: Download complete", > "a94d9ea04263: Pull complete", > "Digest: sha256:cbc58f1f133447db6c3e634ca05251825f6a2ede8528959b5cd6e0cb1c3de3ba", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-21 11:24:58,231 DEBUG: 36350 -- NET_HOST enabled", > "2018-06-21 11:24:58,231 DEBUG: 36350 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpnq5RCK:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-21 11:25:02,722 DEBUG: 36349 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.17 seconds", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_host]/ensure: created", > "Notice: 
/Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_port]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/workers]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_image_direct_url]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_multiple_locations]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_cache_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enabled_import_methods]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/node_staging_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_member_quota]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v1_api]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v2_api]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/os_region_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/stores]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_cache_config[glance_store/os_region_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/registry_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_cache_config[DEFAULT/registry_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[paste_deploy/flavor]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_user]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_pool]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/default_store]/ensure: created", > 
"Notice: /Stage[main]/Glance::Policy/Oslo::Policy[glance_api_config]/Glance_api_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Db/Oslo::Db[glance_api_config]/Glance_api_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_file]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_file]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/username]/ensure: created", > "Notice: 
/Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Oslo::Middleware[glance_api_config]/Glance_api_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Rabbit[glance_api_config]/Glance_api_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Default[glance_api_config]/Glance_api_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: Applied catalog in 2.48 seconds", > " Total: 44", > " Success: 44", > " Out of sync: 44", > " Changed: 44", > " Skipped: 59", > " Glance cache config: 0.13", > " Last run: 1529580301", > " Glance api config: 2.11", > " Config retrieval: 2.48", > " Total: 4.79", > " Config: 1529580296", > "Gathering files modified after 2018-06-21 11:24:52.430508067 +0000", > "2018-06-21 11:25:02,723 DEBUG: 36349 -- + mkdir -p /etc/puppet", > "+ '[' -n 
file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config'", > "+ origin_of_time=/var/lib/config-data/glance_api.origin_of_time", > "+ touch /var/lib/config-data/glance_api.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/glance/manifests/config.pp\", 48]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/glance/api.pp\", 202]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/glance/manifests/api/db.pp\", 69]:[\"/etc/puppet/modules/glance/manifests/api.pp\", 371]", > "Warning: Unknown variable: 'default_store_real'. at /etc/puppet/modules/glance/manifests/api.pp:438:9", > "Warning: Scope(Class[Glance::Api]): default_store not provided, it will be automatically set to http", > "Warning: Scope(Class[Glance::Api::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/glance_api", > "++ stat -c %y /var/lib/config-data/glance_api.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:24:52.430508067 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/glance_api", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/glance_api", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/glance_api.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/glance_api --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/glance_api --mtime=1970-01-01", > "2018-06-21 11:25:02,723 INFO: 36349 -- Removing container: docker-puppet-glance_api", > "2018-06-21 11:25:02,758 DEBUG: 36349 -- docker-puppet-glance_api", > "2018-06-21 11:25:02,758 INFO: 36349 -- Finished processing puppet configs for glance_api", > "2018-06-21 11:25:02,759 INFO: 36349 -- Starting configuration of rabbitmq using image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-21 11:25:02,759 DEBUG: 36349 -- config_volume rabbitmq", > "2018-06-21 11:25:02,759 DEBUG: 36349 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-06-21 11:25:02,759 DEBUG: 36349 -- manifest ['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }", > "2018-06-21 11:25:02,759 DEBUG: 36349 -- config_image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-21 11:25:02,759 DEBUG: 36349 -- volumes []", > "2018-06-21 11:25:02,759 INFO: 36349 -- Removing container: docker-puppet-rabbitmq", > "2018-06-21 11:25:02,826 INFO: 36349 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-21 11:25:03,806 DEBUG: 36350 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- 
hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.45 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}13ae5d5b43716a32da6855edd3f15758'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.03 seconds", > " Skipped: 7", > " Total: 9", > " Config retrieval: 0.55", > " Total: 0.56", > " Last run: 1529580303", > " Config: 1529580302", > "Gathering files modified after 2018-06-21 11:24:58.421542912 +0000", > "2018-06-21 11:25:03,806 DEBUG: 36350 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:24:58.421542912 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/crond --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-06-21 11:25:03,806 INFO: 36350 -- Removing container: docker-puppet-crond", > "2018-06-21 
11:25:03,854 DEBUG: 36350 -- docker-puppet-crond", > "2018-06-21 11:25:03,855 INFO: 36350 -- Finished processing puppet configs for crond", > "2018-06-21 11:25:03,855 INFO: 36350 -- Starting configuration of haproxy using image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-21 11:25:03,855 DEBUG: 36350 -- config_volume haproxy", > "2018-06-21 11:25:03,855 DEBUG: 36350 -- puppet_tags file,file_line,concat,augeas,cron,haproxy_config", > "2018-06-21 11:25:03,855 DEBUG: 36350 -- manifest exec {'wait-for-settle': command => '/bin/true' }", > "2018-06-21 11:25:03,855 DEBUG: 36350 -- config_image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-21 11:25:03,855 DEBUG: 36350 -- volumes [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']", > "2018-06-21 11:25:03,855 INFO: 36350 -- Removing container: docker-puppet-haproxy", > "2018-06-21 11:25:03,925 INFO: 36350 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-21 11:25:07,413 DEBUG: 36349 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-rabbitmq ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-rabbitmq", > "e603d701fd04: Pulling fs layer", > "e603d701fd04: Verifying Checksum", > "e603d701fd04: Download complete", > "e603d701fd04: Pull complete", > "Digest: sha256:4e07b8b4fd82b69e2a7ba105447776e730b0dd8fffa70a2f13c5c0e612b1ccdc", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-21 11:25:07,416 DEBUG: 36349 -- NET_HOST enabled", > "2018-06-21 11:25:07,416 DEBUG: 36349 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-rabbitmq --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=rabbitmq --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp2mdJE5:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-21 11:25:07,762 DEBUG: 36348 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.06 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Lvm/Augeas[udev options in lvm.conf]/returns: executed successfully", > "Notice: 
/Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}7dbba0ad6f107a5d6775f284addccc35'", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/api_paste_config]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/storage_availability_zone]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/default_availability_zone]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/enable_v3_api]/ensure: created", > "Notice: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_servers]/ensure: created", > "Notice: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_version]/ensure: created", > "Notice: /Stage[main]/Cinder::Cron::Db_purge/Cron[cinder-manage db purge]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_listen]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_workers]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/nova_catalog_info]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[key_manager/backend]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_user]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_chunk_size]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_pool]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_unit]/ensure: created", > "Notice: 
/Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_count]/ensure: created", > "Notice: /Stage[main]/Cinder::Scheduler/Cinder_config[DEFAULT/scheduler_driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Backends/Cinder_config[DEFAULT/enabled_backends]/ensure: created", > "Notice: /Stage[main]/Cinder::Backends/Cinder_config[tripleo_ceph/backend_host]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: 
/Stage[main]/Cinder::Policy/Oslo::Policy[cinder_config]/Cinder_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Oslo::Middleware[cinder_config]/Cinder_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/File[cinder_wsgi]/ensure: defined content as '{md5}870efbe437d63cd260287cd36472d7b1'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_backend_name]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_driver]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_user]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_pool]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_secret_uuid]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File[/etc/sysconfig/openstack-cinder-volume]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]/ensure: created", > "Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/Apache::Vhost[cinder_wsgi]/Concat[10-cinder_wsgi.conf]/File[/etc/httpd/conf.d/10-cinder_wsgi.conf]/ensure: defined content as '{md5}083eb77078c11a38e340afdc95d1c1aa'", > "Notice: Applied catalog in 5.27 seconds", > " Total: 134", > " Success: 134", > " Changed: 134", > " Out of sync: 134", > " Skipped: 36", > " Total: 374", > " File line: 0.00", > " File: 0.36", > " Augeas: 0.69", > " Last run: 1529580306", > " Cinder config: 3.59", > " Total: 9.41", > "Gathering files modified after 2018-06-21 11:24:50.820498564 +0000", > "2018-06-21 11:25:07,762 DEBUG: 36348 -- + mkdir -p /etc/puppet", > "+ '[' -n 
file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line'", > "+ origin_of_time=/var/lib/config-data/cinder.origin_of_time", > "+ touch /var/lib/config-data/cinder.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line /etc/config.pp", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/db.pp\", 69]:[\"/etc/puppet/modules/cinder/manifests/init.pp\", 320]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/cinder.pp\", 127]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/api.pp\", 203]:[\"/etc/config.pp\", 2]", > "Warning: Scope(Class[Cinder::Api]): The nova_catalog_admin_info parameter has been deprecated and will be removed in the future release.", > "Warning: Scope(Class[Cinder::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Unknown variable: 'ensure'. at /etc/puppet/modules/cinder/manifests/backup.pp:83:18", > "Warning: Unknown variable: 'ensure'. 
at /etc/puppet/modules/cinder/manifests/volume.pp:64:18", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/cinder", > "++ stat -c %y /var/lib/config-data/cinder.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:24:50.820498564 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/cinder", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/cinder", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/cinder.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/cinder --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/cinder --mtime=1970-01-01", > "2018-06-21 11:25:07,762 INFO: 36348 -- Removing container: docker-puppet-cinder", > "2018-06-21 11:25:08,627 DEBUG: 36350 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-haproxy ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-haproxy", > "a82042577283: Pulling fs layer", > "a82042577283: Verifying Checksum", > "a82042577283: Download complete", > "a82042577283: Pull complete", > "Digest: sha256:79a7901cc6403d11b4e7f6978d7e99a1879972ccb61f430f5660695c8683d7a0", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-21 11:25:08,630 DEBUG: 36350 -- NET_HOST enabled", > "2018-06-21 11:25:08,630 DEBUG: 36350 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-haproxy --env PUPPET_TAGS=file,file_line,concat,augeas,cron,haproxy_config --env NAME=haproxy --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpc69WRK:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume 
/var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/ipa/ca.crt:/etc/ipa/ca.crt:ro --volume /etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro --volume /etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro --volume /etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-21 11:25:08,642 DEBUG: 36348 -- docker-puppet-cinder", > "2018-06-21 11:25:08,642 INFO: 36348 -- Finished processing puppet configs for cinder", > "2018-06-21 11:25:08,642 INFO: 36348 -- Starting configuration of swift using image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-21 11:25:08,642 DEBUG: 36348 -- config_volume swift", > "2018-06-21 11:25:08,642 DEBUG: 36348 -- puppet_tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server", > "2018-06-21 11:25:08,642 DEBUG: 36348 -- manifest include ::tripleo::profile::base::swift::proxy", > "include ::tripleo::profile::base::swift::storage", > "2018-06-21 11:25:08,643 DEBUG: 36348 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-21 11:25:08,643 DEBUG: 36348 -- volumes []", > "2018-06-21 11:25:08,643 INFO: 36348 -- Removing 
container: docker-puppet-swift", > "2018-06-21 11:25:08,702 INFO: 36348 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-21 11:25:08,705 DEBUG: 36348 -- NET_HOST enabled", > "2018-06-21 11:25:08,705 DEBUG: 36348 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-swift --env PUPPET_TAGS=file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server --env NAME=swift --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpO6J8OF:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-21 11:25:16,980 DEBUG: 36348 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.74 seconds", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/api_class]/ensure: created", > "Notice: 
/Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/username]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_name]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[filter:cache/memcache_servers]/value: value changed '127.0.0.1:11211' to '172.17.1.16:11211'", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/auto_create_account_prefix]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/concurrency]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/expiring_objects_account_name]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/interval]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/process]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/processes]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/reclaim_age]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/recon_cache_path]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/report_interval]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_level]/ensure: created", > "Notice: 
/Stage[main]/Rsync::Server/Xinetd::Service[rsync]/File[/rsync]/ensure: defined content as '{md5}9389435d40399d3f3b3a0e9944346f87'", > "Notice: /Stage[main]/Rsync::Server/Concat[/etc/rsyncd.conf]/File[/etc/rsyncd.conf]/content: content changed '{md5}c63fccb45c0dcbbbe17d0f4bdba920ec' to '{md5}9b8125614d1860f206abb9767c7b2557'", > "Notice: /Stage[main]/Swift/Swift_config[swift-hash/swift_hash_path_suffix]/value: value changed '%SWIFT_HASH_PATH_SUFFIX%' to 'OJ2m4Tm9Ho10GUzJVC46bPi1G'", > "Notice: /Stage[main]/Swift/Swift_config[swift-constraints/max_header_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/bind_ip]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/workers]/value: value changed '8' to 'auto'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_level]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_headers]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_address]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[pipeline:main/pipeline]/value: value changed 'catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server' to 'catch_errors healthcheck proxy-logging cache ratelimit bulk tempurl formpost authtoken keystone staticweb copy container_quotas account_quotas slo dlo versioned_writes proxy-logging proxy-server'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_facility]/ensure: created", > "Notice: 
/Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_level]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_address]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/log_handoffs]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/allow_account_management]/value: value changed 'true' to 'True'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/account_autocreate]/value: value changed 'true' to 'True'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/node_timeout]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Cache/Swift_proxy_config[filter:cache/memcache_servers]/value: value changed '127.0.0.1:11211' to '172.17.1.16:11211'", > "Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/operator_roles]/value: value changed 'admin, SwiftOperator' to 'admin, swiftoperator, ResellerAdmin'", > "Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/reseller_prefix]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/File[/var/cache/swift]/mode: mode changed '0755' to '0700'", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/signing_dir]/value: value changed '/tmp/keystone-signing-swift' to '/var/cache/swift'", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_plugin]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_domain_id]/ensure: 
created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/username]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/password]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/delay_auth_decision]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/cache]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/include_service_catalog]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/url_base]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/clock_accuracy]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/max_sleep_time_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/log_sleep_time_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/rate_buffer_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/account_ratelimit]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Formpost/Swift_proxy_config[filter:formpost/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_containers_per_extraction]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_failed_extractions]/ensure: 
created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_deletes_per_request]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/yield_frequency]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Versioned_writes/Swift_proxy_config[filter:versioned_writes/allow_versioned_writes]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_segments]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/min_segment_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_after_segment]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_segments_per_sec]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_get_time]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_after_segment]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_segments_per_sec]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/max_get_time]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Copy/Swift_proxy_config[filter:copy/object_post_as_copy]/value: value changed 'false' to 'True'", > "Notice: /Stage[main]/Swift::Proxy::Container_quotas/Swift_proxy_config[filter:container_quotas/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Account_quotas/Swift_proxy_config[filter:account_quotas/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/disable_encryption]/ensure: created", > 
"Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/keymaster_config_path]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/auth_pipeline_check]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/auth_uri]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node/d1]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/File[/etc/swift/account-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Container/Swift::Storage::Generic[container]/File[/etc/swift/container-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/File[/etc/swift/object-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/etc/swift/account-server.conf]/ensure: defined content as '{md5}83d99714b5d1e495a61737a51a8170ec'", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/etc/swift/container-server.conf]/ensure: defined content as '{md5}578dba3f3fc75f3e5b6335031df3cec8'", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat[/etc/swift/object-server.conf]/File[/etc/swift/object-server.conf]/ensure: defined content as '{md5}69f91109e3d7181d7f2d08af24922938'", > "Notice: Applied catalog in 0.45 seconds", > " Total: 97", > " Success: 
97", > " Total: 192", > " Skipped: 37", > " Out of sync: 97", > " Changed: 97", > " Swift config: 0.00", > " Swift keymaster config: 0.01", > " Swift object expirer config: 0.01", > " Swift proxy config: 0.17", > " Last run: 1529580316", > " Config retrieval: 2.09", > " Total: 2.32", > " Config: 1529580313", > "Gathering files modified after 2018-06-21 11:25:08.903601955 +0000", > "2018-06-21 11:25:16,980 DEBUG: 36348 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server'", > "+ origin_of_time=/var/lib/config-data/swift.origin_of_time", > "+ touch /var/lib/config-data/swift.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 147]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/swift/manifests/proxy.pp\", 163]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 148]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/proxy.pp\", 165]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 148]", > "Warning: Unknown variable: 'methods_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:100:56", > "Warning: Unknown variable: 'incoming_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:101:56", > "Warning: Unknown variable: 'incoming_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:102:56", > "Warning: Unknown variable: 'outgoing_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:103:56", > "Warning: Unknown variable: 'outgoing_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:104:56", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the object storage server has changed from 6000 to 6200 and will be changed in a later release", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the container storage server has changed from 6001 to 6201 and will be changed in a later release", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the account storage server has changed from 6002 to 6202 and will be changed in a later release", > "Warning: Class 'xinetd' is already defined at /etc/config.pp:6; cannot redefine at /etc/puppet/modules/xinetd/manifests/init.pp:12", > "Warning: Unknown variable: 'xinetd::params::default_user'. at /etc/puppet/modules/xinetd/manifests/service.pp:110:14", > "Warning: Unknown variable: 'xinetd::params::default_group'. at /etc/puppet/modules/xinetd/manifests/service.pp:116:15", > "Warning: Unknown variable: 'xinetd::confdir'. 
at /etc/puppet/modules/xinetd/manifests/service.pp:161:13", > "Warning: Unknown variable: 'xinetd::service_name'. at /etc/puppet/modules/xinetd/manifests/service.pp:166:24", > "Warning: Unknown variable: 'xinetd::confdir'. at /etc/puppet/modules/xinetd/manifests/service.pp:167:21", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/swift/manifests/storage/server.pp\", 183]:", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/storage/server.pp\", 197]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/swift", > "++ stat -c %y /var/lib/config-data/swift.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:25:08.903601955 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/swift", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/swift", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/swift.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/swift --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/swift --mtime=1970-01-01", > "2018-06-21 11:25:16,980 INFO: 36348 -- Removing container: docker-puppet-swift", > "2018-06-21 11:25:17,025 DEBUG: 36348 -- docker-puppet-swift", > "2018-06-21 11:25:17,025 INFO: 36348 -- Finished processing puppet configs for swift", > "2018-06-21 11:25:17,025 INFO: 36348 -- Starting configuration of heat_api_cfn using image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-21 11:25:17,026 DEBUG: 36348 -- config_volume heat_api_cfn", > "2018-06-21 11:25:17,026 DEBUG: 36348 -- puppet_tags 
file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-06-21 11:25:17,026 DEBUG: 36348 -- manifest include ::tripleo::profile::base::heat::api_cfn", > "2018-06-21 11:25:17,026 DEBUG: 36348 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-21 11:25:17,026 DEBUG: 36348 -- volumes []", > "2018-06-21 11:25:17,026 INFO: 36348 -- Removing container: docker-puppet-heat_api_cfn", > "2018-06-21 11:25:17,087 INFO: 36348 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-21 11:25:17,183 DEBUG: 36350 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.45 seconds", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/content: content changed '{md5}1f337186b0e1ba5ee82760cb437fb810' to '{md5}3e602920be68dd9114246aadb54dcae7'", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/mode: mode changed '0644' to '0640'", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Resource_restart_flag[haproxy-clone]/File[/var/lib/tripleo]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Resource_restart_flag[haproxy-clone]/File[/var/lib/tripleo/pacemaker-restarts]/ensure: created", > "Notice: Applied catalog in 0.27 seconds", > " Skipped: 33", > " Total: 79", > " Config retrieval: 2.76", > " Total: 2.80", > "Gathering files modified after 2018-06-21 11:25:08.830601551 +0000", > "2018-06-21 11:25:17,184 DEBUG: 36350 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,haproxy_config ']'", > "+ TAGS='--tags 
file,file_line,concat,augeas,cron,haproxy_config'", > "+ origin_of_time=/var/lib/config-data/haproxy.origin_of_time", > "+ touch /var/lib/config-data/haproxy.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,haproxy_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/pacemaker/haproxy_with_vip.pp\", 65]:", > "Warning: Scope(Haproxy::Config[haproxy]): haproxy: The $merge_options parameter will default to true in the next major release. Please review the documentation regarding the implications.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/haproxy", > "++ stat -c %y /var/lib/config-data/haproxy.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:25:08.830601551 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/haproxy", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/haproxy", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/haproxy.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/haproxy --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/haproxy --mtime=1970-01-01", > "2018-06-21 11:25:17,184 INFO: 36350 -- Removing container: docker-puppet-haproxy", > "2018-06-21 11:25:17,222 DEBUG: 36350 -- docker-puppet-haproxy", > "2018-06-21 11:25:17,222 INFO: 36350 -- Finished processing puppet configs for haproxy", > "2018-06-21 11:25:17,222 INFO: 36350 -- Starting configuration of ceilometer using image 
192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-21 11:25:17,222 DEBUG: 36350 -- config_volume ceilometer", > "2018-06-21 11:25:17,222 DEBUG: 36350 -- puppet_tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config", > "2018-06-21 11:25:17,223 DEBUG: 36350 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "include ::tripleo::profile::base::ceilometer::agent::notification", > "2018-06-21 11:25:17,223 DEBUG: 36350 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-21 11:25:17,223 DEBUG: 36350 -- volumes []", > "2018-06-21 11:25:17,223 INFO: 36350 -- Removing container: docker-puppet-ceilometer", > "2018-06-21 11:25:17,292 INFO: 36350 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-21 11:25:17,721 DEBUG: 36348 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn", > "15497368e843: Already exists", > "4089b2a1d02c: Pulling fs layer", > "4089b2a1d02c: Verifying Checksum", > "4089b2a1d02c: Download complete", > "4089b2a1d02c: Pull complete", > "Digest: sha256:bbcf3cc8eeb6d8910642b40cfa9fe544a33bee49cfb4512abe49c5bf176ed8f0", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-21 11:25:17,723 DEBUG: 36348 -- NET_HOST enabled", > "2018-06-21 11:25:17,724 DEBUG: 36348 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat_api_cfn --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat_api_cfn --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp3lV3mL:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-21 11:25:19,729 DEBUG: 36350 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-central ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-central", > "333aa6b2b383: Pulling fs layer", > "1eb9ef5adcb4: Pulling fs layer", > "333aa6b2b383: Verifying Checksum", > "333aa6b2b383: Download complete", > "1eb9ef5adcb4: Verifying Checksum", > "1eb9ef5adcb4: Download complete", > "333aa6b2b383: Pull complete", > "1eb9ef5adcb4: Pull complete", > "Digest: sha256:3f638e03aaf1d7e303183e06ff1627a5a0efeaef228a7be1e9667ae62d7d6a1b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-21 11:25:19,732 DEBUG: 36350 -- NET_HOST enabled", > "2018-06-21 11:25:19,732 DEBUG: 36350 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-ceilometer --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpQOXOSI:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-21 11:25:20,035 DEBUG: 36349 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.90 seconds", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/group: group changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]/ensure: created", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]/ensure: defined content as '{md5}b126e4b8423a26246952d34c225c6fdd'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]/ensure: defined content as '{md5}12f8d1a1f9f57f23c1be6c7bf2286e73'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]/ensure: defined content as '{md5}44d4ef5cb86ab30e6127e83939ef09c4'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]/ensure: created", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]/ensure: defined content as '{md5}91d370d2c5a1af171c9d5b5985fca733'", > "Notice: 
/Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/ensure: defined content as '{md5}1030abc4db405b5f2969643e99bc7435'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/content: content changed '{md5}b346ec0a8320f85f795bf612f6b02da7' to '{md5}1e1a80b34927c980a0411cf7e41d2054'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/mode: mode changed '0644' to '0640'", > "Notice: Applied catalog in 0.05 seconds", > " Total: 12", > " Success: 12", > " Total: 20", > " Out of sync: 9", > " Changed: 9", > " Config retrieval: 1.07", > " Total: 1.10", > " Last run: 1529580319", > " Config: 1529580318", > "Gathering files modified after 2018-06-21 11:25:07.653595038 +0000", > "2018-06-21 11:25:20,035 DEBUG: 36349 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/rabbitmq.origin_of_time", > "+ touch /var/lib/config-data/rabbitmq.origin_of_time", > "Warning: ModuleLoader: module 'rabbitmq' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/rabbitmq", > "++ stat -c %y /var/lib/config-data/rabbitmq.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:25:07.653595038 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/rabbitmq", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/rabbitmq", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/rabbitmq.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/rabbitmq --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/rabbitmq --mtime=1970-01-01", > "2018-06-21 11:25:20,035 INFO: 36349 -- Removing container: docker-puppet-rabbitmq", > "2018-06-21 11:25:20,198 DEBUG: 36349 -- docker-puppet-rabbitmq", > "2018-06-21 11:25:20,198 INFO: 36349 -- Finished processing puppet configs for rabbitmq", > "2018-06-21 11:25:20,198 INFO: 36349 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-21 11:25:20,199 DEBUG: 36349 -- config_volume neutron", > "2018-06-21 11:25:20,199 DEBUG: 36349 -- puppet_tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-06-21 11:25:20,199 DEBUG: 36349 -- manifest include tripleo::profile::base::neutron::server", > "include ::tripleo::profile::base::neutron::plugins::ml2", > "include tripleo::profile::base::neutron::dhcp", > "include tripleo::profile::base::neutron::l3", > "include tripleo::profile::base::neutron::metadata", > "include ::tripleo::profile::base::neutron::ovs", > "2018-06-21 11:25:20,199 DEBUG: 36349 -- 
config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-21 11:25:20,199 DEBUG: 36349 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-06-21 11:25:20,199 INFO: 36349 -- Removing container: docker-puppet-neutron", > "2018-06-21 11:25:20,266 INFO: 36349 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-21 11:25:24,628 DEBUG: 36349 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-server ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-server", > "ea1d509b6f44: Pulling fs layer", > "e9f9993bb931: Pulling fs layer", > "e9f9993bb931: Verifying Checksum", > "e9f9993bb931: Download complete", > "ea1d509b6f44: Verifying Checksum", > "ea1d509b6f44: Download complete", > "ea1d509b6f44: Pull complete", > "e9f9993bb931: Pull complete", > "Digest: sha256:af12594500608f07f8d38590e2c9b2983e5d81ae8b63aec042f36411b0e76adc", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-21 11:25:24,631 DEBUG: 36349 -- NET_HOST enabled", > "2018-06-21 11:25:24,631 DEBUG: 36349 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-neutron --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 --env NAME=neutron --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpiCimUL:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-21 11:25:27,759 DEBUG: 36350 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.38 seconds", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[database/event_time_to_live]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[database/metering_time_to_live]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/filter_project]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/archive_policy]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/resources_definition_file]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/interface]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Notification/File[event_pipeline]/ensure: defined content as '{md5}dafea5c96d5da5251f9b8a275c6d71aa'", > "Notice: /Stage[main]/Ceilometer::Agent::Notification/Ceilometer_config[notification/ack_on_event_error]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created", > "Notice: Applied catalog in 0.64 seconds", > " Total: 31", > " Success: 31", > " Total: 158", > " Out of sync: 31", > " Changed: 31", > " Skipped: 35", > " Ceilometer config: 0.52", > " Config retrieval: 1.61", > " Last run: 1529580326", > " Total: 2.14", > " Config: 1529580324", > "Gathering files modified after 2018-06-21 11:25:19.954661641 +0000", > "2018-06-21 11:25:27,759 DEBUG: 36350 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config'", > "+ origin_of_time=/var/lib/config-data/ceilometer.origin_of_time", > "+ touch /var/lib/config-data/ceilometer.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config /etc/config.pp", > "Warning: ModuleLoader: module 'ceilometer' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/config.pp\", 35]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer.pp\", 111]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/agent/notification.pp\", 118]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer/agent/notification.pp\", 34]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/ceilometer", > "++ stat -c %y /var/lib/config-data/ceilometer.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:25:19.954661641 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/ceilometer", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/ceilometer", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/ceilometer.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/ceilometer --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/ceilometer --mtime=1970-01-01", > "2018-06-21 11:25:27,759 INFO: 36350 -- Removing container: docker-puppet-ceilometer", > "2018-06-21 11:25:27,795 DEBUG: 36350 -- docker-puppet-ceilometer", > "2018-06-21 11:25:27,795 INFO: 36350 -- Finished processing puppet configs for ceilometer", > "2018-06-21 11:25:31,485 DEBUG: 36348 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: /Stage[main]/Heat::Api_cfn/Heat_config[heat_api_cfn/bind_host]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}6bfb91ec3128b1252913d8ba04a9c38f'", > "Notice: 
/Stage[main]/Apache::Mod::Headers/Apache::Mod[headers]/File[headers.load]/ensure: defined content as '{md5}96094c96352002c43ada5bdf8650ff38'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[/var/www/cgi-bin/heat]/ensure: created", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[heat_api_cfn_wsgi]/ensure: defined content as '{md5}c3ae61ab87649c8cdfab8977da2b194b'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/Apache::Vhost[heat_api_cfn_wsgi]/Concat[10-heat_api_cfn_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_cfn_wsgi.conf]/ensure: defined content as '{md5}dec9ed78f8f4a5b645106fa3b8a3a776'", > "Notice: Applied catalog in 2.53 seconds", > " Total: 337", > " File: 0.22", > " Heat config: 1.57", > " Last run: 1529580329", > " Config retrieval: 4.56", > " Total: 6.41", > " Config: 1529580322", > "Gathering files modified after 2018-06-21 11:25:17.920650848 +0000", > "2018-06-21 11:25:31,485 DEBUG: 36348 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/heat_api_cfn.origin_of_time", > "+ touch /var/lib/config-data/heat_api_cfn.origin_of_time", > " with Stdlib::Compat::Integer. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/heat/manifests/wsgi/apache_api_cfn.pp\", 125]:[\"/etc/config.pp\", 2]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat_api_cfn", > "++ stat -c %y /var/lib/config-data/heat_api_cfn.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:25:17.920650848 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat_api_cfn", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat_api_cfn", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat_api_cfn.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/heat_api_cfn --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/heat_api_cfn --mtime=1970-01-01", > "2018-06-21 11:25:31,486 INFO: 36348 -- Removing container: docker-puppet-heat_api_cfn", > "2018-06-21 11:25:31,530 DEBUG: 36348 -- docker-puppet-heat_api_cfn", > "2018-06-21 11:25:31,530 INFO: 36348 -- Finished processing puppet configs for heat_api_cfn", > "2018-06-21 11:25:37,054 DEBUG: 36349 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.55 seconds", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: 
created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_url]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/username]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/password]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/endpoint_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/tenant_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_status_changes]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_data_changes]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/l3_ha]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/max_l3_agents_per_router]/ensure: 
created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/api_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/rpc_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_scheduler_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_distributed]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/enable_dvr]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/allow_automatic_l3agent_failover]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_firewall_rule]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_network_gateway]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_packet_filter]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_isolated_metadata]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/force_metadata]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_metadata_network]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/resync_interval]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/interface_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_dns_servers]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/interface_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/agent_mode]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/tunnel_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/vxlan_udp_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created", > "Notice: 
/Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_password]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_userid]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Neutron::Policy/Oslo::Policy[neutron_config]/Neutron_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Oslo::Middleware[neutron_config]/Neutron_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created", > "Notice: Applied catalog in 1.71 seconds", > " Total: 107", > " Success: 107", > " Changed: 107", > " Out of sync: 107", > " Total: 359", > " Skipped: 44", > " Neutron api config: 0.00", > " Neutron l3 agent config: 0.02", > " Neutron agent ovs: 0.02", > " Neutron metadata agent config: 0.02", > " Neutron plugin ml2: 0.03", > " Neutron dhcp agent config: 0.10", > " Neutron config: 1.23", > " Last run: 1529580335", > " Config retrieval: 4.04", > " Total: 5.52", > " Config: 1529580330", > "Gathering files modified after 2018-06-21 11:25:24.828687158 +0000", > "2018-06-21 11:25:37,054 DEBUG: 36349 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2'", > "+ origin_of_time=/var/lib/config-data/neutron.origin_of_time", > "+ touch /var/lib/config-data/neutron.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 /etc/config.pp", > "Warning: Scope(Class[Neutron]): neutron::rabbit_host, 
neutron::rabbit_hosts, neutron::rabbit_password, neutron::rabbit_port, neutron::rabbit_user, neutron::rabbit_virtual_host and neutron::rpc_backend are deprecated. Please use neutron::default_transport_url instead.", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/neutron/manifests/init.pp\", 530]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/server.pp\", 104]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/config.pp\", 132]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp\", 141]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/db.pp\", 69]:[\"/etc/puppet/modules/neutron/manifests/server.pp\", 315]", > "Warning: Scope(Class[Neutron::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Unknown variable: '::neutron::params::metadata_agent_package'. at /etc/puppet/modules/neutron/manifests/agents/metadata.pp:122:6", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 219]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/ovs.pp\", 59]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/neutron", > "++ stat -c %y /var/lib/config-data/neutron.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:25:24.828687158 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/neutron", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/neutron", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/neutron.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/neutron --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/neutron --mtime=1970-01-01", > "2018-06-21 11:25:37,054 INFO: 36349 -- Removing container: docker-puppet-neutron", > "2018-06-21 11:25:37,092 DEBUG: 36349 -- docker-puppet-neutron", > "2018-06-21 11:25:37,092 INFO: 36349 -- Finished processing puppet configs for neutron", > "2018-06-21 11:25:37,093 INFO: 36349 -- Starting configuration of horizon using image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-21 11:25:37,093 DEBUG: 36349 -- config_volume horizon", > "2018-06-21 11:25:37,093 DEBUG: 36349 -- puppet_tags file,file_line,concat,augeas,cron,horizon_config", > "2018-06-21 11:25:37,093 DEBUG: 36349 -- manifest include ::tripleo::profile::base::horizon", > "2018-06-21 11:25:37,093 DEBUG: 36349 -- config_image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-21 11:25:37,093 DEBUG: 36349 -- volumes []", > "2018-06-21 11:25:37,093 INFO: 36349 -- Removing container: docker-puppet-horizon", > "2018-06-21 11:25:37,155 INFO: 36349 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-21 11:25:42,344 DEBUG: 36349 -- Trying to pull repository 
192.168.24.1:8787/rhosp14/openstack-horizon ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-horizon", > "76e0e41ffb2e: Pulling fs layer", > "76e0e41ffb2e: Download complete", > "76e0e41ffb2e: Pull complete", > "Digest: sha256:985bc1250661a931ac3368fe39a6651116c123db6c18789bfdb7da2c61741b0d", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-21 11:25:42,347 DEBUG: 36349 -- NET_HOST enabled", > "2018-06-21 11:25:42,347 DEBUG: 36349 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-horizon --env PUPPET_TAGS=file,file_line,concat,augeas,cron,horizon_config --env NAME=horizon --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmprZkLOh:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-21 11:25:52,341 DEBUG: 36349 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.39 seconds", > "Notice: /Stage[main]/Apache::Mod::Remoteip/File[remoteip.conf]/ensure: defined content as 
'{md5}5e70f28d6cca0d978242202de6e8e0e3'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon]/mode: mode changed '0750' to '0751'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon/horizon.log]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}05a4d6cbec792391f771b5d1a68687d9'", > "Notice: /Stage[main]/Apache::Mod::Remoteip/Apache::Mod[remoteip]/File[remoteip.load]/ensure: defined content as '{md5}118eb7518a1d018a162d23dfe32c4bad'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/content: content changed '{md5}601e633104479c5b9ee828b4bae911ac' to '{md5}4fe0349dab6bd1d72bdf0b99a86ce08e'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/owner: owner changed 'horizon' to 'apache'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/group: group changed 'horizon' to 'apache'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/etc/httpd/conf.d/openstack-dashboard.conf]/content: content changed '{md5}4cb4b1391d3553951208fad1ce791e5c' to '{md5}3f4b1c53d0e150dae37b3ee5dcaf622d'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat[10-horizon_vhost.conf]/File[/etc/httpd/conf.d/10-horizon_vhost.conf]/ensure: defined content as '{md5}bc5cb3b80367d89e79e323750fcbb4f0'", > "Notice: Applied catalog in 0.72 seconds", > " Total: 86", > " Success: 86", > " Total: 172", > " Out of sync: 84", > " Changed: 84", > " File: 0.20", > " Last run: 1529580351", > " Config retrieval: 2.78", > " Total: 2.98", > " Config: 1529580347", > "Gathering files modified after 2018-06-21 11:25:42.523775828 +0000", > "2018-06-21 11:25:52,341 DEBUG: 36349 -- + mkdir -p /etc/puppet", > "+ '[' -n 
file,file_line,concat,augeas,cron,horizon_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,horizon_config'", > "+ origin_of_time=/var/lib/config-data/horizon.origin_of_time", > "+ touch /var/lib/config-data/horizon.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,horizon_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/horizon.pp\", 97]:[\"/etc/config.pp\", 2]", > "Warning: ModuleLoader: module 'horizon' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: Undefined variable ''; ", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 559]:[\"/etc/config.pp\", 2]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 560]:[\"/etc/config.pp\", 2]", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 562]:[\"/etc/config.pp\", 2]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/horizon", > "++ stat -c %y /var/lib/config-data/horizon.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-21 11:25:42.523775828 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/horizon", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/horizon", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/horizon.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/horizon --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/horizon --mtime=1970-01-01", > "2018-06-21 11:25:52,342 INFO: 36349 -- Removing container: docker-puppet-horizon", > "2018-06-21 11:25:52,389 DEBUG: 36349 -- docker-puppet-horizon", > "2018-06-21 11:25:52,390 INFO: 36349 -- Finished processing puppet configs for horizon", > "2018-06-21 11:25:52,391 DEBUG: 36347 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-06-21 11:25:52,391 DEBUG: 36347 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-06-21 11:25:52,394 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/memcached/etc/sysconfig.md5sum for config_volume /var/lib/config-data/memcached/etc/sysconfig", > "2018-06-21 11:25:52,394 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-06-21 11:25:52,394 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-06-21 11:25:52,395 DEBUG: 36347 -- Updating config hash for mysql_bootstrap, config_volume=heat_api_cfn hash=3d0d90fbc91e503875356f69c121b5d6", > "2018-06-21 
11:25:52,395 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-06-21 11:25:52,395 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-06-21 11:25:52,395 DEBUG: 36347 -- Updating config hash for rabbitmq_bootstrap, config_volume=heat_api_cfn hash=4cfc58610a6ee8abac132483d008d519", > "2018-06-21 11:25:52,395 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/memcached/etc/sysconfig.md5sum for config_volume /var/lib/config-data/memcached/etc/sysconfig", > "2018-06-21 11:25:52,397 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_placement.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_placement", > "2018-06-21 11:25:52,397 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/nova_placement.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_placement", > "2018-06-21 11:25:52,397 DEBUG: 36347 -- Updating config hash for nova_placement, config_volume=heat_api_cfn hash=cb9132c83fe00c38e2a3e1886a257011", > "2018-06-21 11:25:52,398 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-06-21 11:25:52,398 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-06-21 11:25:52,398 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/heat/etc/heat.md5sum for config_volume /var/lib/config-data/heat/etc/heat", > "2018-06-21 11:25:52,398 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/heat/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/heat/etc/my.cnf.d", > "2018-06-21 11:25:52,398 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data.md5sum for config_volume 
/var/lib/config-data", > "2018-06-21 11:25:52,398 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/swift/etc", > "2018-06-21 11:25:52,398 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-06-21 11:25:52,398 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-06-21 11:25:52,399 DEBUG: 36347 -- Updating config hash for keystone_cron, config_volume=heat_api_cfn hash=f97096ae4b768431afe77865ce7ac26a", > "2018-06-21 11:25:52,399 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/panko/etc.md5sum for config_volume /var/lib/config-data/panko/etc", > "2018-06-21 11:25:52,399 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/panko/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/panko/etc/my.cnf.d", > "2018-06-21 11:25:52,399 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-06-21 11:25:52,399 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-06-21 11:25:52,399 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-06-21 11:25:52,399 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-06-21 11:25:52,399 DEBUG: 36347 -- Updating config hash for keystone_db_sync, config_volume=heat_api_cfn hash=f97096ae4b768431afe77865ce7ac26a", > "2018-06-21 11:25:52,400 DEBUG: 36347 -- Updating config hash for keystone, config_volume=heat_api_cfn 
hash=f97096ae4b768431afe77865ce7ac26a", > "2018-06-21 11:25:52,400 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/aodh/etc/aodh.md5sum for config_volume /var/lib/config-data/aodh/etc/aodh", > "2018-06-21 11:25:52,400 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/aodh/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/aodh/etc/my.cnf.d", > "2018-06-21 11:25:52,400 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-21 11:25:52,400 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-21 11:25:52,400 DEBUG: 36347 -- Updating config hash for neutron_ovs_bridge, config_volume=heat_api_cfn hash=1458ccfb2d6aca5d6f994c0721e6e0a6", > "2018-06-21 11:25:52,400 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/cinder/etc/cinder.md5sum for config_volume /var/lib/config-data/cinder/etc/cinder", > "2018-06-21 11:25:52,400 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/cinder/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/cinder/etc/my.cnf.d", > "2018-06-21 11:25:52,400 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-06-21 11:25:52,400 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-06-21 11:25:52,400 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-06-21 11:25:52,400 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-06-21 11:25:52,401 DEBUG: 36347 -- Updating config hash for 
glance_api_db_sync, config_volume=heat_api_cfn hash=ce635a7b60e8e89d9f8a6130e0a31be1", > "2018-06-21 11:25:52,401 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/neutron/etc.md5sum for config_volume /var/lib/config-data/neutron/etc", > "2018-06-21 11:25:52,401 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/neutron/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/neutron/etc/my.cnf.d", > "2018-06-21 11:25:52,401 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/neutron/usr/share.md5sum for config_volume /var/lib/config-data/neutron/usr/share", > "2018-06-21 11:25:52,401 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/sahara/etc/sahara.md5sum for config_volume /var/lib/config-data/sahara/etc/sahara", > "2018-06-21 11:25:52,401 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/horizon.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon", > "2018-06-21 11:25:52,401 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/horizon.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon", > "2018-06-21 11:25:52,401 DEBUG: 36347 -- Updating config hash for horizon, config_volume=heat_api_cfn hash=01eaa54e33f1ab9626f72cb20288172d", > "2018-06-21 11:25:52,403 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/clustercheck.md5sum for config_volume /var/lib/config-data/puppet-generated/clustercheck", > "2018-06-21 11:25:52,403 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/clustercheck.md5sum for config_volume /var/lib/config-data/puppet-generated/clustercheck", > "2018-06-21 11:25:52,403 DEBUG: 36347 -- Updating config hash for clustercheck, config_volume=heat_api_cfn hash=75dd38d4613c9ab710ec801025de1f50", > "2018-06-21 11:25:52,403 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-06-21 11:25:52,403 DEBUG: 
36347 -- Got hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-06-21 11:25:52,404 DEBUG: 36347 -- Updating config hash for mysql_restart_bundle, config_volume=heat_api_cfn hash=3d0d90fbc91e503875356f69c121b5d6", > "2018-06-21 11:25:52,404 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/haproxy.md5sum for config_volume /var/lib/config-data/puppet-generated/haproxy", > "2018-06-21 11:25:52,404 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/haproxy.md5sum for config_volume /var/lib/config-data/puppet-generated/haproxy", > "2018-06-21 11:25:52,404 DEBUG: 36347 -- Updating config hash for haproxy_restart_bundle, config_volume=heat_api_cfn hash=819c2c449f0801f24d554f23abe33b2b", > "2018-06-21 11:25:52,404 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-06-21 11:25:52,404 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-06-21 11:25:52,404 DEBUG: 36347 -- Updating config hash for rabbitmq_restart_bundle, config_volume=heat_api_cfn hash=4cfc58610a6ee8abac132483d008d519", > "2018-06-21 11:25:52,404 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/horizon/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon/etc", > "2018-06-21 11:25:52,404 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/redis.md5sum for config_volume /var/lib/config-data/puppet-generated/redis", > "2018-06-21 11:25:52,404 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/redis.md5sum for config_volume /var/lib/config-data/puppet-generated/redis", > "2018-06-21 11:25:52,404 DEBUG: 36347 -- Updating config hash for redis_restart_bundle, config_volume=heat_api_cfn 
hash=0b60eeb5d101188bb85471a93263935c", > "2018-06-21 11:25:52,406 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-21 11:25:52,406 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-21 11:25:52,406 DEBUG: 36347 -- Updating config hash for cinder_volume_restart_bundle, config_volume=heat_api_cfn hash=bb0c656f2b5827c6f76dc6bfff10f6fe", > "2018-06-21 11:25:52,406 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-21 11:25:52,406 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-21 11:25:52,406 DEBUG: 36347 -- Updating config hash for gnocchi_statsd, config_volume=heat_api_cfn hash=d5d5bb348d5143d33909ba017cca92ca", > "2018-06-21 11:25:52,407 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-21 11:25:52,407 DEBUG: 36347 -- Updating config hash for cinder_backup_restart_bundle, config_volume=heat_api_cfn hash=bb0c656f2b5827c6f76dc6bfff10f6fe", > "2018-06-21 11:25:52,407 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-21 11:25:52,407 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-21 11:25:52,407 DEBUG: 36347 -- Updating config hash for gnocchi_metricd, config_volume=heat_api_cfn hash=d5d5bb348d5143d33909ba017cca92ca", > "2018-06-21 11:25:52,407 DEBUG: 36347 -- Looking for hashfile 
/var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-06-21 11:25:52,407 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-06-21 11:25:52,407 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/ceilometer/etc/ceilometer.md5sum for config_volume /var/lib/config-data/ceilometer/etc/ceilometer", > "2018-06-21 11:25:52,407 DEBUG: 36347 -- Updating config hash for gnocchi_api, config_volume=heat_api_cfn hash=d5d5bb348d5143d33909ba017cca92ca", > "2018-06-21 11:25:52,409 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,409 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,409 DEBUG: 36347 -- Updating config hash for swift_container_updater, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-21 11:25:52,410 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-21 11:25:52,410 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-21 11:25:52,410 DEBUG: 36347 -- Updating config hash for aodh_evaluator, config_volume=heat_api_cfn hash=50b2e72486b0ea957bb6c2b4de67a283", > "2018-06-21 11:25:52,410 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-21 11:25:52,410 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-21 11:25:52,410 DEBUG: 36347 -- 
Updating config hash for nova_scheduler, config_volume=heat_api_cfn hash=bdc1e1f03b2049f23cdf4e1606eb96ce", > "2018-06-21 11:25:52,410 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,410 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,410 DEBUG: 36347 -- Updating config hash for swift_object_server, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-21 11:25:52,410 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-21 11:25:52,410 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-21 11:25:52,410 DEBUG: 36347 -- Updating config hash for cinder_api, config_volume=heat_api_cfn hash=bb0c656f2b5827c6f76dc6bfff10f6fe", > "2018-06-21 11:25:52,411 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,411 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,411 DEBUG: 36347 -- Updating config hash for swift_proxy, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-21 11:25:52,411 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-21 11:25:52,411 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-21 11:25:52,411 
DEBUG: 36347 -- Updating config hash for neutron_dhcp, config_volume=heat_api_cfn hash=1458ccfb2d6aca5d6f994c0721e6e0a6", > "2018-06-21 11:25:52,411 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-06-21 11:25:52,411 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-06-21 11:25:52,411 DEBUG: 36347 -- Updating config hash for heat_api, config_volume=heat_api_cfn hash=7b101c7a1a29ec36db15202c8603168c", > "2018-06-21 11:25:52,411 DEBUG: 36347 -- Updating config hash for swift_object_auditor, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-21 11:25:52,412 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-21 11:25:52,412 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-21 11:25:52,412 DEBUG: 36347 -- Updating config hash for neutron_metadata_agent, config_volume=heat_api_cfn hash=1458ccfb2d6aca5d6f994c0721e6e0a6", > "2018-06-21 11:25:52,412 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-21 11:25:52,412 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-21 11:25:52,412 DEBUG: 36347 -- Updating config hash for ceilometer_agent_central, config_volume=heat_api_cfn hash=e84a4388c67bb2db7836ae48b22ed7e8", > "2018-06-21 11:25:52,412 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume 
/var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,412 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,412 DEBUG: 36347 -- Updating config hash for swift_account_replicator, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-21 11:25:52,412 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-21 11:25:52,412 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-21 11:25:52,412 DEBUG: 36347 -- Updating config hash for aodh_notifier, config_volume=heat_api_cfn hash=50b2e72486b0ea957bb6c2b4de67a283", > "2018-06-21 11:25:52,412 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-21 11:25:52,413 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-21 11:25:52,413 DEBUG: 36347 -- Updating config hash for nova_api_cron, config_volume=heat_api_cfn hash=bdc1e1f03b2049f23cdf4e1606eb96ce", > "2018-06-21 11:25:52,413 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-21 11:25:52,413 DEBUG: 36347 -- Updating config hash for nova_consoleauth, config_volume=heat_api_cfn hash=bdc1e1f03b2049f23cdf4e1606eb96ce", > "2018-06-21 11:25:52,413 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-21 11:25:52,413 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for 
config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-21 11:25:52,413 DEBUG: 36347 -- Updating config hash for gnocchi_db_sync, config_volume=heat_api_cfn hash=d5d5bb348d5143d33909ba017cca92ca", > "2018-06-21 11:25:52,413 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,413 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,413 DEBUG: 36347 -- Updating config hash for swift_account_reaper, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-21 11:25:52,413 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-21 11:25:52,413 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-21 11:25:52,414 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-06-21 11:25:52,414 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-06-21 11:25:52,414 DEBUG: 36347 -- Updating config hash for ceilometer_agent_notification, config_volume=heat_api_cfn hash=e84a4388c67bb2db7836ae48b22ed7e8-7d867dcfa86cc89e04429e09639526f0", > "2018-06-21 11:25:52,414 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-21 11:25:52,414 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-21 
11:25:52,414 DEBUG: 36347 -- Updating config hash for nova_vnc_proxy, config_volume=heat_api_cfn hash=bdc1e1f03b2049f23cdf4e1606eb96ce", > "2018-06-21 11:25:52,414 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,414 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,414 DEBUG: 36347 -- Updating config hash for swift_rsync, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-21 11:25:52,414 DEBUG: 36347 -- Updating config hash for nova_api, config_volume=heat_api_cfn hash=bdc1e1f03b2049f23cdf4e1606eb96ce", > "2018-06-21 11:25:52,414 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-21 11:25:52,415 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-21 11:25:52,415 DEBUG: 36347 -- Updating config hash for aodh_api, config_volume=heat_api_cfn hash=50b2e72486b0ea957bb6c2b4de67a283", > "2018-06-21 11:25:52,415 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-21 11:25:52,415 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-21 11:25:52,415 DEBUG: 36347 -- Updating config hash for nova_metadata, config_volume=heat_api_cfn hash=bdc1e1f03b2049f23cdf4e1606eb96ce", > "2018-06-21 11:25:52,415 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat.md5sum for config_volume /var/lib/config-data/puppet-generated/heat", > "2018-06-21 11:25:52,415 DEBUG: 36347 -- Got hashfile 
/var/lib/config-data/puppet-generated/heat.md5sum for config_volume /var/lib/config-data/puppet-generated/heat", > "2018-06-21 11:25:52,415 DEBUG: 36347 -- Updating config hash for heat_engine, config_volume=heat_api_cfn hash=d655c8a57cfd8061741545d45e0dbbed", > "2018-06-21 11:25:52,415 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,415 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,415 DEBUG: 36347 -- Updating config hash for swift_container_server, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-21 11:25:52,416 DEBUG: 36347 -- Updating config hash for swift_object_replicator, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-21 11:25:52,416 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-21 11:25:52,416 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-21 11:25:52,416 DEBUG: 36347 -- Updating config hash for neutron_l3_agent, config_volume=heat_api_cfn hash=1458ccfb2d6aca5d6f994c0721e6e0a6", > "2018-06-21 11:25:52,416 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-21 11:25:52,416 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-21 11:25:52,416 DEBUG: 36347 -- Updating config hash for cinder_scheduler, config_volume=heat_api_cfn hash=bb0c656f2b5827c6f76dc6bfff10f6fe", > "2018-06-21 11:25:52,416 DEBUG: 36347 -- Looking 
for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-21 11:25:52,416 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-21 11:25:52,416 DEBUG: 36347 -- Updating config hash for nova_conductor, config_volume=heat_api_cfn hash=bdc1e1f03b2049f23cdf4e1606eb96ce", > "2018-06-21 11:25:52,416 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api_cfn.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api_cfn", > "2018-06-21 11:25:52,416 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api_cfn.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api_cfn", > "2018-06-21 11:25:52,417 DEBUG: 36347 -- Updating config hash for heat_api_cfn, config_volume=heat_api_cfn hash=b0c8716c1fd53673825bc9d9818402bd", > "2018-06-21 11:25:52,417 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara", > "2018-06-21 11:25:52,417 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara", > "2018-06-21 11:25:52,417 DEBUG: 36347 -- Updating config hash for sahara_api, config_volume=heat_api_cfn hash=b6f5b6cd3b26a22dbc1456b85ee3cf24", > "2018-06-21 11:25:52,417 DEBUG: 36347 -- Updating config hash for sahara_engine, config_volume=heat_api_cfn hash=b6f5b6cd3b26a22dbc1456b85ee3cf24", > "2018-06-21 11:25:52,417 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-21 11:25:52,417 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-21 
11:25:52,417 DEBUG: 36347 -- Updating config hash for neutron_ovs_agent, config_volume=heat_api_cfn hash=1458ccfb2d6aca5d6f994c0721e6e0a6", > "2018-06-21 11:25:52,417 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-21 11:25:52,417 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-21 11:25:52,417 DEBUG: 36347 -- Updating config hash for cinder_api_cron, config_volume=heat_api_cfn hash=bb0c656f2b5827c6f76dc6bfff10f6fe", > "2018-06-21 11:25:52,418 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,418 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,418 DEBUG: 36347 -- Updating config hash for swift_account_auditor, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-21 11:25:52,418 DEBUG: 36347 -- Updating config hash for swift_container_replicator, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-21 11:25:52,418 DEBUG: 36347 -- Updating config hash for swift_object_updater, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-21 11:25:52,418 DEBUG: 36347 -- Updating config hash for swift_object_expirer, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-21 11:25:52,418 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-06-21 11:25:52,418 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > 
"2018-06-21 11:25:52,419 DEBUG: 36347 -- Updating config hash for heat_api_cron, config_volume=heat_api_cfn hash=7b101c7a1a29ec36db15202c8603168c", > "2018-06-21 11:25:52,419 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,419 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,419 DEBUG: 36347 -- Updating config hash for swift_container_auditor, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-21 11:25:52,419 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-06-21 11:25:52,419 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-06-21 11:25:52,419 DEBUG: 36347 -- Updating config hash for panko_api, config_volume=heat_api_cfn hash=7d867dcfa86cc89e04429e09639526f0", > "2018-06-21 11:25:52,419 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-21 11:25:52,419 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-21 11:25:52,419 DEBUG: 36347 -- Updating config hash for aodh_listener, config_volume=heat_api_cfn hash=50b2e72486b0ea957bb6c2b4de67a283", > "2018-06-21 11:25:52,419 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-21 11:25:52,420 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume 
/var/lib/config-data/puppet-generated/neutron", > "2018-06-21 11:25:52,420 DEBUG: 36347 -- Updating config hash for neutron_api, config_volume=heat_api_cfn hash=1458ccfb2d6aca5d6f994c0721e6e0a6", > "2018-06-21 11:25:52,420 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,420 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-21 11:25:52,420 DEBUG: 36347 -- Updating config hash for swift_account_server, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-21 11:25:52,420 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-06-21 11:25:52,420 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-06-21 11:25:52,420 DEBUG: 36347 -- Updating config hash for glance_api, config_volume=heat_api_cfn hash=ce635a7b60e8e89d9f8a6130e0a31be1", > "2018-06-21 11:25:52,420 DEBUG: 36347 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-21 11:25:52,420 DEBUG: 36347 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-21 11:25:52,420 DEBUG: 36347 -- Updating config hash for logrotate_crond, config_volume=heat_api_cfn hash=51ba9fb4252002c3afc222d4371b55c8" > ] >} >2018-06-21 07:25:53,411 p=23396 u=mistral | TASK [Start containers for step 1] ********************************************* >2018-06-21 07:25:54,112 p=23396 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was 
specified for this result", "changed": false} >2018-06-21 07:25:54,161 p=23396 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:26:22,344 p=23396 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:26:22,371 p=23396 u=mistral | TASK [Debug output for task which failed: Start containers for step 1] ********* >2018-06-21 07:26:22,499 p=23396 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-backup ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-backup", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "5e7b63a88a76: Already exists", > "89c035649aaf: Pulling fs layer", > "89c035649aaf: Verifying Checksum", > "89c035649aaf: Download complete", > "89c035649aaf: Pull complete", > "Digest: sha256:bbd94b3a8477e286264ef2b5660a8c60d872d945e37c6023ae19c6dd09ea156f", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-volume ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-volume", > "606ec38d3d26: Pulling fs layer", > "606ec38d3d26: Verifying Checksum", > "606ec38d3d26: Download complete", > "606ec38d3d26: Pull complete", > "Digest: sha256:d4d518ef6aad7c077ff97a0ad1de70ef4074ace3ddde85fdfb70e12e63891ea5", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", > "stdout: ", > "stdout: 2d6a387a4e3e33204887166dbcdf607c6d446dfff2cf39662487b7ffe0059064", > "stdout: Installing MariaDB/MySQL system tables in '/var/lib/mysql' ...", > "OK", > "Filling help tables...", > "Creating OpenGIS required SP-s...", > "To start mysqld at boot time you have to copy", > "support-files/mysql.server to the right place for your system", > "PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !", > "To do so, start the server, then issue the following commands:", > "'/usr/bin/mysqladmin' -u root password 'new-password'", > "'/usr/bin/mysqladmin' -u root -h controller-0 password 'new-password'", > "Alternatively you can run:", > "'/usr/bin/mysql_secure_installation'", > "which will also give you the option of removing the test", > "databases and anonymous user created by default. 
This is", > "strongly recommended for production servers.", > "See the MariaDB Knowledgebase at http://mariadb.com/kb or the", > "MySQL manual for more instructions.", > "You can start the MariaDB daemon with:", > "cd '/usr' ; /usr/bin/mysqld_safe --datadir='/var/lib/mysql'", > "You can test the MariaDB daemon with mysql-test-run.pl", > "cd '/usr/mysql-test' ; perl mysql-test-run.pl", > "Please report any problems at http://mariadb.org/jira", > "The latest information about MariaDB is available at http://mariadb.org/.", > "You can find additional information about the MySQL part at:", > "http://dev.mysql.com", > "Consider joining MariaDB's strong and vibrant community:", > "https://mariadb.org/get-involved/", > "180621 11:26:13 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.", > "180621 11:26:13 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql", > "spawn mysql_secure_installation", > "NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB", > " SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!", > "In order to log into MariaDB to secure it, we'll need the current", > "password for the root user. If you've just installed MariaDB, and", > "you haven't set the root password yet, the password will be blank,", > "so you should just press enter here.", > "Enter current password for root (enter for none): ", > "OK, successfully used password, moving on...", > "Setting the root password ensures that nobody can log into the MariaDB", > "root user without the proper authorisation.", > "Set root password? [Y/n] y", > "New password: ", > "Re-enter new password: ", > "Password updated successfully!", > "Reloading privilege tables..", > " ... Success!", > "By default, a MariaDB installation has an anonymous user, allowing anyone", > "to log into MariaDB without having to have a user account created for", > "them. This is intended only for testing, and to make the installation", > "go a bit smoother. 
You should remove them before moving into a", > "production environment.", > "Remove anonymous users? [Y/n] y", > "Normally, root should only be allowed to connect from 'localhost'. This", > "ensures that someone cannot guess at the root password from the network.", > "Disallow root login remotely? [Y/n] n", > " ... skipping.", > "By default, MariaDB comes with a database named 'test' that anyone can", > "access. This is also intended only for testing, and should be removed", > "before moving into a production environment.", > "Remove test database and access to it? [Y/n] y", > " - Dropping test database...", > " - Removing privileges on test database...", > "Reloading the privilege tables will ensure that all changes made so far", > "will take effect immediately.", > "Reload privilege tables now? [Y/n] y", > "Cleaning up...", > "All done! If you've completed all of the above steps, your MariaDB", > "installation should now be secure.", > "Thanks for using MariaDB!", > "180621 11:26:16 mysqld_safe mysqld from pid file /var/lib/mysql/mariadb.pid ended", > "180621 11:26:17 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.", > "180621 11:26:17 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql", > "mysqld is alive", > "180621 11:26:20 mysqld_safe mysqld from pid file /var/lib/mysql/mariadb.pid ended", > "stderr: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", > "INFO:__main__:Validating config file", > "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", > "INFO:__main__:Copying service configuration files", > "INFO:__main__:Copying /dev/null to /etc/libqb/force-filesystem-sockets", > "INFO:__main__:Setting permission for /etc/libqb/force-filesystem-sockets", > "INFO:__main__:Deleting /etc/my.cnf.d/galera.cnf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/my.cnf.d/galera.cnf to /etc/my.cnf.d/galera.cnf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/sysconfig/clustercheck to 
/etc/sysconfig/clustercheck", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/root/.my.cnf to /root/.my.cnf", > "INFO:__main__:Writing out command to execute", > "2018-06-21 11:26:00 140210408048832 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-06-21 11:26:00 140210408048832 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 42 ...", > "2018-06-21 11:26:05 140454829213888 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-06-21 11:26:05 140454829213888 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 71 ...", > "2018-06-21 11:26:09 140678402205888 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-06-21 11:26:09 140678402205888 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 101 ...", > "/usr/bin/mysqld_safe: line 755: ulimit: -1: invalid option", > "ulimit: usage: ulimit [-SHacdefilmnpqrstuvx] [limit]", > "stdout: fc04c0fdb8cb1f612d4e336e5d856e640831069055ae042267ece93d365cc103" > ] >} >2018-06-21 07:26:22,522 p=23396 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-06-21 07:26:22,554 p=23396 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-06-21 07:26:22,577 p=23396 u=mistral | TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks1.json exists] ******** >2018-06-21 07:26:23,043 p=23396 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:26:23,080 p=23396 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:26:23,081 p=23396 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-21 07:26:23,105 
p=23396 u=mistral | TASK [Run docker-puppet tasks (bootstrap tasks) for step 1] ******************** >2018-06-21 07:26:23,138 p=23396 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:26:23,166 p=23396 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:26:23,179 p=23396 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-21 07:26:23,202 p=23396 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (bootstrap tasks) for step 1] *** >2018-06-21 07:26:23,230 p=23396 u=mistral | skipping: [controller-0] => {"skip_reason": "Conditional result was False"} >2018-06-21 07:26:23,255 p=23396 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-06-21 07:26:23,268 p=23396 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} >2018-06-21 07:26:23,273 p=23396 u=mistral | PLAY [External deployment step 2] ********************************************** >2018-06-21 07:26:23,292 p=23396 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-06-21 07:26:23,310 p=23396 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:26:23,327 p=23396 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-06-21 07:26:23,407 p=23396 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/group_vars", "skip_reason": 
"Conditional result was False"} >2018-06-21 07:26:23,407 p=23396 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/host_vars", "skip_reason": "Conditional result was False"} >2018-06-21 07:26:23,408 p=23396 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir) => {"changed": false, "item": "/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"} >2018-06-21 07:26:23,462 p=23396 u=mistral | TASK [generate inventory] ****************************************************** >2018-06-21 07:26:23,480 p=23396 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:26:23,496 p=23396 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-06-21 07:26:23,515 p=23396 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:26:23,531 p=23396 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-06-21 07:26:23,549 p=23396 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:26:23,565 p=23396 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* >2018-06-21 07:26:23,582 p=23396 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:26:23,599 p=23396 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-06-21 07:26:23,615 p=23396 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:26:23,632 
p=23396 u=mistral | TASK [generate collect nodes uuid playbook] ************************************ >2018-06-21 07:26:23,648 p=23396 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-21 07:26:23,664 p=23396 u=mistral | TASK [set ceph-ansible verbosity] ********************************************** >2018-06-21 07:26:23,694 p=23396 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_playbook_verbosity": 2}, "changed": false} >2018-06-21 07:26:23,711 p=23396 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-06-21 07:26:23,744 p=23396 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_command": "ANSIBLE_ACTION_PLUGINS=/usr/share/ceph-ansible/plugins/actions/ ANSIBLE_ROLES_PATH=/usr/share/ceph-ansible/roles/ ANSIBLE_LOG_PATH=\"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/ceph_ansible_command.log\" ANSIBLE_LIBRARY=/usr/share/ceph-ansible/library/ ANSIBLE_RETRY_FILES_ENABLED=False ANSIBLE_SSH_RETRIES=3 ANSIBLE_HOST_KEY_CHECKING=False DEFAULT_FORKS=25 ANSIBLE_CONFIG=/usr/share/ceph-ansible/ansible.cfg ansible-playbook --private-key /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ssh_private_key -vv --skip-tags package-install,with_pkg -i /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/inventory.yml --extra-vars @/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/extra_vars.yml"}, "changed": false} >2018-06-21 07:26:23,764 p=23396 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-06-21 07:27:11,548 p=23396 u=mistral | failed: [undercloud] (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": true, "cmd": "ANSIBLE_ACTION_PLUGINS=/usr/share/ceph-ansible/plugins/actions/ ANSIBLE_ROLES_PATH=/usr/share/ceph-ansible/roles/ 
ANSIBLE_LOG_PATH=\"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/ceph_ansible_command.log\" ANSIBLE_LIBRARY=/usr/share/ceph-ansible/library/ ANSIBLE_RETRY_FILES_ENABLED=False ANSIBLE_SSH_RETRIES=3 ANSIBLE_HOST_KEY_CHECKING=False DEFAULT_FORKS=25 ANSIBLE_CONFIG=/usr/share/ceph-ansible/ansible.cfg ansible-playbook --private-key /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ssh_private_key -vv --skip-tags package-install,with_pkg -i /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/inventory.yml --extra-vars @/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/extra_vars.yml /usr/share/ceph-ansible/site-docker.yml.sample", "delta": "0:00:47.561102", "end": "2018-06-21 07:27:11.491246", "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "msg": "non-zero return code", "rc": 250, "start": "2018-06-21 07:26:23.930144", "stderr": "[DEPRECATION WARNING]: The use of 'static' has been deprecated. Use \n'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion. \nThis feature will be removed in a future release. Deprecation warnings can be \ndisabled by setting deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: docker is kept for backwards compatibility but usage is \ndiscouraged. The module documentation details page may explain more about this \nrationale.. This feature will be removed in a future release. 
Deprecation \nwarnings can be disabled by setting deprecation_warnings=False in ansible.cfg.\n [WARNING]: Could not match supplied host pattern, ignoring: agents\n [WARNING]: Could not match supplied host pattern, ignoring: mdss\n [WARNING]: Could not match supplied host pattern, ignoring: rgws\n [WARNING]: Could not match supplied host pattern, ignoring: nfss\n [WARNING]: Could not match supplied host pattern, ignoring: restapis\n [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors\n [WARNING]: Could not match supplied host pattern, ignoring: iscsigws\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. 
Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\nERROR! Unexpected Exception, this is probably a bug: cannot import name to_bytes", "stderr_lines": ["[DEPRECATION WARNING]: The use of 'static' has been deprecated. Use ", "'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion. ", "This feature will be removed in a future release. Deprecation warnings can be ", "disabled by setting deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: docker is kept for backwards compatibility but usage is ", "discouraged. The module documentation details page may explain more about this ", "rationale.. This feature will be removed in a future release. Deprecation ", "warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.", " [WARNING]: Could not match supplied host pattern, ignoring: agents", " [WARNING]: Could not match supplied host pattern, ignoring: mdss", " [WARNING]: Could not match supplied host pattern, ignoring: rgws", " [WARNING]: Could not match supplied host pattern, ignoring: nfss", " [WARNING]: Could not match supplied host pattern, ignoring: restapis", " [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors", " [WARNING]: Could not match supplied host pattern, ignoring: iscsigws", "[DEPRECATION WARNING]: Using tests as filters is deprecated. 
Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "ERROR! 
Unexpected Exception, this is probably a bug: cannot import name to_bytes"], "stdout": "ansible-playbook 2.5.4\n config file = /usr/share/ceph-ansible/ansible.cfg\n configured module search path = [u'/usr/share/ceph-ansible/library']\n ansible python module location = /usr/lib/python2.7/site-packages/ansible\n executable location = /usr/bin/ansible-playbook\n python version = 2.7.5 (default, Feb 20 2018, 09:19:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]\nUsing /usr/share/ceph-ansible/ansible.cfg as config file\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/openstack_config.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/create_mds_filesystems.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/selinux.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/non_containerized.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/containerized.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rgw/tasks/common.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/common.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_non_container.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_container.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/common.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/pre_requisite.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/start_rbd_mirror.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/configure_mirroring.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/main.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/selinux.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/start_docker_rbd_mirror.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/pre_requisite.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/start_restapi.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/main.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/copy_configs.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/start_docker_restapi.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml

PLAYBOOK: site-docker.yml.sample ***********************************************
12 plays in /usr/share/ceph-ansible/site-docker.yml.sample

PLAY [mons,agents,osds,mdss,rgws,nfss,restapis,rbdmirrors,clients,iscsigws,mgrs] ***

TASK [gather facts] ************************************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:24
Thursday 21 June 2018 07:26:26 -0400 (0:00:00.138) 0:00:00.138 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
ok: [compute-0]

TASK [gather and delegate facts] ***********************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:29
Thursday 21 June 2018 07:26:29 -0400 (0:00:03.252) 0:00:03.390 ********* 
ok: [compute-0 -> 192.168.24.8] => (item=controller-0)
ok: [ceph-0 -> 192.168.24.8] => (item=controller-0)
ok: [controller-0 -> 192.168.24.8] => (item=controller-0)
ok: [controller-0 -> 192.168.24.10] => (item=ceph-0)
ok: [compute-0 -> 192.168.24.10] => (item=ceph-0)
ok: [ceph-0 -> 192.168.24.10] => (item=ceph-0)

TASK [check if it is atomic host] **********************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:37
Thursday 21 June 2018 07:26:36 -0400 (0:00:07.048) 0:00:10.439 ********* 
ok: [compute-0] => {"changed": false, "stat": {"exists": false}}
ok: [controller-0] => {"changed": false, "stat": {"exists": false}}
ok: [ceph-0] => {"changed": false, "stat": {"exists": false}}

TASK [set_fact is_atomic] ******************************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:44
Thursday 21 June 2018 07:26:37 -0400 (0:00:00.738) 0:00:11.178 ********* 
ok: [controller-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}
ok: [ceph-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}
ok: [compute-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}
META: ran handlers
META: ran handlers

TASK [pull rhceph image] *******************************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:65
Thursday 21 June 2018 07:26:37 -0400 (0:00:00.156) 0:00:11.335 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
META: ran handlers

PLAY [mons] ********************************************************************
META: ran handlers

TASK [set ceph monitor install 'In Progress'] **********************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:75
Thursday 21 June 2018 07:26:37 -0400 (0:00:00.112) 0:00:11.447 ********* 
ok: [controller-0] => {"ansible_stats": {"aggregate": true, "data": {"installer_phase_ceph_mon": {"start": "20180621072637Z", "status": "In Progress"}}, "per_host": false}, "changed": false}
META: ran handlers
META: ran handlers

PLAY [mons] ********************************************************************
META: ran handlers

TASK [ceph-defaults : check for a mon container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2
Thursday 21 June 2018 07:26:37 -0400 (0:00:00.162) 0:00:11.610 ********* 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mon-controller-0"], "delta": "0:00:00.028599", "end": "2018-06-21 11:26:38.891884", "failed_when_result": false, "rc": 0, "start": "2018-06-21 11:26:38.863285", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-defaults : check for an osd container] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11
Thursday 21 June 2018 07:26:38 -0400 (0:00:00.636) 0:00:12.246 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a mds container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20
Thursday 21 June 2018 07:26:38 -0400 (0:00:00.048) 0:00:12.294 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a rgw container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29
Thursday 21 June 2018 07:26:38 -0400 (0:00:00.046) 0:00:12.341 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a mgr container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38
Thursday 21 June 2018 07:26:38 -0400 (0:00:00.043) 0:00:12.384 ********* 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mgr-controller-0"], "delta": "0:00:00.030365", "end": "2018-06-21 11:26:39.572421", "failed_when_result": false, "rc": 0, "start": "2018-06-21 11:26:39.542056", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-defaults : check for a rbd mirror container] ************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47
Thursday 21 June 2018 07:26:39 -0400 (0:00:00.544) 0:00:12.928 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a nfs container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56
Thursday 21 June 2018 07:26:39 -0400 (0:00:00.045) 0:00:12.974 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mon socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2
Thursday 21 June 2018 07:26:39 -0400 (0:00:00.045) 0:00:13.019 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11
Thursday 21 June 2018 07:26:39 -0400 (0:00:00.049) 0:00:13.069 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21
Thursday 21 June 2018 07:26:39 -0400 (0:00:00.045) 0:00:13.115 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph osd socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30
Thursday 21 June 2018 07:26:39 -0400 (0:00:00.044) 0:00:13.159 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40
Thursday 21 June 2018 07:26:39 -0400 (0:00:00.043) 0:00:13.203 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50
Thursday 21 June 2018 07:26:39 -0400 (0:00:00.043) 0:00:13.246 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mds socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59
Thursday 21 June 2018 07:26:39 -0400 (0:00:00.047) 0:00:13.294 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69
Thursday 21 June 2018 07:26:39 -0400 (0:00:00.046) 0:00:13.340 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79
Thursday 21 June 2018 07:26:39 -0400 (0:00:00.046) 0:00:13.387 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph rgw socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88
Thursday 21 June 2018 07:26:39 -0400 (0:00:00.053) 0:00:13.441 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98
Thursday 21 June 2018 07:26:39 -0400 (0:00:00.047) 0:00:13.488 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108
Thursday 21 June 2018 07:26:39 -0400 (0:00:00.045) 0:00:13.534 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mgr socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117
Thursday 21 June 2018 07:26:39 -0400 (0:00:00.045) 0:00:13.580 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127
Thursday 21 June 2018 07:26:39 -0400 (0:00:00.044) 0:00:13.624 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137
Thursday 21 June 2018 07:26:39 -0400 (0:00:00.047) 0:00:13.671 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146
Thursday 21 June 2018 07:26:39 -0400 (0:00:00.045) 0:00:13.717 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156
Thursday 21 June 2018 07:26:39 -0400 (0:00:00.042) 0:00:13.760 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166
Thursday 21 June 2018 07:26:40 -0400 (0:00:00.046) 0:00:13.806 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175
Thursday 21 June 2018 07:26:40 -0400 (0:00:00.047) 0:00:13.854 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184
Thursday 21 June 2018 07:26:40 -0400 (0:00:00.062) 0:00:13.917 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194
Thursday 21 June 2018 07:26:40 -0400 (0:00:00.048) 0:00:13.966 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if it is atomic host] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2
Thursday 21 June 2018 07:26:40 -0400 (0:00:00.047) 0:00:14.013 ********* 
ok: [controller-0] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact is_atomic] **************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7
Thursday 21 June 2018 07:26:40 -0400 (0:00:00.501) 0:00:14.514 ********* 
ok: [controller-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11
Thursday 21 June 2018 07:26:40 -0400 (0:00:00.073) 0:00:14.588 ********* 
ok: [controller-0] => {"ansible_facts": {"monitor_name": "controller-0"}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17
Thursday 21 June 2018 07:26:40 -0400 (0:00:00.077) 0:00:14.665 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23
Thursday 21 June 2018 07:26:40 -0400 (0:00:00.067) 0:00:14.732 ********* 
ok: [controller-0 -> 192.168.24.8] => {"ansible_facts": {"docker_exec_cmd": "docker exec ceph-mon-controller-0"}, "changed": false}

TASK [ceph-defaults : is ceph running already?] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34
Thursday 21 June 2018 07:26:41 -0400 (0:00:00.130) 0:00:14.863 ********* 
ok: [controller-0 -> 192.168.24.8] => {"changed": false, "cmd": ["timeout", "5", "docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "fsid"], "delta": "0:00:00.032773", "end": "2018-06-21 11:26:42.150915", "failed_when_result": false, "msg": "non-zero return code", "rc": 1, "start": "2018-06-21 11:26:42.118142", "stderr": "Error response from daemon: No such container: ceph-mon-controller-0", "stderr_lines": ["Error response from daemon: No such container: ceph-mon-controller-0"], "stdout": "", "stdout_lines": []}

TASK [ceph-defaults : check if /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir directory exists] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47
Thursday 21 June 2018 07:26:41 -0400 (0:00:00.646) 0:00:15.509 ********* 
ok: [controller-0 -> localhost] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57
Thursday 21 June 2018 07:26:41 -0400 (0:00:00.186) 0:00:15.696 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : create a local fetch directory if it does not exist] *****
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64
Thursday 21 June 2018 07:26:41 -0400 (0:00:00.051) 0:00:15.747 ********* 
ok: [controller-0 -> localhost] => {"changed": false, "gid": 985, "group": "mistral", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 988}

TASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74
Thursday 21 June 2018 07:26:42 -0400 (0:00:00.411) 0:00:16.159 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81
Thursday 21 June 2018 07:26:42 -0400 (0:00:00.167) 0:00:16.327 ********* 
ok: [controller-0] => {"ansible_facts": {"ceph_release": "dummy"}, "changed": false}

TASK [ceph-defaults : generate cluster fsid] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85
Thursday 21 June 2018 07:26:42 -0400 (0:00:00.070) 0:00:16.398 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96
Thursday 21 June 2018 07:26:42 -0400 (0:00:00.046) 0:00:16.444 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : read cluster fsid if it already exists] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105
Thursday 21 June 2018 07:26:42 -0400 (0:00:00.049) 0:00:16.494 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact fsid] *******************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117
Thursday 21 June 2018 07:26:42 -0400 (0:00:00.041) 0:00:16.535 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123
Thursday 21 June 2018 07:26:42 -0400 (0:00:00.043) 0:00:16.579 ********* 
ok: [controller-0] => {"ansible_facts": {"mds_name": "controller-0"}, "changed": false}

TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129
Thursday 21 June 2018 07:26:42 -0400 (0:00:00.068) 0:00:16.648 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135
Thursday 21 June 2018 07:26:42 -0400 (0:00:00.040) 0:00:16.688 ********* 
ok: [controller-0] => {"ansible_facts": {"rbd_client_directory_owner": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142
Thursday 21 June 2018 07:26:42 -0400 (0:00:00.070) 0:00:16.758 ********* 
ok: [controller-0] => {"ansible_facts": {"rbd_client_directory_group": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149
Thursday 21 June 2018 07:26:43 -0400 (0:00:00.080) 0:00:16.839 ********* 
ok: [controller-0] => {"ansible_facts": {"rbd_client_directory_mode": "0770"}, "changed": false}

TASK [ceph-defaults : resolve device link(s)] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156
Thursday 21 June 2018 07:26:43 -0400 (0:00:00.080) 0:00:16.920 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166
Thursday 21 June 2018 07:26:43 -0400 (0:00:00.048) 0:00:16.968 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact build final devices list] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175
Thursday 21 June 2018 07:26:43 -0400 (0:00:00.046) 0:00:17.015 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for Debian based system] ***************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183
Thursday 21 June 2018 07:26:43 -0400 (0:00:00.048) 0:00:17.063 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for Red Hat based system] **************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190
Thursday 21 June 2018 07:26:43 -0400 (0:00:00.048) 0:00:17.112 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for Red Hat] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197
Thursday 21 June 2018 07:26:43 -0400 (0:00:00.049) 0:00:17.161 ********* 
ok: [controller-0] => {"ansible_facts": {"ceph_uid": 167}, "changed": false}

TASK [ceph-defaults : check if selinux is enabled] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204
Thursday 21 June 2018 07:26:43 -0400 (0:00:00.072) 0:00:17.233 ********* 
ok: [controller-0] => {"changed": false, "cmd": ["getenforce"], "delta": "0:00:00.003198", "end": "2018-06-21 11:26:44.393654", "rc": 0, "start": "2018-06-21 11:26:44.390456", "stderr": "", "stderr_lines": [], "stdout": "Enforcing", "stdout_lines": ["Enforcing"]}

TASK [ceph-docker-common : fail if systemd is not present] *********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2
Thursday 21 June 2018 07:26:43 -0400 (0:00:00.511) 0:00:17.745 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2
Thursday 21 June 2018 07:26:44 -0400 (0:00:00.049) 0:00:17.794 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11
Thursday 21 June 2018 07:26:44 -0400 (0:00:00.055) 0:00:17.850 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : remove ceph udev rules] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2
Thursday 21 June 2018 07:26:44 -0400 (0:00:00.044) 0:00:17.895 ********* 
ok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {"changed": false, "item": "/usr/lib/udev/rules.d/95-ceph-osd.rules", "path": "/usr/lib/udev/rules.d/95-ceph-osd.rules", "state": "absent"}
ok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {"changed": false, "item": "/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules", "path": "/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules", "state": "absent"}

TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14
Thursday 21 June 2018 07:26:45 -0400 (0:00:00.941) 0:00:18.836 ********* 
ok: [controller-0] => {"ansible_facts": {"monitor_name": "controller-0"}, "changed": false}

TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20
Thursday 21 June 2018 07:26:45 -0400 (0:00:00.078) 0:00:18.915 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : get docker version] *********************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26
Thursday 21 June 2018 07:26:45 -0400 (0:00:00.042) 0:00:18.958 ********* 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "--version"], "delta": "0:00:00.026935", "end": "2018-06-21 11:26:46.126835", "rc": 0, "start": "2018-06-21 11:26:46.099900", "stderr": "", "stderr_lines": [], "stdout": "Docker version 1.13.1, build 94f4240/1.13.1", "stdout_lines": ["Docker version 1.13.1, build 94f4240/1.13.1"]}

TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32
Thursday 21 June 2018 07:26:45 -0400 (0:00:00.520) 0:00:19.479 ********* 
ok: [controller-0] => {"ansible_facts": {"ceph_docker_version": "1.13.1,"}, "changed": false}

TASK [ceph-docker-common : check if a cluster is already running] **************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42
Thursday 21 June 2018 07:26:45 -0400 (0:00:00.074) 0:00:19.553 ********* 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mon-controller-0"], "delta": "0:00:00.026949", "end": "2018-06-21 11:26:46.730191", "failed_when_result": false, "rc": 0, "start": "2018-06-21 11:26:46.703242", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-docker-common : set_fact ceph_config_keys] **************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2
Thursday 21 June 2018 07:26:46 -0400 (0:00:00.529) 0:00:20.083 ********* 
ok: [controller-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/monmap-ceph", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring", "/var/lib/ceph/bootstrap-rbd/ceph.keyring"]}, "changed": false}

TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13
Thursday 21 June 2018 07:26:46 -0400 (0:00:00.088) 0:00:20.171 ********* 
ok: [controller-0] => (item=controller-0) => {"ansible_facts": {"tmp_ceph_mgr_keys": "/etc/ceph/ceph.mgr.controller-0.keyring"}, "changed": false, "item": "controller-0"}

TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20
Thursday 21 June 2018 07:26:46 -0400 (0:00:00.133) 0:00:20.305 ********* 
ok: [controller-0] => {"ansible_facts": {"ceph_mgr_keys": ["/etc/ceph/ceph.mgr.controller-0.keyring"]}, "changed": false}

TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25
Thursday 21 June 2018 07:26:46 -0400 (0:00:00.088) 0:00:20.393 ********* 
ok: [controller-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/monmap-ceph", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring", "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "/etc/ceph/ceph.mgr.controller-0.keyring"]}, "changed": false}

TASK [ceph-docker-common : stat for ceph config and keys] **********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30
Thursday 21 June 2018 07:26:46 -0400 (0:00:00.093) 0:00:20.486 ********* 
ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => 
(item=/etc/ceph/monmap-ceph) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}\n\nTASK [ceph-docker-common : fail if we find existing cluster files] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5\nThursday 21 June 2018 07:26:47 -0400 (0:00:01.261) 0:00:21.748 ********* \nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, 
'_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 
'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, 
\"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": 
\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': 
{'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': 
u'/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": 
[\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on atomic] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2\nThursday 21 June 2018 07:26:48 -0400 (0:00:00.258) 0:00:22.006 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6\nThursday 21 June 2018 07:26:48 -0400 (0:00:00.040) 0:00:22.046 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on redhat or suse] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2\nThursday 21 June 2018 07:26:48 -0400 (0:00:00.039) 0:00:22.086 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on redhat or suse] **********************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13\nThursday 21 June 2018 07:26:48 -0400 (0:00:00.045) 0:00:22.131 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7\nThursday 21 June 2018 07:26:48 -0400 (0:00:00.043) 0:00:22.174 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on debian] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2\nThursday 21 June 2018 07:26:48 -0400 (0:00:00.043) 0:00:22.217 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on debian] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11\nThursday 21 June 2018 07:26:48 -0400 (0:00:00.048) 0:00:22.265 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7\nThursday 21 June 2018 07:26:48 -0400 (0:00:00.045) 0:00:22.311 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mon container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3\nThursday 21 June 2018 07:26:48 -0400 (0:00:00.041) 0:00:22.353 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional 
result was False\"}\n\nTASK [ceph-docker-common : inspect ceph osd container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12\nThursday 21 June 2018 07:26:48 -0400 (0:00:00.048) 0:00:22.401 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mds container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21\nThursday 21 June 2018 07:26:48 -0400 (0:00:00.042) 0:00:22.444 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rgw container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30\nThursday 21 June 2018 07:26:48 -0400 (0:00:00.042) 0:00:22.486 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mgr container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39\nThursday 21 June 2018 07:26:48 -0400 (0:00:00.049) 0:00:22.535 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rbd mirror container] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48\nThursday 21 June 2018 07:26:48 -0400 (0:00:00.048) 0:00:22.583 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph nfs container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57\nThursday 21 June 2018 07:26:48 -0400 (0:00:00.041) 0:00:22.625 ********* 
\nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67\nThursday 21 June 2018 07:26:48 -0400 (0:00:00.041) 0:00:22.666 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76\nThursday 21 June 2018 07:26:48 -0400 (0:00:00.046) 0:00:22.712 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85\nThursday 21 June 2018 07:26:48 -0400 (0:00:00.043) 0:00:22.756 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94\nThursday 21 June 2018 07:26:49 -0400 (0:00:00.202) 0:00:22.958 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103\nThursday 21 June 2018 07:26:49 -0400 (0:00:00.044) 0:00:23.003 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112\nThursday 21 June 2018 07:26:49 -0400 (0:00:00.047) 0:00:23.051 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121\nThursday 21 June 2018 07:26:49 -0400 (0:00:00.042) 0:00:23.094 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130\nThursday 21 June 2018 07:26:49 -0400 (0:00:00.043) 0:00:23.137 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137\nThursday 21 June 2018 07:26:49 -0400 (0:00:00.049) 0:00:23.187 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144\nThursday 21 June 2018 07:26:49 -0400 (0:00:00.044) 0:00:23.232 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151\nThursday 21 June 2018 07:26:49 -0400 (0:00:00.042) 0:00:23.274 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK 
[ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158\nThursday 21 June 2018 07:26:49 -0400 (0:00:00.041) 0:00:23.316 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165\nThursday 21 June 2018 07:26:49 -0400 (0:00:00.045) 0:00:23.361 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172\nThursday 21 June 2018 07:26:49 -0400 (0:00:00.043) 0:00:23.405 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179\nThursday 21 June 2018 07:26:49 -0400 (0:00:00.044) 0:00:23.450 ********* \nok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:16.073521\", \"end\": \"2018-06-21 11:27:06.680953\", \"rc\": 0, \"start\": \"2018-06-21 11:26:50.607432\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\\n3-6: Pulling from 192.168.24.1:8787/rhceph\\n9a32f102e677: Pulling fs layer\\nb8aa42cec17a: Pulling fs layer\\nf00cbf28d025: Pulling fs layer\\nb8aa42cec17a: Verifying Checksum\\nb8aa42cec17a: Download complete\\n9a32f102e677: Verifying Checksum\\n9a32f102e677: Download complete\\nf00cbf28d025: Verifying Checksum\\nf00cbf28d025: Download complete\\n9a32f102e677: Pull complete\\nb8aa42cec17a: Pull complete\\nf00cbf28d025: Pull complete\\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-6: Pulling from 192.168.24.1:8787/rhceph\", \"9a32f102e677: Pulling fs layer\", \"b8aa42cec17a: Pulling fs layer\", \"f00cbf28d025: Pulling fs layer\", \"b8aa42cec17a: Verifying Checksum\", \"b8aa42cec17a: Download complete\", \"9a32f102e677: Verifying Checksum\", \"9a32f102e677: Download complete\", \"f00cbf28d025: Verifying Checksum\", \"f00cbf28d025: Download complete\", \"9a32f102e677: Pull complete\", \"b8aa42cec17a: Pull complete\", \"f00cbf28d025: Pull complete\", \"Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\"]}\n\nTASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189\nThursday 21 June 2018 07:27:06 -0400 (0:00:16.593) 0:00:40.043 ********* \nchanged: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.031216\", \"end\": \"2018-06-21 11:27:07.238627\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-21 11:27:07.207411\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n 
\\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": 
\\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": 
{},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": 
\\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/696ac269be758974f6c72bf29ba66b56c03062d89e599b005ebee5886bb72a9a/diff:/var/lib/docker/overlay2/c36cae282a1b52fedcca6a9bc45844daf9228a3b8a09850e358e68b6dbfb1705/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/07babd0336520c724aa5fb2df6751048795134fe786dad2bd33e1284d3a256eb/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/07babd0336520c724aa5fb2df6751048795134fe786dad2bd33e1284d3a256eb/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/07babd0336520c724aa5fb2df6751048795134fe786dad2bd33e1284d3a256eb/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": 
\\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" 
\\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" 
\\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": 
\\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/696ac269be758974f6c72bf29ba66b56c03062d89e599b005ebee5886bb72a9a/diff:/var/lib/docker/overlay2/c36cae282a1b52fedcca6a9bc45844daf9228a3b8a09850e358e68b6dbfb1705/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/07babd0336520c724aa5fb2df6751048795134fe786dad2bd33e1284d3a256eb/merged\\\",\", \" \\\"UpperDir\\\": 
\\\"/var/lib/docker/overlay2/07babd0336520c724aa5fb2df6751048795134fe786dad2bd33e1284d3a256eb/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/07babd0336520c724aa5fb2df6751048795134fe786dad2bd33e1284d3a256eb/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nThursday 21 June 2018 07:27:06 -0400 (0:00:00.563) 0:00:40.607 ********* \nok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200\nThursday 21 June 2018 07:27:06 -0400 (0:00:00.081) 0:00:40.689 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211\nThursday 21 June 2018 07:27:06 -0400 (0:00:00.054) 0:00:40.743 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222\nThursday 21 June 2018 07:27:07 -0400 (0:00:00.046) 0:00:40.789 ********* 
\nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233\nThursday 21 June 2018 07:27:07 -0400 (0:00:00.044) 0:00:40.834 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244\nThursday 21 June 2018 07:27:07 -0400 (0:00:00.045) 0:00:40.879 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255\nThursday 21 June 2018 07:27:07 -0400 (0:00:00.046) 0:00:40.926 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266\nThursday 21 June 2018 07:27:07 -0400 (0:00:00.043) 0:00:40.969 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : export local ceph dev image] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277\nThursday 21 June 2018 07:27:07 -0400 (0:00:00.053) 0:00:41.023 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : copy ceph dev image file] ***************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285\nThursday 21 June 2018 07:27:07 -0400 (0:00:00.046) 0:00:41.069 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : load ceph dev image] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292\nThursday 21 June 2018 07:27:07 -0400 (0:00:00.042) 0:00:41.112 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove tmp ceph dev image file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297\nThursday 21 June 2018 07:27:07 -0400 (0:00:00.043) 0:00:41.156 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get ceph version] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84\nThursday 21 June 2018 07:27:07 -0400 (0:00:00.043) 0:00:41.199 ********* \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.634430\", \"end\": \"2018-06-21 11:27:09.006065\", \"rc\": 0, \"start\": \"2018-06-21 11:27:08.371635\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}\n\nTASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90\nThursday 21 June 2018 07:27:08 -0400 (0:00:01.169) 0:00:42.369 ********* \nok: [controller-0] 
=> {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release jewel] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2\nThursday 21 June 2018 07:27:08 -0400 (0:00:00.096) 0:00:42.465 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release kraken] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8\nThursday 21 June 2018 07:27:08 -0400 (0:00:00.064) 0:00:42.530 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release luminous] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14\nThursday 21 June 2018 07:27:08 -0400 (0:00:00.068) 0:00:42.598 ********* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release mimic] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20\nThursday 21 June 2018 07:27:08 -0400 (0:00:00.105) 0:00:42.704 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : create bootstrap directories] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2\nThursday 21 June 2018 07:27:08 -0400 (0:00:00.057) 0:00:42.762 ********* \nchanged: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 
64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\n\nTASK [ceph-config : create ceph conf directory] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4\nThursday 21 June 2018 07:27:11 -0400 (0:00:02.363) 0:00:45.125 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\nto see the full traceback, use -vvv", "stdout_lines": ["ansible-playbook 2.5.4", " config file = /usr/share/ceph-ansible/ansible.cfg", " configured module search path = [u'/usr/share/ceph-ansible/library']", " 
ansible python module location = /usr/lib/python2.7/site-packages/ansible", " executable location = /usr/bin/ansible-playbook", " python version = 2.7.5 (default, Feb 20 2018, 09:19:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]", "Using /usr/share/ceph-ansible/ansible.cfg as config file", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/openstack_config.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/create_mds_filesystems.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/selinux.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", 
"statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", 
"statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/non_containerized.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/containerized.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically 
imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rgw/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/pre_requisite.yml", "statically 
imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/start_rbd_mirror.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/configure_mirroring.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/selinux.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/start_docker_rbd_mirror.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/start_restapi.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/copy_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/start_docker_restapi.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", 
"statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml", "", "PLAYBOOK: site-docker.yml.sample ***********************************************", "12 plays in /usr/share/ceph-ansible/site-docker.yml.sample", "", "PLAY [mons,agents,osds,mdss,rgws,nfss,restapis,rbdmirrors,clients,iscsigws,mgrs] ***", "", "TASK [gather facts] ************************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:24", "Thursday 21 June 2018 07:26:26 -0400 (0:00:00.138) 0:00:00.138 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "ok: [compute-0]", "", "TASK [gather and delegate facts] 
***********************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:29", "Thursday 21 June 2018 07:26:29 -0400 (0:00:03.252) 0:00:03.390 ********* ", "ok: [compute-0 -> 192.168.24.8] => (item=controller-0)", "ok: [ceph-0 -> 192.168.24.8] => (item=controller-0)", "ok: [controller-0 -> 192.168.24.8] => (item=controller-0)", "ok: [controller-0 -> 192.168.24.10] => (item=ceph-0)", "ok: [compute-0 -> 192.168.24.10] => (item=ceph-0)", "ok: [ceph-0 -> 192.168.24.10] => (item=ceph-0)", "", "TASK [check if it is atomic host] **********************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:37", "Thursday 21 June 2018 07:26:36 -0400 (0:00:07.048) 0:00:10.439 ********* ", "ok: [compute-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "ok: [ceph-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [set_fact is_atomic] ******************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:44", "Thursday 21 June 2018 07:26:37 -0400 (0:00:00.738) 0:00:11.178 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "ok: [ceph-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "ok: [compute-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "TASK [pull rhceph image] *******************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:65", "Thursday 21 June 2018 07:26:37 -0400 (0:00:00.156) 0:00:11.335 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}", "META: ran handlers", "", "PLAY [mons] ********************************************************************", "META: ran handlers", "", "TASK [set ceph monitor install 'In Progress'] **********************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:75", "Thursday 21 June 2018 07:26:37 -0400 (0:00:00.112) 0:00:11.447 ********* ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mon\": {\"start\": \"20180621072637Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "PLAY [mons] ********************************************************************", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Thursday 21 June 2018 07:26:37 -0400 (0:00:00.162) 0:00:11.610 ********* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.028599\", \"end\": \"2018-06-21 11:26:38.891884\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-21 11:26:38.863285\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Thursday 21 June 2018 07:26:38 -0400 (0:00:00.636) 0:00:12.246 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Thursday 21 June 2018 07:26:38 -0400 (0:00:00.048) 0:00:12.294 
********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Thursday 21 June 2018 07:26:38 -0400 (0:00:00.046) 0:00:12.341 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Thursday 21 June 2018 07:26:38 -0400 (0:00:00.043) 0:00:12.384 ********* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mgr-controller-0\"], \"delta\": \"0:00:00.030365\", \"end\": \"2018-06-21 11:26:39.572421\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-21 11:26:39.542056\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for a rbd mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Thursday 21 June 2018 07:26:39 -0400 (0:00:00.544) 0:00:12.928 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Thursday 21 June 2018 07:26:39 -0400 (0:00:00.045) 0:00:12.974 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mon socket] *****************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Thursday 21 June 2018 07:26:39 -0400 (0:00:00.045) 0:00:13.019 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Thursday 21 June 2018 07:26:39 -0400 (0:00:00.049) 0:00:13.069 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Thursday 21 June 2018 07:26:39 -0400 (0:00:00.045) 0:00:13.115 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph osd socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Thursday 21 June 2018 07:26:39 -0400 (0:00:00.044) 0:00:13.159 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Thursday 21 June 2018 07:26:39 -0400 (0:00:00.043) 0:00:13.203 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Thursday 21 June 2018 07:26:39 -0400 (0:00:00.043) 0:00:13.246 ********* 
", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Thursday 21 June 2018 07:26:39 -0400 (0:00:00.047) 0:00:13.294 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Thursday 21 June 2018 07:26:39 -0400 (0:00:00.046) 0:00:13.340 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Thursday 21 June 2018 07:26:39 -0400 (0:00:00.046) 0:00:13.387 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Thursday 21 June 2018 07:26:39 -0400 (0:00:00.053) 0:00:13.441 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Thursday 21 June 2018 07:26:39 -0400 (0:00:00.047) 0:00:13.488 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if 
exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Thursday 21 June 2018 07:26:39 -0400 (0:00:00.045) 0:00:13.534 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Thursday 21 June 2018 07:26:39 -0400 (0:00:00.045) 0:00:13.580 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Thursday 21 June 2018 07:26:39 -0400 (0:00:00.044) 0:00:13.624 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Thursday 21 June 2018 07:26:39 -0400 (0:00:00.047) 0:00:13.671 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Thursday 21 June 2018 07:26:39 -0400 (0:00:00.045) 0:00:13.717 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Thursday 21 June 
2018 07:26:39 -0400 (0:00:00.042) 0:00:13.760 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Thursday 21 June 2018 07:26:40 -0400 (0:00:00.046) 0:00:13.806 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Thursday 21 June 2018 07:26:40 -0400 (0:00:00.047) 0:00:13.854 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Thursday 21 June 2018 07:26:40 -0400 (0:00:00.062) 0:00:13.917 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Thursday 21 June 2018 07:26:40 -0400 (0:00:00.048) 0:00:13.966 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Thursday 21 June 2018 07:26:40 -0400 (0:00:00.047) 0:00:14.013 ********* ", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : 
set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Thursday 21 June 2018 07:26:40 -0400 (0:00:00.501) 0:00:14.514 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Thursday 21 June 2018 07:26:40 -0400 (0:00:00.073) 0:00:14.588 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Thursday 21 June 2018 07:26:40 -0400 (0:00:00.077) 0:00:14.665 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Thursday 21 June 2018 07:26:40 -0400 (0:00:00.067) 0:00:14.732 ********* ", "ok: [controller-0 -> 192.168.24.8] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] 
********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Thursday 21 June 2018 07:26:41 -0400 (0:00:00.130) 0:00:14.863 ********* ", "ok: [controller-0 -> 192.168.24.8] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"fsid\"], \"delta\": \"0:00:00.032773\", \"end\": \"2018-06-21 11:26:42.150915\", \"failed_when_result\": false, \"msg\": \"non-zero return code\", \"rc\": 1, \"start\": \"2018-06-21 11:26:42.118142\", \"stderr\": \"Error response from daemon: No such container: ceph-mon-controller-0\", \"stderr_lines\": [\"Error response from daemon: No such container: ceph-mon-controller-0\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check if /var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Thursday 21 June 2018 07:26:41 -0400 (0:00:00.646) 0:00:15.509 ********* ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Thursday 21 June 2018 07:26:41 -0400 (0:00:00.186) 0:00:15.696 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] *****", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Thursday 21 June 2018 07:26:41 -0400 (0:00:00.051) 0:00:15.747 ********* ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir\", 
\"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 988}", "", "TASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Thursday 21 June 2018 07:26:42 -0400 (0:00:00.411) 0:00:16.159 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Thursday 21 June 2018 07:26:42 -0400 (0:00:00.167) 0:00:16.327 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85", "Thursday 21 June 2018 07:26:42 -0400 (0:00:00.070) 0:00:16.398 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96", "Thursday 21 June 2018 07:26:42 -0400 (0:00:00.046) 0:00:16.444 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105", "Thursday 21 June 2018 07:26:42 -0400 (0:00:00.049) 0:00:16.494 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117", 
"Thursday 21 June 2018 07:26:42 -0400 (0:00:00.041) 0:00:16.535 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123", "Thursday 21 June 2018 07:26:42 -0400 (0:00:00.043) 0:00:16.579 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"mds_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129", "Thursday 21 June 2018 07:26:42 -0400 (0:00:00.068) 0:00:16.648 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135", "Thursday 21 June 2018 07:26:42 -0400 (0:00:00.040) 0:00:16.688 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Thursday 21 June 2018 07:26:42 -0400 (0:00:00.070) 0:00:16.758 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Thursday 21 June 2018 07:26:43 -0400 (0:00:00.080) 0:00:16.839 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}", "", "TASK [ceph-defaults : resolve device link(s)] 
**********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Thursday 21 June 2018 07:26:43 -0400 (0:00:00.080) 0:00:16.920 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166", "Thursday 21 June 2018 07:26:43 -0400 (0:00:00.048) 0:00:16.968 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175", "Thursday 21 June 2018 07:26:43 -0400 (0:00:00.046) 0:00:17.015 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for Debian based system] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183", "Thursday 21 June 2018 07:26:43 -0400 (0:00:00.048) 0:00:17.063 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for Red Hat based system] **************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Thursday 21 June 2018 07:26:43 -0400 (0:00:00.048) 0:00:17.112 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for Red Hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Thursday 21 June 2018 07:26:43 -0400 (0:00:00.049) 0:00:17.161 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": 
false}", "", "TASK [ceph-defaults : check if selinux is enabled] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Thursday 21 June 2018 07:26:43 -0400 (0:00:00.072) 0:00:17.233 ********* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"getenforce\"], \"delta\": \"0:00:00.003198\", \"end\": \"2018-06-21 11:26:44.393654\", \"rc\": 0, \"start\": \"2018-06-21 11:26:44.390456\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Enforcing\", \"stdout_lines\": [\"Enforcing\"]}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Thursday 21 June 2018 07:26:43 -0400 (0:00:00.511) 0:00:17.745 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Thursday 21 June 2018 07:26:44 -0400 (0:00:00.049) 0:00:17.794 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Thursday 21 June 2018 07:26:44 -0400 (0:00:00.055) 0:00:17.850 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Thursday 21 June 2018 07:26:44 -0400 (0:00:00.044) 0:00:17.895 ********* ", "ok: 
[controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Thursday 21 June 2018 07:26:45 -0400 (0:00:00.941) 0:00:18.836 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Thursday 21 June 2018 07:26:45 -0400 (0:00:00.078) 0:00:18.915 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Thursday 21 June 2018 07:26:45 -0400 (0:00:00.042) 0:00:18.958 ********* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.026935\", \"end\": \"2018-06-21 11:26:46.126835\", \"rc\": 0, \"start\": \"2018-06-21 11:26:46.099900\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 94f4240/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 94f4240/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", 
"Thursday 21 June 2018 07:26:45 -0400 (0:00:00.520) 0:00:19.479 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Thursday 21 June 2018 07:26:45 -0400 (0:00:00.074) 0:00:19.553 ********* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.026949\", \"end\": \"2018-06-21 11:26:46.730191\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-21 11:26:46.703242\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Thursday 21 June 2018 07:26:46 -0400 (0:00:00.529) 0:00:20.083 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Thursday 21 June 2018 07:26:46 -0400 (0:00:00.088) 0:00:20.171 ********* ", "ok: [controller-0] => (item=controller-0) => {\"ansible_facts\": {\"tmp_ceph_mgr_keys\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"changed\": false, \"item\": \"controller-0\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Thursday 21 June 2018 07:26:46 -0400 (0:00:00.133) 0:00:20.305 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_mgr_keys\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Thursday 21 June 2018 07:26:46 -0400 (0:00:00.088) 0:00:20.393 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Thursday 21 June 2018 07:26:46 -0400 (0:00:00.093) 0:00:20.486 ********* ", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": 
\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}", "", "TASK [ceph-docker-common : fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Thursday 21 June 2018 07:26:47 -0400 (0:00:01.261) 0:00:21.748 ********* ", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': 
False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": 
{\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: 
[controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': 
u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, 
u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": 
null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": 
\"/var/lib/mistral/2df872c0-03f8-4f54-928a-3e66a6fe858b/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Thursday 21 June 2018 07:26:48 -0400 (0:00:00.258) 0:00:22.006 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Thursday 21 June 2018 07:26:48 -0400 (0:00:00.040) 0:00:22.046 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Thursday 21 June 2018 07:26:48 -0400 (0:00:00.039) 0:00:22.086 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Thursday 21 June 2018 07:26:48 -0400 (0:00:00.045) 0:00:22.131 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Thursday 21 June 2018 07:26:48 -0400 (0:00:00.043) 0:00:22.174 
********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Thursday 21 June 2018 07:26:48 -0400 (0:00:00.043) 0:00:22.217 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Thursday 21 June 2018 07:26:48 -0400 (0:00:00.048) 0:00:22.265 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Thursday 21 June 2018 07:26:48 -0400 (0:00:00.045) 0:00:22.311 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Thursday 21 June 2018 07:26:48 -0400 (0:00:00.041) 0:00:22.353 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Thursday 21 June 2018 07:26:48 -0400 (0:00:00.048) 0:00:22.401 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] 
*************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Thursday 21 June 2018 07:26:48 -0400 (0:00:00.042) 0:00:22.444 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Thursday 21 June 2018 07:26:48 -0400 (0:00:00.042) 0:00:22.486 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Thursday 21 June 2018 07:26:48 -0400 (0:00:00.049) 0:00:22.535 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Thursday 21 June 2018 07:26:48 -0400 (0:00:00.048) 0:00:22.583 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Thursday 21 June 2018 07:26:48 -0400 (0:00:00.041) 0:00:22.625 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Thursday 21 June 2018 07:26:48 -0400 (0:00:00.041) 0:00:22.666 ********* ", "skipping: 
[controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Thursday 21 June 2018 07:26:48 -0400 (0:00:00.046) 0:00:22.712 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Thursday 21 June 2018 07:26:48 -0400 (0:00:00.043) 0:00:22.756 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Thursday 21 June 2018 07:26:49 -0400 (0:00:00.202) 0:00:22.958 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Thursday 21 June 2018 07:26:49 -0400 (0:00:00.044) 0:00:23.003 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Thursday 21 June 2018 07:26:49 -0400 (0:00:00.047) 0:00:23.051 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Thursday 21 June 2018 07:26:49 -0400 (0:00:00.042) 0:00:23.094 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Thursday 21 June 2018 07:26:49 -0400 (0:00:00.043) 0:00:23.137 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Thursday 21 June 2018 07:26:49 -0400 (0:00:00.049) 0:00:23.187 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Thursday 21 June 2018 07:26:49 -0400 (0:00:00.044) 0:00:23.232 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Thursday 21 June 2018 07:26:49 -0400 (0:00:00.042) 0:00:23.274 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Thursday 21 June 2018 07:26:49 -0400 (0:00:00.041) 0:00:23.316 ********* ", "skipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Thursday 21 June 2018 07:26:49 -0400 (0:00:00.045) 0:00:23.361 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Thursday 21 June 2018 07:26:49 -0400 (0:00:00.043) 0:00:23.405 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Thursday 21 June 2018 07:26:49 -0400 (0:00:00.044) 0:00:23.450 ********* ", "ok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:16.073521\", \"end\": \"2018-06-21 11:27:06.680953\", \"rc\": 0, \"start\": \"2018-06-21 11:26:50.607432\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\\n3-6: Pulling from 192.168.24.1:8787/rhceph\\n9a32f102e677: Pulling fs layer\\nb8aa42cec17a: Pulling fs layer\\nf00cbf28d025: Pulling fs layer\\nb8aa42cec17a: Verifying Checksum\\nb8aa42cec17a: Download complete\\n9a32f102e677: Verifying Checksum\\n9a32f102e677: Download complete\\nf00cbf28d025: Verifying Checksum\\nf00cbf28d025: Download complete\\n9a32f102e677: Pull complete\\nb8aa42cec17a: Pull complete\\nf00cbf28d025: Pull complete\\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-6: Pulling from 192.168.24.1:8787/rhceph\", \"9a32f102e677: Pulling fs layer\", \"b8aa42cec17a: Pulling fs layer\", \"f00cbf28d025: Pulling fs layer\", \"b8aa42cec17a: Verifying Checksum\", \"b8aa42cec17a: Download complete\", \"9a32f102e677: Verifying Checksum\", \"9a32f102e677: Download complete\", \"f00cbf28d025: Verifying Checksum\", \"f00cbf28d025: Download complete\", \"9a32f102e677: Pull complete\", \"b8aa42cec17a: Pull complete\", \"f00cbf28d025: Pull complete\", \"Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Thursday 21 June 2018 07:27:06 -0400 (0:00:16.593) 0:00:40.043 ********* ", "changed: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.031216\", \"end\": \"2018-06-21 11:27:07.238627\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-21 11:27:07.207411\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": 
\\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n 
\\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": 
{},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 
7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/696ac269be758974f6c72bf29ba66b56c03062d89e599b005ebee5886bb72a9a/diff:/var/lib/docker/overlay2/c36cae282a1b52fedcca6a9bc45844daf9228a3b8a09850e358e68b6dbfb1705/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/07babd0336520c724aa5fb2df6751048795134fe786dad2bd33e1284d3a256eb/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/07babd0336520c724aa5fb2df6751048795134fe786dad2bd33e1284d3a256eb/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/07babd0336520c724aa5fb2df6751048795134fe786dad2bd33e1284d3a256eb/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" 
{\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": 
\\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu 
<evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" 
\\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/696ac269be758974f6c72bf29ba66b56c03062d89e599b005ebee5886bb72a9a/diff:/var/lib/docker/overlay2/c36cae282a1b52fedcca6a9bc45844daf9228a3b8a09850e358e68b6dbfb1705/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/07babd0336520c724aa5fb2df6751048795134fe786dad2bd33e1284d3a256eb/merged\\\",\", \" \\\"UpperDir\\\": 
\\\"/var/lib/docker/overlay2/07babd0336520c724aa5fb2df6751048795134fe786dad2bd33e1284d3a256eb/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/07babd0336520c724aa5fb2df6751048795134fe786dad2bd33e1284d3a256eb/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Thursday 21 June 2018 07:27:06 -0400 (0:00:00.563) 0:00:40.607 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Thursday 21 June 2018 07:27:06 -0400 (0:00:00.081) 0:00:40.689 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Thursday 21 June 2018 07:27:06 -0400 (0:00:00.054) 0:00:40.743 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Thursday 21 June 2018 07:27:07 -0400 
(0:00:00.046) 0:00:40.789 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Thursday 21 June 2018 07:27:07 -0400 (0:00:00.044) 0:00:40.834 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Thursday 21 June 2018 07:27:07 -0400 (0:00:00.045) 0:00:40.879 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Thursday 21 June 2018 07:27:07 -0400 (0:00:00.046) 0:00:40.926 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Thursday 21 June 2018 07:27:07 -0400 (0:00:00.043) 0:00:40.969 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Thursday 21 June 2018 07:27:07 -0400 (0:00:00.053) 0:00:41.023 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] 
***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Thursday 21 June 2018 07:27:07 -0400 (0:00:00.046) 0:00:41.069 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Thursday 21 June 2018 07:27:07 -0400 (0:00:00.042) 0:00:41.112 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Thursday 21 June 2018 07:27:07 -0400 (0:00:00.043) 0:00:41.156 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Thursday 21 June 2018 07:27:07 -0400 (0:00:00.043) 0:00:41.199 ********* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.634430\", \"end\": \"2018-06-21 11:27:09.006065\", \"rc\": 0, \"start\": \"2018-06-21 11:27:08.371635\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Thursday 
21 June 2018 07:27:08 -0400 (0:00:01.169) 0:00:42.369 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Thursday 21 June 2018 07:27:08 -0400 (0:00:00.096) 0:00:42.465 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Thursday 21 June 2018 07:27:08 -0400 (0:00:00.064) 0:00:42.530 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Thursday 21 June 2018 07:27:08 -0400 (0:00:00.068) 0:00:42.598 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Thursday 21 June 2018 07:27:08 -0400 (0:00:00.105) 0:00:42.704 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Thursday 21 June 2018 07:27:08 -0400 (0:00:00.057) 0:00:42.762 ********* ", "changed: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": 
\"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Thursday 21 June 2018 07:27:11 -0400 (0:00:02.363) 0:00:45.125 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "to see the full traceback, use -vvv"]} >2018-06-21 
07:27:11,553 p=23396 u=mistral | NO MORE HOSTS LEFT ************************************************************* >2018-06-21 07:27:11,554 p=23396 u=mistral | PLAY RECAP ********************************************************************* >2018-06-21 07:27:11,554 p=23396 u=mistral | ceph-0 : ok=87 changed=41 unreachable=0 failed=0 >2018-06-21 07:27:11,554 p=23396 u=mistral | compute-0 : ok=105 changed=43 unreachable=0 failed=0 >2018-06-21 07:27:11,554 p=23396 u=mistral | controller-0 : ok=146 changed=44 unreachable=0 failed=0 >2018-06-21 07:27:11,554 p=23396 u=mistral | undercloud : ok=20 changed=9 unreachable=0 failed=1